Seeed Studio has announced the launch of the Local Voice Chatbot, an NVIDIA Riva- and LLaMa-2-based large language model (LLM) chatbot with voice recognition capabilities, running entirely locally on NVIDIA Jetson devices, including the company's own reComputer range.
"In a world where artificial intelligence is evolving at an unprecedented pace, the mode of human-computer interaction has taken a revolutionary turn towards voice interaction. This shift is particularly evident in smart homes, personal assistants, and customer service support, where the demand for seamless and responsive voice chatbots is on the rise," claims Seeed Studio's Kunzang Cheki.
"However, the reliance on cloud-based solutions has led to concerns related to data privacy and network latency. In response to these challenges, we present an innovative Local Voice Chatbot project that operates locally, addressing privacy issues and ensuring swift responses."
The Seeed Local Voice Chatbot builds atop two existing projects: NVIDIA's Riva, a hardware-accelerated automatic speech recognition (ASR) and speech synthesis engine, and Meta AI's LLaMa-2 large language model (LLM). The idea is simple: speech is picked up by a microphone and converted to text by Riva's ASR; the text is fed to LLaMa-2, which generates a plausible text-based response; and the response is then fed through the Riva text-to-speech engine to render it audible.
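To make that microphone-to-speaker flow concrete, here is a minimal Python sketch of the loop. It is an illustration under stated assumptions, not the project's actual code: the `riva_asr_transcribe`, `llama2_generate`, and `riva_tts_speak` helpers are hypothetical stand-ins for calls into a locally running Riva server and LLaMa-2 runtime, and capturing audio with the `sounddevice` package is likewise an assumption.

```python
# Minimal sketch of the capture -> ASR -> LLM -> TTS loop described above.
# The riva_*/llama2_* helpers are hypothetical placeholders for calls into a
# locally running Riva server and LLaMa-2 runtime; they are not the project's code.

import sounddevice as sd  # assumption: any microphone-capture library would do

SAMPLE_RATE = 16_000   # mono 16 kHz PCM is a typical ASR input format
RECORD_SECONDS = 5

def record_from_microphone() -> bytes:
    """Capture a short utterance from the default microphone as raw 16-bit PCM."""
    frames = sd.rec(int(RECORD_SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                    channels=1, dtype="int16")
    sd.wait()  # block until recording finishes
    return frames.tobytes()

def riva_asr_transcribe(pcm_audio: bytes) -> str:
    """Placeholder: send audio to the local Riva ASR service and return the transcript."""
    return "<transcript from Riva ASR>"

def llama2_generate(prompt: str) -> str:
    """Placeholder: query the locally hosted LLaMa-2 model for a text reply."""
    return f"<LLaMa-2 reply to: {prompt}>"

def riva_tts_speak(text: str) -> None:
    """Placeholder: synthesize the reply with Riva TTS and play it back."""
    print(f"[speaking] {text}")

if __name__ == "__main__":
    audio = record_from_microphone()        # 1. capture speech
    question = riva_asr_transcribe(audio)   # 2. speech -> text (Riva ASR)
    reply = llama2_generate(question)       # 3. text -> response (LLaMa-2)
    riva_tts_speak(reply)                   # 4. response -> audio (Riva TTS)
```

Because every stage of the pipeline runs on the Jetson itself, neither the audio nor the generated text ever has to leave the device.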
"Traditional voice chatbots heavily rely on cloud computing services, raising legitimate concerns about data privacy and network latency. Our project focuses on deploying a voice chatbot that operates entirely on local hardware, mitigating privacy concerns and offering a faster response time," Cheki claims. "The overall architecture ensures a secure, private, and fast-responding voice interaction system without relying on cloud services, addressing data privacy and network latency concerns."
The LLM runs locally on-device, meaning no rate limits or subscriptions are required. (📷: Seeed Studio)
Running everything locally does come at a cost, of course: while the software itself is compatible with any model of NVIDIA Jetson, the memory-hungry LLM won't work properly on anything with less than 16GB of RAM, meaning the pocket-friendly Jetson Nano range is shut out of the project. "I completed all experiments using [a] Jetson AGX Orin 32GB H01 Kit," Cheki notes.
The project is documented in full on the Seeed Studio wiki.