Making my own real-life Jarvis AI
I’m a huge Marvel fan.
It all started with Spider-Man. His fluid physical ability and super smarts captivated me. My fascination grew as I discovered more heroes like Batman and Dr. Strange. But my all-time favorite is Tony Stark, better known as Iron Man, whose intellect and lack of supernatural powers made a lasting impression. What impressed me most was the way Tony was depicted in his lab, working with future tech to become a hero, and specifically his companionship with and reliance on his software program Jarvis.
Those moments with Jarvis were my favorite parts of the movies because some invisible force that only Tony Stark could command would essentially prepare him for any problem. With ChatGPT and other large language models now released, I was able to experiment with a Jarvis-like setup and eventually designed a data flow that could do something similar.
My plan was to combine a large language model with a speech-to-text and a text-to-speech component. Using these components, I could create a system I never had to touch physically but could get information from at any time, rain, hail, or shine. Similar to Jarvis.
Enter GPT4All
During my many hours of vast net diving, I stumbled upon GPT4All. GPT4All is a large language model you can run locally, similar to ChatGPT, except that GPT4All is free. This little fact, in addition to Python bindings for the program, gave me the perfect opportunity to create my Jarvis.
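For a sense of what that looks like, here is a minimal sketch using the GPT4All Python bindings. The model file name is just an example of one you might have downloaded, and max_tokens is the response-length setting I mention later.

```python
# A minimal sketch using the gpt4all Python bindings.
# The model file name below is only an example; use whichever model you downloaded.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # loading the model is the slow part

# max_tokens caps how long the reply can be
answer = model.generate("Tell me a joke", max_tokens=100)
print(answer)
```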
Because I already had prototypes for a simple command AI, I had the speech-to-text and text-to-speech components. All I needed was a way to run GPT4All continuously. My first thought was to run my question or query through a subprocess and wait for the results.
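Roughly, that first idea would have looked like the sketch below, where ask_gpt4all.py is a hypothetical one-shot script that loads the model, answers a single prompt, and exits.

```python
# Naive approach: spawn a fresh process for every question.
# "ask_gpt4all.py" is a hypothetical one-shot script that loads the model,
# answers sys.argv[1], and exits, so the model has to reload on every call.
import subprocess
import sys

def ask(question: str) -> str:
    result = subprocess.run(
        [sys.executable, "ask_gpt4all.py", question],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip()

print(ask("Tell me a joke"))  # pays the full model startup cost every single time
```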
I later found that by the time that sequence finished I’d basically be an old man: I lost a good 20 seconds to model startup on every run. Instead, I decided to use a client-server architecture, but locally. My client program would do speech recognition and send the results to the local server (on the same machine). The local server (which needs to be running first) then takes the speech command and asks the model. The model returns its results, rather quickly I might add, as speech, thanks to the text-to-speech component.
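Here is a rough sketch of that setup, not my exact code. It assumes the gpt4all bindings plus the speech_recognition and pyttsx3 packages for the speech-to-text and text-to-speech parts; the port number and model file name are arbitrary choices for illustration.

```python
# server.py -- load the model once, then answer questions over a local socket forever.
import socket
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")   # the 20-second hit happens once, here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("127.0.0.1", 5005))
    server.listen(1)
    while True:
        conn, _ = server.accept()
        with conn:
            question = conn.recv(4096).decode("utf-8")
            answer = model.generate(question, max_tokens=200)
            conn.sendall(answer.encode("utf-8"))
```

The client then only has to listen, send the text over, and speak the reply:

```python
# client.py -- turn speech into text, ask the local server, speak the answer.
import socket
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as mic:                      # speech-to-text
    recognizer.adjust_for_ambient_noise(mic)
    audio = recognizer.listen(mic)
question = recognizer.recognize_google(audio)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 5005))
    client.sendall(question.encode("utf-8"))
    answer = client.recv(4096).decode("utf-8")

tts.say(answer)                                   # text-to-speech
tts.runAndWait()
```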
I began to put code together but became stuck, so I asked GPT4All for some coding assistance, and it supplied me with starter code for my client-server architecture. After booting up my finished code, I asked the model things such as…
Me: “Tell me a joke”
GPT4All: “Why did the tomato turn red?”
“Because it saw the salad dressing”
Me: “What are you? What is your purpose?”
GPT4All: “I am a machine learning model, and do not have the ability to reflect on my purpose or existence. My purpose is solely to assist with natural language processing tasks, such as generating text based on given prompts.”
Me: “What is the atomic number of fluorine?”
GPT4All: “The atomic number of fluorine (F) on the periodic table is 9.”
Me: “Tell me about Michael Jackson”
GPT4All: “Michael Jackson was an American singer, songwriter, dancer, and philanthropist. He is known as the King of Pop due to his significant contributions to popular music and culture…”
I asked it many more questions on topics such as medical conditions and symptoms, physics, history, chemistry, and biology. Not only did it give acceptable answers, it also handled follow-up questions about terms within its responses.
Also, my responses were manually capped at a set number of tokens. If I want longer responses, I’ll just increase that setting.
In summary, my overall satisfaction with the implemented system is a solid 9/10. The only way this model could have scored perfectly for me is if it were mobile, could hold a real conversation, and had flawless text-to-speech and speech-to-text components. Hopefully, in the near future, I’ll be able to extend this project further or implement it in my own way.