ChatGPT as a voice-controlled assistance system in the electronics laboratory?

GroupDIY Audio Forum

rock soderstrom

AI-based assistance systems are on the rise, and I have no doubt that they are the future. Currently, I am conducting a practical test with the ChatGPT app, which can now also be voice-controlled.

This initially works flawlessly; ChatGPT understands me without any problems, and even the German language poses no obstacle. Great. It's quite something to be able to get information about various tube data, such as maximum plate voltage or grid resistor, without having to touch the keyboard while soldering. It feels like the future.

Unfortunately, there are still issues with the quality and reliability of information. At first glance, ChatGPT provides quite plausible information, but unfortunately, most of it is simply incorrect!

After two days of testing, I would say that 80% of the technical information is faulty. What makes matters worse is the fact that ChatGPT lies convincingly and apologizes very sweetly when you point out the errors in the system.

What are your experiences? At the moment, I find it unusable for the mentioned purpose, but I think this is a temporary problem that will change. Maybe I need to set up my own ChatGPT bot trained specifically for this application. Does anyone have experience with that?

PS: translated with ChatGPT 😅
 
I tested it a couple months back (in another thread) but the bot's math was wrong.
 
I tested it a couple months back (in another thread) but the bot's math was wrong.
Yes, math is not its strongest feature either. What I find very surprising is that when you point out an error to the system, it often then provides the correct answer, even though I haven't corrected anything. Giving wrong answers when it knows the right one is active lying! :devilish: Sometimes, though, the system continues to "correct" itself incorrectly, and you can tell that the underlying information is insufficient or wrong. The bot starts to improvise... LOL.😅
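Since arithmetic is the weak spot, one workaround for lab use is to let the bot explain the formula and run the numbers yourself; a couple of lines of Python are enough to check, say, a plate dissipation figure (the operating point below is purely illustrative, not taken from any datasheet):

```python
# Sanity-check the bot's arithmetic locally instead of trusting it.
# The operating point is illustrative only, not from a datasheet.
plate_voltage_v = 250.0    # anode voltage in volts
plate_current_a = 0.0012   # anode current in amps (1.2 mA)

dissipation_w = plate_voltage_v * plate_current_a
print(f"Plate dissipation: {dissipation_w * 1000:.1f} mW")  # -> 300.0 mW
```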
 
The editorial in this month's Linux Magazine was on this very topic, citing a recent article in the Washington Post. To greatly précis it, it covers the nature of the errors made by Microsoft's effort, called Copilot. The reason Copilot was chosen is that it can output its sources along with its answers, so you can verify its responses. The article points out that GPT-4 produces similar responses (but not sources).

The topic examined was basic questions about political elections. Apparently Bing's AI gave inaccurate answers to one out of every three questions about candidates, polls, scandals, and voting in a pair of recent elections in Switzerland and Germany. The errors cited include giving incorrect dates for elections, misstating poll numbers and failing to mention when a candidate dropped out of the race. The study also mentioned cases of the chatbot "inventing controversies" about a candidate.

Inaccuracies were also language-dependent. Questions in German were answered wrongly 37% of the time, whereas answers to questions asked in English were wrong only 20% of the time.

Still a loooong way to go.

Cheers

Ian
 
After two days of testing, I would say that 80% of the technical information is faulty. What makes matters worse is the fact that ChatGPT lies convincingly and apologizes very sweetly when you point out the errors in the system.

Let's hope they use the feedback you gave to correct errors...

In fact, I'm almost certain they do exactly that. It's one of the pillars of AI.
 
In fact, I'm almost certain they do exactly that. It's one of the pillars of AI.
Absolutely right! But that raises the question of who, or which authority, is trusted more when it comes to the quality and accuracy of the information. Evaluating information is certainly a major task for AI systems during learning. A purely numerical assessment does not necessarily have to be correct. Just because many people perceive something as true does not necessarily mean that it is correct.

Human history is full of such errors.

I find the current information and explanations from ChatGPT on more general topics quite good. Where it often fails, predictably, is on hard facts such as technical data, which really surprises me.

I'm going to start a little experiment with tube data sheets, which I'll provide manually to my custom chatbot system. Let's see if I can increase the response quality in this small and manageable subject area. I'm curious, very interesting topic.
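A minimal sketch of that experiment, assuming the official `openai` Python client: paste the datasheet text into the system prompt and forbid the bot from answering outside it. The model name and the datasheet excerpt are placeholders, not verified values:

```python
# Minimal sketch: ground the bot's answers in hand-supplied datasheet text.
# Assumes the official `openai` Python client (v1 API) and an OPENAI_API_KEY
# in the environment; the excerpt below is a placeholder, not real data.
from openai import OpenAI

client = OpenAI()

datasheet_excerpt = """
ECC83 / 12AX7 -- placeholder values, substitute the real datasheet text:
Max. plate voltage: ...
Max. plate dissipation: ...
Max. grid resistance: ...
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat model works for this sketch
    messages=[
        {
            "role": "system",
            "content": "Answer ONLY from the datasheet excerpt below. "
                       "If a value is not in the excerpt, say so instead "
                       "of guessing.\n" + datasheet_excerpt,
        },
        {"role": "user",
         "content": "What is the maximum plate voltage of the ECC83?"},
    ],
)
print(response.choices[0].message.content)
```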
The editorial in this month's Linux Magazine was on this very topic, citing a recent article in the Washington Post. To greatly précis it, it covers the nature of the errors made by Microsoft's effort, called Copilot. The reason Copilot was chosen is that it can output its sources along with its answers, so you can verify its responses. The article points out that GPT-4 produces similar responses (but not sources).

The topic examined was basic questions about political elections. Apparently Bing's AI gave inaccurate answers to one out of every three questions about candidates, polls, scandals, and voting in a pair of recent elections in Switzerland and Germany. The errors cited include giving incorrect dates for elections, misstating poll numbers and failing to mention when a candidate dropped out of the race. The study also mentioned cases of the chatbot "inventing controversies" about a candidate.

Inaccuracies were also language-dependent. Questions in German were answered wrongly 37% of the time, whereas answers to questions asked in English were wrong only 20% of the time.
Very interesting.

I think the varying response quality between German and English certainly has something to do with the sheer information density on the web. There was simply more information available in English than in German for the learning model.

ChatGPT's dialogue capability, even in German, is really high. During my tests, truly incredible (spoken!) dialogues emerged, very human-like.
Still a loooong way to go.
I agree, but the whole AI thing is developing extremely quickly. This also involves dangers, not just "Skynet" but also on a smaller scale, as the editorial quoted by Ian points out. It's not nice when an AI system throws a fabricated scandal at you on the web! :devilish:
 
All of this AI stuff ignores many other "elephants in the room". One of my fave guys on Youtube is an attorney who discusses things like Automotive Lemon Laws, but often uses his logical mind to look at similar problems with consumer products.

Today, his podcast was about a pending class-action lawsuit about LG refrigerators that last only a year or two. It's interesting on that level, but if you aren't all that interested....go to 11:00 in his video and he has an excellent dissertation about the insanity of over complication in something as mundane as home appliances. "Reboot the washing machine??"

And we are gonna believe something spewing from an AI bot for accuracy???



Bri
 
All of this AI "crap" ignores many other "elephants in the room". One of my fave guys on Youtube is an attorney who discusses things like Automotive Lemon Laws, but often uses his logical mind to look at similar problems with consumer products.

Today, his podcast was about a pending class-action lawsuit about LG refrigerators that last only a year or two. It's interesting on that level, but if you aren't all that interested....go to 11:00 in his video and he has an excellent dissertation about the insanity of over complication in something as mundane as home appliances. "Reboot the washing machine??"
Sorry Brian but this has nothing to do with the thread topic. Planned obsolescence is vile and certainly a case for the law, but has nothing to do with AI chatbots. Please discuss this in a dedicated thread!
And we are gonna believe something spewing from an AI bot for accuracy???
This sentence will make people smile in 10 years' time. Guaranteed.
 
My understanding is that ChatGPT is not made for all tasks.
Certainly not specific computational and physics-based tasks. Or seeing into the future, hippie things...

But it could prob help you scan the net for articles where a certain tube circuit is discussed, letting you do the detailed work afterwards.

My university has just accepted the use of chatbots as long as you state that they have been used in your work. Also with the warning that the bots tend to lie...

I have not used them yet though...
Like to think for myself.
 
@rock soderstrom I didn't mean to derail your thread; I was using the "reboot the washing machine" concept to express my distrust of the "It's New! It's better! It will change the world! Forget everything you have ever known!" mindset.

I can't see paying $20/month for something I cannot trust further than I can throw my insanely complicated Honda Odyssey minivan.

Bri
 
@rock soderstrom I didn't mean to derail your thread; I was using the "reboot the washing machine" concept to express my distrust of the "It's New! It's better! It will change the world! Forget everything you have ever known!" mindset.
Brian, I intentionally opened this thread here in the lab and not in the brewery. It's about the actual work with a chatbot system as an assistant in the lab and the experience gained with it.

I am also interested in creating custom chatbot systems for this task and hope to exchange experiences with other members on this topic.

This thread is not intended to address your fundamental rejection of this modern technology, nor "everything used to be better" thinking.

Please open a separate thread wherever you want, but this one should be about the chatbot application and not your Honda. ;)

BTW, I use the free version of ChatGPT.
 
I agree, but the whole AI thing is developing extremely quickly. This also involves dangers, not just "Skynet" but also on a smaller scale, as the editorial quoted by Ian points out. It's not nice when an AI system throws a fabricated scandal at you on the web! :devilish:
I am not so sure that it is developing quickly. Today's AI is only possible because of the currently available computing power, memory and datasets. To quote the concluding paragraph of the Linux Magazine editorial:

"The AI industry made surprisingly little progress for years and slow walked through most of its history before the recent breakthroughs that led to the current generation. It is possible we'll need to wait for another breakthrough to make an incremental step, and in the meantime we could do a lot of damage by encouraging people to put their trust in all the bots that are currently getting hyped in the press"

Back in the 80s I worked on a voice recognition system that ran on an 8 bit Intel micro. Today's equivalents have a much wider vocabulary and run much faster but that is solely due to the improved hardware - the underlying technology has changed little.

Cheers

Ian
 
My understanding is that ChatGPT is not made for all tasks.
Certainly not specific computational and physics-based tasks.
This point is not unimportant, hence my intention to customise the existing ChatGPT bot with my own information specifications.

I guess only then will the real possibilities of this type of system in the lab become apparent. At least at the moment.
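One simple way to do that customisation without any actual training is a retrieval step: search the supplied datasheets for the section that best matches the question and paste only that into the prompt. A toy sketch in plain Python; the file name, layout (sections separated by blank lines), and word-overlap scoring are all assumptions for illustration:

```python
# Toy retrieval step: pick the datasheet section with the largest word
# overlap with the question, then hand only that section to the chat model.
# File name and layout are assumed for illustration.
from pathlib import Path

def best_section(question: str, sections: list[str]) -> str:
    """Return the section sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(sections, key=lambda s: len(q_words & set(s.lower().split())))

text = Path("datasheets/ecc83.txt").read_text(encoding="utf-8")
sections = [s for s in text.split("\n\n") if s.strip()]

print(best_section("What is the maximum plate voltage?", sections))
```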
Like to think for myself.
I see such systems as a supplement or an assistant. Similar to a web search, but faster and more dialogue-based, which I like. I don't see any competition with my own thinking.
 
OH! I thought ChatGPT was a monthly subscription.

I also feel insulted when you say:

"This thread is not intended to address your fundamental rejection of this modern technology, nor "everything used to be better" thinking and MAGA."

Bri
 
OH! I thought ChatGPT was a monthly subscription.
There is also a paid Pro version, but I haven't needed it yet.
I also feel insulted when you say:

"This thread is not intended to address your fundamental rejection of this modern technology, nor "everything used to be better" thinking and MAGA."
I don't want to attack you personally, but that's exactly what I'm thinking at the moment. I asked you in my first reply not to derail this thread; you ignored that, and now we're talking about your feelings.

If you want to talk to me about it, send me a PM.
 
I am not so sure that it is developing quickly. Today's AI is only possible because of the currently available computing power, memory and datasets.
I am definitely not an expert, but I have been observing developments in this area for quite some time, partly from a professional perspective.

My impression is that disruptive changes are taking place here in real time. The pace is breathtaking and has increased noticeably in recent times.
Back in the 80s I worked on a voice recognition system that ran on an 8 bit Intel micro. Today's equivalents have a much wider vocabulary and run much faster but that is solely due to the improved hardware - the underlying technology has changed little.
I don't share your opinion on this point; it seems to me that at the end of the last decade an evolution also took place at the "algorithmic" level. Computing speed is certainly a driver of this evolution, but I think there is more to it than that.
 
Recent changes to YouTube's suggestion algo make me suspect they are using some form of AI in it. It's just hard to verify without any background info about it.

The idea came from the fact that it recently started suggesting subjects I've never searched for on YouTube, starting with complete movies. At least, that's when I started noticing. Then it suddenly found fungi as a suggestion.

It is suddenly getting info from other sources.

I suspect even sales platforms like Amazon will be using AI shortly. I know they've been experimenting with it. In fact, I can't imagine a business that's not interested in it somehow. After all, ignoring it could be deadly while your competitor uses it to get an edge.
 
I learned from PRR that The Lab is for fixing and The Drawing Room is for designing.
Just search for something, on any search engine, and you get dubious answers at the top, after the paid placements. "The Algorithm" is not really tweakable, even in AI. You filter for date, because who wants the answer to a current software question from 2014? You click on a potential hit, but it is useless, and your (or your bot's) click just ranked the useless hit higher. Then there are the copycat sites, where the same kernel of half-information, regarding Word or Excel for instance, is copied onto other "solution" sites. An elegant prompt still has to deal with a lot of clams (jazz horns ref.).
I've been experimenting with a smart prompter, and I encounter this GIGO sitch all the time. So far it is quicker for me to do it myself, defeating the purpose.
I almost got a good cartoon thylacine that looks like Bowie, so I will continue to the next round.
Mike
 
For most programming work it is a really great assistant; as long as you don't use absolutely cutting-edge libraries it doesn't know about, productivity definitely increases. Many development tools have already integrated it directly, so you don't even have to switch over to the browser and interrupt the workflow.

Not a surprise: programming code is obviously very language-like in nature, actually much simpler, following strict rules.
 
AI is statistics. It can come up with something that is "plausible", but it doesn't think. It doesn't understand. Everything apart from highly limited tasks in strictly controllable circumstances is out of range for "AI", which is not what it claims to be. See for example AI-generated pictures: trouble with eyes and fingers, because it does not understand the concept of fingers, or how many there should be.
As somebody who spent 20 years in R&D wrote: they tried to reverse-engineer the simplest known nervous system on earth (it was from a worm). They failed. It is mostly marketing BS, and once AI is pressed to give reliable results for money under real-world conditions, the bubble will burst. Self-driving cars? That is a development the military pushed into civilian tech to siphon money off. Once you put 5G every 100 yards on the roadside you might try it in good weather without much traffic, animals or the wind blowing dust/leaves across the road, but apart from that it fails. For more info on that, read the blog of Cory Doctorow. If they could build it, the military would use it by now.
The dangers and problems lie completely elsewhere.
pluralistic.net/2024/01/11/robots-stole-my-jerb/
 
