ChatGPT is pretty amazing


ach

Active member
GDIY Supporter
Joined
Jan 18, 2021
Messages
28
Location
Canada
In my day job as a data systems engineer, I have been exploring ChatGPT to help find information and write quick code scaffolds. Today I was revisiting my knowledge of audio transformers and gave ChatGPT a go. The results were pretty amazing:

Prompt:
Have you read the Jensen transformers book 'Audio Transformers' by Bill Whitlock?

Response:
As an AI language model, I don't "read" books in the traditional sense, but I have access to a vast amount of information, including knowledge about various books and their contents up to my knowledge cutoff date in September 2021. I am familiar with the book "Audio Transformers" by Bill Whitlock, which was published by Jensen Transformers.
The book is a comprehensive guide to audio transformers, covering topics such as transformer theory, specifications, applications, and design considerations. It is highly regarded by audio professionals and enthusiasts alike for its in-depth exploration of transformer technology and its role in audio equipment.
If you have any questions related to the book or audio transformers, feel free to ask, and I will do my best to help you.

Prompt:
Based on information in the book, how is the inductive reactance of a transformer calculated when the only information available is the primary and secondary impedances, the primary and secondary DC resistances, and the -3 dB corner frequency?

Response:
Based on the information given and considering the context of the book "Audio Transformers" by Bill Whitlock, we can approximate the inductive reactance of a transformer using the following steps:

1. Calculate the turns ratio (n) using the square root of the ratio of secondary impedance (Z2) to primary impedance (Z1):
n = √(Z2 / Z1)

2. Calculate the inductive reactance (X_L1) at the -3 dB corner frequency (f) for the primary winding:
X_L1 = 2 * π * f * L1
Since the -3 dB corner frequency is given, we can rearrange the formula to solve for L1 (primary inductance):
L1 = X_L1 / (2 * π * f)

3. Calculate the primary inductive reactance (X_L1) by subtracting the primary DC resistance (R1) from the primary impedance (Z1):
X_L1 = √(Z1^2 - R1^2)

4. Calculate the primary inductance (L1) using the formula from step 2:
L1 = X_L1 / (2 * π * f)

5. Calculate the secondary inductive reactance (X_L2) by subtracting the secondary DC resistance (R2) from the secondary impedance (Z2):
X_L2 = √(Z2^2 - R2^2)

6. Calculate the secondary inductance (L2) using the turns ratio (n) and primary inductance (L1):
L2 = L1 * n^2

Now, we have the primary and secondary inductances (L1 and L2) and their corresponding inductive reactances (X_L1 and X_L2).

Please note that these calculations are based on approximations and assumptions, as real-world transformers may have additional factors that influence their performance, such as core losses, leakage inductance, and parasitic capacitance. These calculations provide a starting point for understanding the behavior of the transformer but may not fully capture all the nuances of a specific transformer's performance.
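
For anyone who wants to sanity-check that arithmetic, the same steps condense to a few lines of JavaScript. This is my own sketch, not part of ChatGPT's reply, and the example values at the bottom are arbitrary placeholders rather than anything from the book:

// Sketch of the steps above; example values are arbitrary placeholders.
function transformerEstimate(Z1, Z2, R1, R2, f) {
  const n   = Math.sqrt(Z2 / Z1);            // step 1: turns ratio, n = sqrt(Z2 / Z1)
  const XL1 = Math.sqrt(Z1 *1 * Z1 - R1 * R1);  // step 3: X_L1 = sqrt(Z1^2 - R1^2)
  const L1  = XL1 / (2 * Math.PI * f);       // step 4: L1 = X_L1 / (2 * pi * f)
  const XL2 = Math.sqrt(Z2 * Z2 - R2 * R2);  // step 5: X_L2 = sqrt(Z2^2 - R2^2)
  const L2  = L1 * n * n;                    // step 6: L2 = L1 * n^2
  return { n, XL1, L1, XL2, L2 };
}

// Example: 600 ohm : 600 ohm impedances, 50 ohm winding resistances, 20 Hz corner frequency.
console.log(transformerEstimate(600, 600, 50, 50, 20));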

Prompt:
Could you write a simple HTML page that takes each initial variable as an input in a form, and calculates each of the steps above using embedded JavaScript? Each step should be labeled with its formula.

Response:
Certainly! Here's a simple HTML page with a form to input the initial variables, and embedded JavaScript to perform the calculations as described in the previous steps:

ChatGPT then proceeded to spit out a perfectly functional transformer calculator in HTML and JavaScript:
[attached screenshot of the generated calculator page]
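
Since the attachment isn't reproduced here, a rough reconstruction of that kind of page (my own sketch following the steps above, not ChatGPT's verbatim output) would look something like this:

<!DOCTYPE html>
<html>
<body>
  <h3>Transformer inductance estimate</h3>
  <!-- Inputs for the five starting values -->
  Z1, primary impedance (ohms): <input id="z1" value="600"><br>
  Z2, secondary impedance (ohms): <input id="z2" value="600"><br>
  R1, primary DC resistance (ohms): <input id="r1" value="50"><br>
  R2, secondary DC resistance (ohms): <input id="r2" value="50"><br>
  f, -3 dB corner frequency (Hz): <input id="f" value="20"><br>
  <button onclick="calc()">Calculate</button>
  <pre id="out"></pre>

  <script>
    function calc() {
      // Read the form values as numbers
      const Z1 = parseFloat(document.getElementById("z1").value);
      const Z2 = parseFloat(document.getElementById("z2").value);
      const R1 = parseFloat(document.getElementById("r1").value);
      const R2 = parseFloat(document.getElementById("r2").value);
      const f  = parseFloat(document.getElementById("f").value);

      // Each step labelled with its formula, as requested in the prompt
      const n   = Math.sqrt(Z2 / Z1);            // n = sqrt(Z2 / Z1)
      const XL1 = Math.sqrt(Z1 * Z1 - R1 * R1);  // X_L1 = sqrt(Z1^2 - R1^2)
      const L1  = XL1 / (2 * Math.PI * f);       // L1 = X_L1 / (2 * pi * f)
      const XL2 = Math.sqrt(Z2 * Z2 - R2 * R2);  // X_L2 = sqrt(Z2^2 - R2^2)
      const L2  = L1 * n * n;                    // L2 = L1 * n^2

      document.getElementById("out").textContent =
        "n = sqrt(Z2/Z1) = " + n.toFixed(3) + "\n" +
        "X_L1 = sqrt(Z1^2 - R1^2) = " + XL1.toFixed(1) + " ohms\n" +
        "L1 = X_L1/(2*pi*f) = " + L1.toFixed(2) + " H\n" +
        "X_L2 = sqrt(Z2^2 - R2^2) = " + XL2.toFixed(1) + " ohms\n" +
        "L2 = L1*n^2 = " + L2.toFixed(2) + " H";
    }
  </script>
</body>
</html>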

While I wouldn't quite trust it yet to generate a circuit for you, as a way to summarise and explore your understanding, it is an amazing companion!
 
Employers and smart employees are embracing this technology. Many millions of jobs will go away or be changed significantly.

JR

PS: speaking of companions, there was a gag on a recent TV show where a guy wanted to date a female voiced AI he was talking with on his computer, until a coworker changed the AI's voice to ozzie male. :unsure:
 
While I wouldn't quite trust it yet to generate a circuit for you, as a way to summarise and explore your understanding, it is an amazing companion!
Those are middle school formulae; I would say you can trust them....
 
Employers and smart employees are embracing this technology. Many millions of jobs will go away or be changed significantly.
Without question these AIs will have broad societal implications over the coming years.

Those are middle school formulae; I would say you can trust them....
I think you misinterpreted my statement. The demo I showed was perfectly valid, and that is the *right* way to use it - i.e., to help you with tasks that you yourself can verify. But ChatGPT hallucinates information and will nearly always give you a confident-sounding answer even if that answer is 'wrong'. For instance, in an earlier chat I used the notation '10k ohms' and the resulting calculation was off by a factor of a thousand.
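
As an aside, that particular trap is easy to guard against by normalising the notation before it reaches any calculation. A small helper along these lines (my own sketch, not anything from the chat) treats '10k' as 10,000:

// Sketch: normalise suffix notation such as "10k" or "1M" to a plain number of ohms.
function parseOhms(text) {
  const m = String(text).trim().match(/^([\d.]+)\s*([kKM]?)/);
  if (!m) return NaN;
  const scale = m[2].toLowerCase() === "k" ? 1e3 : m[2] === "M" ? 1e6 : 1;
  return parseFloat(m[1]) * scale;
}

console.log(parseOhms("10k ohms")); // 10000
console.log(parseOhms("600"));      // 600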
 
Some of the new member applications look like they are using ChatGPT to answer a simple question... the answer is correct but too wordy to be typical of a human.
 
Some of the new member applications look like they are using ChatGPT to answer a simple question... the answer is correct but too wordy to be typical of a human.
Hah - it's a race to the bottom: AIs writing articles that get summarised by AIs, which then get regurgitated by other AIs...
 
Probably already seen them...

JR

PS: Not necessarily AI, but I don't like how movie streaming services send us down rabbit holes with more of the same (but worse) suggestions.
 
Some of the new member applications look like they are using ChatGPT to answer a simple question... the answer is correct but too wordy to be typical of a human.
I'm seeing this on one of the pedal forums, too. Bot behavior, but a superficially well-written response.

I keep seeing people speculate that white-collar jobs are in danger from ChatGPT, but it's basically a super low-level employee.

Back when I was working as a legal assistant, I could write a pleading, but that doesn't mean the partner I worked for didn't read it carefully, and probably a lot more carefully than if they themselves had written it.

I did a project at my current job a while back where we were dealing with the push-pull between skepticism and complacency with AI. It's equally bad to be completely distrustful of the results -- in which case you can't take advantage of the time-saving nature of the tool -- and to accept whatever it tells you.
Getting the balance right requires training for the humans involved, and unfortunately it requires training in the specific machine learning tool being used, in machine learning/AI in general, and in the domain the tool is being applied to. This means big blind spots on the part of the trainers AND the users. It also means that almost everyone has a different equilibrium point.

But you know what? That last paragraph applies to people, too. Is there a functional difference between a very confident person who is wrong and a machine that is very confident and wrong, if I myself lack the expertise to judge the quality of the information? I don't think there is, and ironically my knowledge of whether that information came from a human or machine is irrelevant. My human brain is biased toward trusting humans, but that doesn't mean humans are inherently more likely to be correct.
 
At my last real day job (Peavey), one position I held was managing a mixer engineering design group. I had several design engineers reporting to me. My task was to direct and guide them toward cost-effective product designs. I may have some specific experience with preamp designs.

At Peavey we had the luxury of decades of successful former designs to draw from. There was constant tension from my immediate management not to reinvent any wheels. That said, any new product design engineer worth a flip hates to copy old designs (I can relate to that). Since I was sympathetic and wanted to encourage my better engineers to prosper, I would consider their new design ideas. Working inside Peavey involved a lot of designing the Nth version of some already successful product. The dealers and customers wanted new versions that were better, but not so different that they had trouble figuring out how to use them. In some cases the same engineer was responsible for doing the next-version redesign of a product that they had designed just a few years earlier. The calculus for making a newer-generation version better involves several levers: #1, lower cost, or more/better features for the same cost.

I have never met an engineer who was so satisfied with a product design that they didn't want to make improvements. Making the next generation version was the opportunity to fix known (to the designer) shortcomings.

If I was still managing junior engineers I might encourage them to use AI to survey the state of the art when pulling together a new design. That is basically what human design engineers already do. When I was a technician/junior engineer I absorbed every design schematic I could get my hands on to create my own personal design library of circuit blocks.

JR

PS: I have more related anecdotes but won't veer off too much.
 
