Cutting edge.

GroupDIY Audio Forum

So, when will ChatGPT be trained to mix a show? :cool:
Back last century, when I was thinking about mixer product definitions, one desire I had was a mixer smart enough to identify typical signal sources (guitar, keys, vocals, etc.) and apply a nominal EQ for each. Probably not heavy lifting for modern technology... but even back then I was going to leave the final mix to humans.

The huge catch-22 is that these would start out as premium SKUs, and the customers with deep pockets are not the people to embrace mixers with training wheels, even though only they could afford them.

JR
 
What would you use to train the AI?

Music? With sales numbers?

AI has been making music. Mixing seems an entirely different show.

The University of Sevilla, Spain, developed an expert system that could tell whether a song would be a hit or not. It was bought by one of the big media corporations, never to be heard of again.
 
What would you use to train the AI?
humans
Music? With sales numbers?
TMI, but I recall back in the '80s hearing about some Georgia Tech programmers whose software could mimic classical composers, though that music is probably more formulaic.
AI has been making music. Mixing seems an entirely different show.
yup, and I wasn't even going to try. I suspect identifying types of instruments and overlaying EQ maps should not be heavy lifting.
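The "EQ map" idea above could be as simple as a lookup table keyed by whatever label an instrument classifier spits out. A minimal sketch, assuming a hypothetical classifier exists elsewhere; the preset values here are illustrative, not anyone's actual curves:

```python
# Hypothetical sketch: nominal starting EQ per detected instrument type.
# The classifier that produces the label is out of scope; the preset
# moves below are made-up placeholders, not real mixing advice.
EQ_MAPS = {
    "vocal":  [("high-pass", 100), ("boost", 3000)],
    "guitar": [("high-pass", 80), ("cut", 250)],
    "keys":   [("high-pass", 40)],
}

def nominal_eq(instrument):
    """Return a list of (move, frequency_hz) pairs; empty if unrecognized."""
    return EQ_MAPS.get(instrument, [])
```

An unrecognized source type simply gets no starting EQ, which leaves the decision to the human, as JR suggests.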
The University of Sevilla, Spain, developed an expert system that could tell whether a song would be a hit or not. It was bought by one of the big media corporations, never to be heard of again.
:rolleyes:, sounds like those inventions that deliver 200 miles to the gallon.

JR
 
iZotope has Neutron, which analyzes a signal and then EQs it, but it seems more like a database of some engineers' EQ tricks for a chosen instrument. Dialogue editing in the right editor's hands is like magic to me. Getting an AI program to work on that would be a wonder.

As mentioned, a buffer can fix the delay. Downloading chunks of audio for what would feel like real-time work, then saving the metadata or automation instructions back to the server, would work, just not on a live event like a football game or an awards show. But how much of programming is actually live?

I just know that every time a producer and director are looking for money, they need a new mix of the latest edit of all the scenes, and you kill yourself remixing a 90-minute movie knowing it's not the final. And low-budget productions never seem to have EDLs of the changes made to the movie. It takes a special breed of engineer-editor to do that over the weekend. Having three different engineers working remotely on the dialog, FX, and music edits for a project would let a mix engineer fly through a temp mix. Or not.
 
It's all a marketing pile. They weren't the first, nor the first in the USA, so they added "for a sports game". And even in that field they're probably not the first. Running the software in the cloud? That could be a first. But who needs to complicate things by using the "cloud". Can someone define the "cloud"? There's no detail at all about what it means.

The problem of different latencies for different sources is easy enough to solve.

Introduce a buffer that is larger than the highest latency. Then adjust timing on each of the sources. You're no longer live, but a delay of a second, or even five or ten seconds often doesn't matter at all.
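The buffer-and-align step described above can be sketched in a few lines: pick one shared buffer slightly larger than the worst-case source latency, then delay every source by however much it is "early" relative to that buffer. The latency figures and headroom value below are illustrative assumptions:

```python
# Hypothetical sketch of latency alignment: pad each source up to one
# shared buffer that exceeds the highest latency, so all sources line up.
def alignment_delays_ms(source_latencies_ms, headroom_ms=50):
    """Return the extra delay (ms) to insert on each source."""
    # Shared buffer must be larger than the worst-case latency.
    buffer_ms = max(source_latencies_ms) + headroom_ms
    # A low-latency source gets a long pad; the slowest source gets the least.
    return [buffer_ms - lat for lat in source_latencies_ms]
```

For example, sources arriving at 5 ms, 120 ms, and 40 ms would be padded by 165 ms, 50 ms, and 130 ms respectively, so everything plays out 170 ms behind real time, aligned.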

It still wreaks havoc on monitoring, though, so it doesn't always work for live. There's no time to adjust anything.

Broadcast has been doing this for years. Without the magical "cloud", I might add. It's what AVB is about.

I mean, I was building server setups for in-house video editing last century. No way I would run such a setup in the cloud. Unless it was my own "cloud". Not even if you need world-wide mobile access, as that would need a separate setup to handle many clients.

A remarkable number of companies have found the cloud to be a lot more expensive than dedicated servers, especially once you consider security. I do presume security wasn't a biggie in this case, but still.

In fact, they're a bit late to start boasting about the cloud. Haven't they heard AI is the thing lately?

So, when will ChatGPT be trained to mix a show? :cool:
To answer the cloud question.
A setup they would normally use would consist of a surface with DSP controlling all aspects of the audio, connected to the I/O via a Dante-enabled connection.
So here it looks like they took all the DSP aspects of the control surface and put them on a server. It appeared reliable enough that they ran it for a broadcast.
From there it looks like they reduced the surface down to a fader pack and a touch screen. As for the I/O, I'm not sure what was connected or routed.
 
I'm trying to think through all of the latency issues in my "pea-sized" brain.....

When I hit a mute switch, I want it muted NOW... not many dozens (or more) of milliseconds later. I guess it's possible. I'll have to map out the timing on a piece of paper.
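Mapping out that timing roughly: with the DSP on a remote server, a mute command has to travel up the control link, take effect on the next processing block, and then the already-muted audio still has to travel back down. A toy budget, with every figure an assumption rather than a measurement:

```python
# Hypothetical back-of-envelope: time from pressing MUTE to hearing silence
# when the DSP runs on a remote server. All numbers are assumptions.
def mute_response_ms(control_uplink_ms=20, dsp_block_ms=5, audio_return_ms=20):
    # Command travels up, applies on the next DSP block, and the first
    # muted audio still has to traverse the return path to the listener.
    return control_uplink_ms + dsp_block_ms + audio_return_ms
```

Even with these optimistic figures, the mute lands tens of milliseconds late, which is fine for a buffered broadcast but noticeable on a live PA.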

In a "post" environment, it seems probable there is a way that I haven't grasped. It took me forever to comprehend how a digital delay was used to tweak the preview pathway timing in a disk mastering system!

So then, explain how a cloud virtual desk could work in a live venue PA system!

Nevertheless.....how cool to "rent" a Neve 8068 or SSL 4K or a Helios...or...or...in the cloud to do a song mixdown!!

Bri
 
To answer the cloud question.
A setup they would normally use would consist of a surface with DSP controlling all aspects of the audio, connected to the I/O via a Dante-enabled connection.
So here it looks like they took all the DSP aspects of the control surface and put them on a server. It appeared reliable enough that they ran it for a broadcast.
From there it looks like they reduced the surface down to a fader pack and a touch screen. As for the I/O, I'm not sure what was connected or routed.

Just one thing...

The setups I came across all handled video too. That's why I mentioned AVB, although some didn't use AVB itself but a proprietary variation of it.

Me, personally, I use a computer to go from my setup to whatever network.

My setup is based on multiple ADAT/optical links over UTP cabling. Not a network, but it works up to about 300 ft, and I've never needed more. I'm cheap; UTP is cheap, and it's sometimes even already in place in whatever building we end up in. But I don't handle video. And obviously, all processing is local.

IF I take a next step, it will be adding a GPU for AI based audio processing.

I believe in networked audio for broadcast, not for live music performances. YMMV, though, since I know of at least one setup in Germany that has one control room for something like 16 stages. Obviously, they've been networked for a while, so they can easily send streams to broadcast studios. When I heard about it some years ago, they had a 10 Gbps uplink, probably with an SLA for latency and throughput numbers. Not a $50/month connection...

I wouldn't know how to set up such a thing for temporary use and move it around constantly. There are some mobile rigs with a satellite uplink, mostly for TV news gathering and sports. I've never seen one for mobile audio recording. Maybe in the classical recording scene?

So the practicality of it all [for us] escapes me...
 