Composite Object Sound Modeling ?

GroupDIY Audio Forum


bernbrue

Hi,
I've just listened to a demo of Universal Audio's plugin version of the Thermionic Culture Vulture. It sounds fantastic! Since I tried to make my own (hardware) version of the Culture Vulture, namely The Rude Tube, I asked myself: how did the folks at UA capture the sound and behaviour of this beast and put it into such a nice plugin? I've heard of the term "Composite Object Sound Modeling" used by Roland, and of course of the system used by Yamaha.

I'm an absolute noob in terms of programming, but I would just like to know whether there is some kind of public software, maybe even a standard, that people could use to make their own plugins? Or does each company go its own way? I might be a bit naive, but basically you draw a kind of schematic of the original hardware and compile it into a VST/AU plugin, no? Please enlighten me.
Please excuse my ignorance; I can imagine that it is much harder than that.
regards
Bernd

edited: oh, I almost found what I was looking for. Very good reading:
http://www.soundonsound.com/sos/aug10/articles/modelling-plugins.htm
 
I have some COSM experience with a Roland VG8 box and a divided pickup.

It remains my main go-to guitar rig - I keep a guitar just for that purpose.

The retunings are scary good, and the 'synthi' sounds are the only usable ones of that kind I have ever heard.

A really nice box; for many years it has remained the basic 'let's see where we are at' box in the studio.


---


Now, COSM is all very proprietary, so not likely to get any open-source models, methinks.

Especially NOT with VST/ASIO drivers for VB  :)

----

But it certainly is all good - COSM guitars, VST with ASIO, and a divided pup

8)
 
Acustica Audio works with multiple convolutions for non-linear stuff. While a reverb uses only one convolution, here you need one convolution per harmonic you want to emulate, so it can be quite resource-consuming: it takes memory to keep the vectors available and a lot of CPU processing power. I have an i7 4770K, and I've seen one instance with 11 kernels (harmonics) take up almost one full virtual processor of the eight available.

They also have a fairly limited basic user interface, with a lot going on in the background. But they are worth it for some tasks. As far as I know (this isn't spelled out in the article), they take samples of the device with different signals at dedicated settings and levels, and from those they build the vector libraries; the parameters in the host plugin are then used to create the presets, where a lot can be happening - dynamic control or other parts of the simulation that don't appear in the vectors. There is a lot of stuff to tweak within the plugin that is accessible to the user, but not the vectors themselves, which are taken from measurements of the actual equipment.

You then pick from the presets and tweak a couple of parameters to get what you want. There is a "liquidity" control which determines how the vectors are interpolated to generate the non-linearities; it helps minimise artifacts, and can be turned down to use less processing when the CPU headroom isn't available. You can also create your own presets, either starting from an existing one or taking existing vectors and working from those. I never did the second, since it could be quite hard, but it is flexible enough to create new behaviour.

As the controls are limited, you have to pick a preset close to what you want: for example, for a 550-style EQ you pick the filter you want and then add the gain for that band; you can't pick one preset that behaves like the complete unit. I don't remember now if it is exactly like that, but something close. Really good emulations come out of that plugin, but it has its limitations.

You can also capture your own set of vectors from something you own - there is a reference to that in the article, but I never did it. Maybe I'll try it with my tape delay, which I really love, but since it uses less than 1 m of conventional cassette tape, the tape wears out very fast; for a mix session I usually use two tapes, one for setting it up and one for recording the tracks, so having an emulation of it would be quite useful.

http://www.soundonsound.com/sos/feb08/articles/nebula3.htm
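
For anyone curious what "one convolution per harmonic" can look like in code, here is a rough sketch of a generalized-Hammerstein-style model: the input is split into non-linear branches and each branch is convolved with its own kernel. This is only an illustration of the general idea, not Acustica's actual algorithm, and the kernels and function name below are made up:

```python
import numpy as np

def harmonic_convolution(x, kernels):
    """Toy 'one convolution per harmonic' model (generalized Hammerstein).

    x       : 1-D input signal
    kernels : list of impulse responses; kernels[0] handles the linear part,
              kernels[1] the 2nd-harmonic branch, and so on.
    Each branch raises the input to the corresponding power (a crude stand-in
    for the harmonic generator) and convolves it with that branch's kernel.
    """
    longest = max(len(h) for h in kernels)
    y = np.zeros(len(x) + longest - 1)
    for order, h in enumerate(kernels, start=1):
        branch = x ** order                          # non-linear branch signal
        y[:len(branch) + len(h) - 1] += np.convolve(branch, h)
    return y

# Example with made-up kernels: a unity linear path plus two small harmonic IRs
fs = 48000
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs // 10) / fs)
kernels = [np.r_[1.0, np.zeros(63)],                 # linear path
           0.05 * np.random.randn(64),               # 2nd-harmonic kernel
           0.01 * np.random.randn(64)]               # 3rd-harmonic kernel
y = harmonic_convolution(x, kernels)
```

The CPU cost follows directly from this: every extra harmonic is another full-length convolution on top of the linear one.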

Another useful tool is Q-Clone (or something like that) from Waves, which is really good for capturing EQ curves from analogue gear. It works fine; I took a couple of samples from my delay to use with plugin delays, to get closer to what I was looking for. That delay is a JLM tape echo which I bought for around US$200 a couple of years ago. I had to change the motor and a couple of other things - pots, the voltage regulator for the motor, trim pots - and trim it a bit, and it ended up really nice. Some more mods may be coming for it; I could even break out the head inputs and outputs and build an external mixer to do any kind of combination with them, maybe with a D-sub connector... I don't know, a project for another time and completely off-topic.

Native Instruments Reaktor lets you build processors from blocks, and SonicBirth on OS X also lets you do so and compiles OS X-only AU and VST plugins, but I left it behind because the sound quality wasn't great for some simple ones I built. Maybe there are other options out there besides programming from scratch in MATLAB or C, which is of course a good approach, but not so friendly if you aren't used to it. MATLAB may be of some help when starting from a schematic; NI Multisim may be a good tool to hear the results of a schematic, but I don't think you can get a plugin out of it - probably more helpful when designing analogue gear.
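
To give a flavour of what "building a processor from blocks" means, here is a tiny sketch in plain Python: a gain block, a soft clipper, and a one-pole lowpass wired in series, the way you would patch modules in Reaktor or SonicBirth. The block names and the chain are made up for illustration; a real plugin would of course end up as compiled code behind a VST/AU wrapper:

```python
import numpy as np

class Gain:
    """Gain block, level set in dB."""
    def __init__(self, db):
        self.g = 10 ** (db / 20.0)
    def process(self, x):
        return self.g * x

class SoftClip:
    """Memoryless tanh waveshaper."""
    def process(self, x):
        return np.tanh(x)

class OnePoleLowpass:
    """Very simple one-pole lowpass, cutoff in Hz."""
    def __init__(self, cutoff, fs):
        self.a = np.exp(-2.0 * np.pi * cutoff / fs)
        self.z = 0.0
    def process(self, x):
        y = np.empty_like(x)
        for i, s in enumerate(x):
            self.z = (1.0 - self.a) * s + self.a * self.z
            y[i] = self.z
        return y

# "Patch" the blocks in series, like wiring modules on screen
chain = [Gain(+12), SoftClip(), OnePoleLowpass(8000, 48000), Gain(-6)]

def run(chain, block):
    for module in chain:
        block = module.process(block)
    return block

audio = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
out = run(chain, audio)
```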

JS
 
The Nebula stuff is very good (Acustica Audio) but, like J says, it's very CPU-heavy. One thing I like is that they have delved into off-loading the heavy CPU work to your GPU if one is available; the CUDA programming offered by nVidia makes this possible, so I have a CUDA video card with 480 cores that really helps in the convolution department...

Not sure why more companies have not looked into parallel processing on video cards - the SDKs are readily available from pretty much all the big boys for free...
 
iomegaman said:
The Nebula stuff is very good (Acustica Audio) but, like J says, it's very CPU-heavy... Not sure why more companies have not looked into parallel processing on video cards...

Nowadays you can run more standard plugins than you'll ever need on any reasonably modern i7 processor, and any professional studio will have one, so most companies don't need to bother making their plugins CUDA-compatible. That's different for Nebula, where a single open instance of the plugin may be running maybe 10 convolutions in parallel, and the more convolutions and variables there are, the more impulse responses it has to keep around. So it needs a large amount of memory to have all the impulse responses available really fast, and it has to do a bunch of MACs (multiply-accumulates) for every sample. Using the GPU as a DSP for this task is quite useful, since it has its own memory banks and a lot of processors to do the math. Ideally a dedicated DSP with three memory banks and a few tens of big MAC units would do the job better, but dedicated means expensive; an Nvidia GPU is good enough for the task and easy to find in any PC store, which makes it cheaper.
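
Just to make the "GPU as a DSP" idea concrete, here is a toy sketch using CuPy (a NumPy-compatible array library that runs on CUDA GPUs). It is not how Nebula or its CUDA path actually works, and the kernel data is made up; it just illustrates doing the frequency-domain multiply-accumulates for many kernels at once on the card:

```python
import numpy as np
import cupy as cp            # NumPy-compatible arrays that live in GPU memory

def gpu_multi_convolve(x, kernels, n_fft=1 << 16):
    """Convolve one block of input with many kernels at once on the GPU.

    x       : 1-D NumPy block of input samples
    kernels : 2-D NumPy array, one impulse response per row
    For simplicity this uploads the kernels on every call; a real engine
    would upload them once and keep their FFTs cached on the card.
    """
    xg = cp.asarray(x)
    hg = cp.asarray(kernels)
    X = cp.fft.rfft(xg, n_fft)                 # one FFT of the input block
    H = cp.fft.rfft(hg, n_fft, axis=1)         # batched FFTs of all kernels
    Y = cp.fft.irfft(X * H, n_fft, axis=1)     # the MACs happen here, in parallel
    return cp.asnumpy(Y[:, :len(x) + kernels.shape[1] - 1])

# Toy usage: ten made-up 8k-tap kernels against one 4k-sample block of audio
block = np.random.randn(4096).astype(np.float32)
kernels = np.random.randn(10, 8192).astype(np.float32)
outs = gpu_multi_convolve(block, kernels)      # shape (10, 12287)
```

A real convolution engine would use partitioned/overlap-add blocks for low latency, but the principle is the same: lots of independent multiply-accumulates, which is exactly what the hundreds of GPU cores are good at.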

The Volterra series is a generalisation of the convolution idea: instead of a single one-variable (time) impulse response, it allows impulse responses in several variables, so the output depends not only on the input signal and time but also on products of the input at different time lags, which is how amplitude-dependent (non-linear) behaviour gets captured. How these higher-order impulse responses are found is something I still have to learn. For the first-order one, theory says to put an impulse into the system and look at what comes out; of course that is not possible in practice, so a different wide-bandwidth signal is used and the impulse response is extracted from the response to it. Depending on the system a step is used; in audio we sometimes like to use other signals (a rough sketch of the usual sweep trick is further below), and we always know the maximum bandwidth we need, while in many other systems that is still unknown.

These different impulse responses are responsible for one of the big speed problems of this processing technique: you not only need to run ~10 convolutions, you also need to select beforehand which impulse response each of them will use, so you need the memory for, and fast access to, all of them. In the article quoted here somewhere, building such a database is one of the topics discussed.
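
Since the question of how the kernels get measured came up: the standard audio trick (I can't say it is what Acustica actually uses) is the exponential sine sweep. After deconvolving the device's output with the inverse sweep, the impulse response of each harmonic shows up at its own, known time offset, so they can be cut apart and stored separately. A rough sketch with made-up settings and a tanh-style clipper standing in for the device:

```python
import numpy as np

fs = 48000
T = 5.0                              # sweep length in seconds
f1, f2 = 20.0, 20000.0               # sweep range in Hz
t = np.arange(int(T * fs)) / fs
L = T / np.log(f2 / f1)

# Exponential sine sweep and its amplitude-compensated, time-reversed inverse
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
inverse = sweep[::-1] * np.exp(-t / L)

# Stand-in for the device under test: a slightly asymmetric soft clipper,
# so it produces both even and odd harmonics
device_out = np.tanh(2.0 * sweep + 0.3)

# FFT-based deconvolution (a direct np.convolve would be far too slow here)
n_fft = 1 << int(np.ceil(np.log2(2 * len(sweep))))
response = np.fft.irfft(np.fft.rfft(device_out, n_fft) * np.fft.rfft(inverse, n_fft), n_fft)

# The linear IR peaks at len(sweep) - 1; the n-th harmonic's IR sits
# L * ln(n) seconds earlier, so each one can be windowed out on its own
linear_idx = len(sweep) - 1
for n in range(1, 5):
    idx = int(round(linear_idx - fs * L * np.log(n)))
    chunk = response[idx - 256: idx + 256]
    print(f"harmonic {n}: peak {np.max(np.abs(chunk)):.3f}")
```

Each of those windowed chunks is, roughly, one of the "vectors" the plugin later convolves in parallel.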

JS
 
