digital vs digital, VST synths vs digital hardware synths. Keyboard Synthesizers
Old 30th October 2016
  #1
Lives for gear
 

digital vs digital, VST synths vs digital hardware synths.

Hi all, the analogue vs digital debate has been going on for ages, but what about digital vs digital?

To my ears, digital synths, both Eurorack digital modules like Braids and Piston Honda and all-in-one hardware synths like the Sledge and Virus, sound better than most VSTs. This is interesting to me and I'm wondering why this could be.

Is it because the hardware synths are being run through hardware converters, or is it something in the coding of the hardware synths' and modules' firmware?

What do you guys think? Do you agree or disagree with my findings?

And what have you guys found in the digital vs digital debate?

Your thoughts on this subject would be most interesting. Kind regards, trey.
Old 30th October 2016
  #2
Lives for gear
 

The only thing I could attribute any difference to would be the D/A converters and output circuitry - otherwise I can't imagine any difference.

The few where I've tried or owned both are virtually identical (Wavestation A/D vs. the Legacy version, etc.).
Old 30th October 2016
  #3
Gear Maniac
I think there are VST synths that sound better than hardware, and sometimes the opposite is true. It's all a matter of taste: there are no right or wrong synth sounds. Life is full of flavors! I personally like the SID chip sound (the noisy first-generation one). For someone else it might be the worst sound ever... Whatever pleases you is right. You can chase after the "perfect" sound, but you're better off learning the tools you already have.
Old 30th October 2016
  #4
Gear Addict
 
Subverter's Avatar
 

I think that there can also be a perceptual bias from the use of a real instrument. Not that there aren't differences, but the satisfaction of playing a real, physical instrument as opposed to clicking a mouse is obviously going to have an effect, however small.
Old 30th October 2016
  #5
I've found differences in the sound of two Casio CZ-1s. After testing them I determined that the difference was caused by the analog VCAs. The VCA on one of the units did not have the ideal bias adjustment, which added some minute distortion to the signal. When I swapped the output boards between the two units, the problem manifested itself the same way in the other unit, which previously didn't exhibit the problem, and the unit that originally had the distortion was now the "clean" one.
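
To give a rough idea of the mechanism (a generic sketch, not a model of the CZ-1's actual output board): a small DC bias error ahead of a slightly nonlinear gain stage adds low-level even-order harmonics to an otherwise clean signal.

Code:
import numpy as np

# tanh stands in for a mildly nonlinear analog gain stage; 'bias' is the
# mis-adjusted DC operating point. All values are illustrative.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)              # clean digital oscillator output

def vca(signal, bias):
    return np.tanh(signal + bias) - np.tanh(bias)

clean = vca(x, bias=0.0)                     # ideally biased unit
skewed = vca(x, bias=0.05)                   # assumed few-percent bias error

spectrum = np.abs(np.fft.rfft(skewed * np.hanning(len(skewed))))
second_harmonic_bin = int(round(880 * len(skewed) / fs))
print("2nd harmonic level (dB):",
      20 * np.log10(spectrum[second_harmonic_bin] / spectrum.max()))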

It seems that depending on the output circuitry there's some sonic slop possible with hardware digital synths but it isn't very pronounced. The units I studied required some very focused listening tests to hear the difference between them. In the context of a music mix I could not detect any difference at all.

In the end I believe that synths judged within the context of a mix will give more information regarding their overall sound than when listened to on their own. I've heard many instruments, hardware, software, digital, analog which alone are unsatisfying to listen to but were spectacular when used within a complete song. Similarly I've found that some instruments which sound incredible by themselves can at times be quite the opposite in the musical context. I'm thinking of the Chroma Polaris for the latter - lovely sounding instrument but always needs to be shoehorned into a mix. Leave it to me to use an analog example for this discussion.

But I've found the same with software instruments. I once did a remix of a Depeche Mode song with softsynths and was originally using a different one for each track in the song. It sounded horrible, so I deleted all the synths from each track, which caused those tracks to revert to the cheesy default synth that came with the DAW. Much to my surprise I found that the cheesy default synth on most of the tracks sounded better than using a variety of different synths. So again, in the context of music, the soft synth I normally thought of as cheesy sounding actually turned out to be the best choice for the song I was remixing.

But this being my own subjective experience, it's hard to say if it would be the same for others. I don't use soft synths anymore, but it isn't because of the way they sound. I don't enjoy making music on a computer, so I went totally Off Track Betting.
Old 30th October 2016
  #6
Lives for gear
 
acreil's Avatar
 

Quote:
Originally Posted by biggator6 View Post
The only thing I could attribute any difference to would be the D/A converters and output circuitry - otherwise I can't imagine any difference.
I wish people would stop saying this. You're assuming that they're otherwise identical; that's not a good assumption at all.

Software doesn't necessarily work the same way as hardware. Even digital hardware "reissued" as software by the same manufacturer doesn't necessarily implement everything exactly like the hardware. The sample rate, interpolation, control rates and bit depths, etc. are often different, and this can affect the sound. But for various reasons the designers may not consider an exact emulation to necessarily be desirable anyway.
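
As a minimal sketch of one of those differences (all figures illustrative, not taken from any particular synth): the same decay envelope applied per-sample versus updated only every 64 samples produces small amplitude steps, and those steps show up as added sidebands.

Code:
import numpy as np

fs = 48000
n = fs // 2
t = np.arange(n) / fs
osc = np.sin(2 * np.pi * 220 * t)
env = np.exp(-6.0 * t)                              # smooth per-sample decay

block = 64                                          # assumed control-rate interval
env_stepped = np.repeat(env[::block], block)[:n]    # envelope held for 64 samples

audio_rate = osc * env                              # modulation computed every sample
control_rate = osc * env_stepped                    # modulation stepped at block rate
print("max stepping error:", np.max(np.abs(audio_rate - control_rate)))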

I've only noticed a few cases where the DAC itself (beyond just the quantization) actually makes a noticeable difference. In every instance it's because they're really, really bad DACs. This is important if you're emulating chiptune ICs or cheap home keyboards, but it's not generally relevant for high end professional gear. There's not really a significant difference now between the DACs used in modern digital hardware and the DACs used in audio interfaces.

The analog output stage in vintage gear is sometimes poorly designed in ways that affect the sound. This isn't typically beneficial (thumps, bass roll-off, noise, etc.), but in some cases it can contribute desirable distortions. Any really significant effects (like a steep reconstruction filter in the audible range) obviously should be explicitly emulated. But very often the DAC and output stages can be safely neglected.
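
For the benign kind of case described above, an AC coupling capacitor on the output behaves roughly like a one-pole high-pass filter; a quick sketch with an assumed 20 Hz cutoff (not any specific synth's values):

Code:
import numpy as np

def one_pole_highpass(x, fs, cutoff_hz):
    # y[n] = a * (y[n-1] + x[n] - x[n-1]), with a = 1 / (1 + 2*pi*fc/fs)
    a = 1.0 / (1.0 + 2.0 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    prev_x = prev_y = 0.0
    for i, xn in enumerate(x):
        y[i] = a * (prev_y + xn - prev_x)
        prev_x, prev_y = xn, y[i]
    return y

fs = 48000
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 40 * t)                    # 40 Hz test tone
out = one_pole_highpass(bass, fs, cutoff_hz=20.0)    # mild bass roll-off (about 1 dB here)
print("level after 'output stage':", np.max(np.abs(out[fs // 2:])))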

Last edited by acreil; 30th October 2016 at 07:15 PM..
Old 30th October 2016
  #7
Gear Head
 

It seems the consensus is that the Korg M1 VST sounds better than a physical M1; I think so.
Anyway, I prefer playing the physical one. More inspiring for me. Nice keybed. It feels like I'm playing an instrument...
Old 30th October 2016
  #8
Gear Nut
 

It's not like there isn't great sounding software out there. Serum smokes similar hardware, for example.

Hardware is a bigger financial commitment, as it adds the cost of tooling, manufacturing and more marketing. So the quality has to be very high these days in order to sell your product. Software is much less expensive to make and release. That's why there is so much mediocre software, and so little that reaches the same high quality as most hardware.
Old 30th October 2016
  #9
Lives for gear
 
Gnalvl's Avatar
 

There's no hard and fast rule to digital vs. digital. If two models of digital hardware can sound different even though they're supposed to sound the same (Q vs. Micro Q, DX7 mk1 vs. mk2), then why should we be surprised that VST versions also have sonic differences? You really have to look at it on a case-by-case basis rather than follow generalizations.

I bought a TX802 and did a/b comparisons with the Dexed plugin, trying out both my custom patches and factory presets in each, and to me there was no audible difference. So I sold the TX802 the following week.

For some reason I like the sound of the PG8X plugin better than the MKS-70 I owned and sold after a month or so...and that's analog.

You can run the same classic PPG wavetables in Serum, Zebra, Largo, Microwave 1, and Microwave 2, and they will each have sonic pros and cons. I have also tried loading DW-8000 and SQ-80 wavetables into my Microwave 1, and it sounds similar in some ways and different in others - particularly depending on what octave you're playing in and what octave you sampled the wavetables from.
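
A toy wavetable player (plain linear interpolation, no per-octave band-limiting, so not how the Microwave actually does it) shows why the playback octave matters so much:

Code:
import numpy as np

# Transposing a bright single-cycle table up folds its upper harmonics back
# as aliasing unless the synth band-limits or re-samples the table per octave.
fs = 48000
table = np.sign(np.sin(2 * np.pi * np.arange(256) / 256))   # bright square-ish cycle

def play(table, freq, seconds=0.25):
    phase = np.cumsum(np.full(int(fs * seconds), freq * len(table) / fs))
    idx = phase % len(table)
    i0 = idx.astype(int)
    i1 = (i0 + 1) % len(table)
    frac = idx - i0
    return (1 - frac) * table[i0] + frac * table[i1]         # linear interpolation

low = play(table, 110.0)     # plenty of headroom below Nyquist
high = play(table, 1760.0)   # same table four octaves up: heavy aliasing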

You really have to just try stuff out and decide for yourself what you like best.
Old 30th October 2016
  #10
Lives for gear
 

Hi guys, thanks for your replies. I think the point about having a bias when it comes to a physical instrument versus a software instrument is a good one. But I also think it does depend on how the firmware or software is coded. Programming implementations will obviously vary; Diva sounds completely different to Massive, for example.
Old 30th October 2016
  #11
Lives for gear
 
login's Avatar
It is all in your mind:

- Psychoacoustics
- Cognitive bias
- Subjective taste

To my ears there isn't a single digital hardware synth that sounds as good as Diva, Bazille, Falcon or Omnisphere. But this is my subjective taste.
Old 30th October 2016
  #12
Lives for gear
 
Persemone's Avatar
I figure it like this: Differing code writes a differing sonic signature. That's all, mainly. Hardware brings an often-needed tactility and rapport which many find missing from [even very good] software, but software comes with a certain convenience and integrated footprint most hardware would envy. That's all, mainly. Best of all these days, there's room for both, and all the forum shoot-the-breeze aside, most of the cognoscenti on here [as in way beyond my useless talents] know this and use both to their advantage.

Hybrid all the way. There's too much good stuff all around to have to choose unnecessarily IMO.
Old 31st October 2016
  #13
Gear Guru
 
zerocrossing's Avatar
To me there is very little quality difference between ITB and OTB, and software can often have the edge in terms of pure quality. Conversely, a synth like the Solaris is, to my ears, superior to software in the sense that even if you built a software version, you'd be hard pressed to run it on a modern computer due to its 96 kHz internal processing.

More important is character. You might be able to perfectly emulate a Waldorf Q in plugin form... but no one really has. So if you want that sound, you'll need the real deal.
Old 31st October 2016
  #14
Lives for gear
 
acreil's Avatar
 

Quote:
Originally Posted by zerocrossing View Post
Conversely, a synth like the Solaris is, to my ears, superior to software in the sense that even if you built a software version, you'd be hard pressed to run it on a modern computer due to its 96 kHz internal processing.
I don't think that's true at all. In general a modern CPU will be a lot more powerful. It was true decades ago that a dedicated ASIC could process things much more quickly than a general purpose CPU. And there may still be a few scattered examples of ASIC or DSP based platforms that can outperform a CPU when they're brand new. But once it's been around for more than a couple years, you're not running a Pentium 4 anymore, so the CPU is again going to be much faster. The advantage of a DSP or ASIC is more that it's cheaper and consumes less power for a given number of computations. So you can have a nice compact synth that draws 10w rather than a desktop PC.
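
Some back-of-envelope numbers make the point (the per-voice operation count is an assumption for illustration, not a Solaris spec):

Code:
# Rough arithmetic only: a 10-voice synth at 96 kHz with ~2,000 floating-point
# operations per voice per sample needs on the order of 2 GFLOP/s. That's a
# modest load for one modern CPU core, but a meaningful one for an embedded
# DSP that has to stay within a few watts.
voices = 10
sample_rate = 96_000
ops_per_voice_per_sample = 2_000   # assumed figure
total = voices * sample_rate * ops_per_voice_per_sample
print(f"{total / 1e9:.1f} GFLOP/s")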

I strongly doubt the Solaris does anything comparable to, say, Diva. For the most part all the really cutting edge computationally intensive stuff is in software (or things that have very limited polyphony, like the Roland Boutique stuff). The advantage of hardware is that it tends to be a more polished product with more manpower and research going into its development.

Quote:
More important is character. You might be able to perfectly emulate a Waldorf Q in plugin form... but no one really has. So if you want that sound, you'll need the real deal.
There wouldn't be any incentive to emulate something like that anyway. If Waldorf wanted to port the code to a VST, they could do that. Otherwise it's not worth the trouble of exhaustively reverse engineering everything when it would be far easier to just come up with a new design that likely sounds better. So yeah, use the hardware if you like it, because probably no one's going to bother emulating it.
Old 31st October 2016
  #15
Lives for gear
 

Quote:
Originally Posted by acreil View Post
I wish people would stop saying this. You're assuming that they're otherwise identical; that's not a good assumption at all.

Software doesn't necessarily work the same way as hardware. Even digital hardware "reissued" as software by the same manufacturer doesn't necessarily implement everything exactly like the hardware. The sample rate, interpolation, control rates and bit depths, etc. are often different, and this can affect the sound. But for various reasons the designers may not consider an exact emulation to necessarily be desirable anyway.

I've only noticed a few cases where the DAC itself (beyond just the quantization) actually makes a noticeable difference. In every instance it's because they're really, really bad DACs. This is important if you're emulating chiptune ICs or cheap home keyboards, but it's not generally relevant for high end professional gear. There's not really a significant difference now between the DACs used in modern digital hardware and the DACs used in audio interfaces.

The analog output stage in vintage gear is sometimes poorly designed in ways that affect the sound. This isn't typically beneficial (thumps, bass roll-off, noise, etc.), but in some cases it can contribute desirable distortions. Any really significant effects (like a steep reconstruction filter in the audible range) obviously should be explicitly emulated. But very often the DAC and output stages can be safely neglected.
While you're not wrong... in theory... I haven't actually seen it in practice.

The digital synths that I've tried that were emulated in VSTs were identical. I've owned both an SQ80 and a Wavestation A/D and now have the VSTs... I've certainly spent lots of time with M1s and I hear no significant difference there.

Can you offer some examples where the VST differs significantly?
Old 31st October 2016
  #16
Lives for gear
 
gentleclockdivid's Avatar
 

Quote:
Originally Posted by zerocrossing View Post
To me there is very little quality difference between ITB and OTB, and software can often have the edge in terms of pure quality. Conversely, a synth like the Solaris is, to my ears, superior to software in the sense that even if you built a software version, you'd be hard pressed to run it on a modern computer due to its 96 kHz internal processing.

More important is character. You might be able to perfectly emulate a Waldorf Q in plugin form... but no one really has. So if you want that sound, you'll need the real deal.
Reaktor can run at 192 kHz, both audio and control rate.
Old 31st October 2016
  #17
Gear Guru
 
zerocrossing's Avatar
Quote:
Originally Posted by acreil View Post
I don't think that's true at all. In general a modern CPU will be a lot more powerful. It was true decades ago that a dedicated ASIC could process things much more quickly than a general purpose CPU. And there may still be a few scattered examples of ASIC or DSP based platforms that can outperform a CPU when they're brand new. But once it's been around for more than a couple years, you're not running a Pentium 4 anymore, so the CPU is again going to be much faster. The advantage of a DSP or ASIC is more that it's cheaper and consumes less power for a given number of computations. So you can have a nice compact synth that draws 10w rather than a desktop PC.

I strongly doubt the Solaris does anything comparable to, say, Diva. For the most part all the really cutting edge computationally intensive stuff is in software (or things that have very limited polyphony, like the Roland Boutique stuff).
That assumes your CPU is going to be dedicated to your instrument. In reality it's got a large number of tasks it's trying to perform in real time. I'm pretty sure if they rewrote the Solaris code to run in Windows it would max out your CPU. It's capable of very good audio rate modulation. If you listen to Diva, it sounds kind of tame. Repro sounds great, but it's monophonic and still pretty hungry, though Urs has said the release will be optimized. Look at the new Roland Jupiter-8 Plug Out. There's no plug-in version. I imagine they couldn't get that level of quality and have it run well, not with getting the cross-mod right. (Which sucks on the JP-08.)
Old 31st October 2016
  #18
Gear Guru
 
zerocrossing's Avatar
Quote:
Originally Posted by gentleclockdivid View Post
Reaktor can run at 192 kHz, both audio and control rate.
Sure. Let me know how many voices you're getting with that. I've maxed my i7 3.4 out @ 96 kHz doing single voice block patches that weren't too complex.
Old 31st October 2016
  #19
Lives for gear
 
fusionid's Avatar
 

I have Eurorack oscillators from different brands. I did a side-by-side comparison between the Bass Station II and Monark at 96 kHz internal. I could not tell them apart. That was the moment I sold the BSII.
Old 1st November 2016
  #20
Lives for gear
 
acreil's Avatar
 

Quote:
Originally Posted by biggator6 View Post
Can you offer some examples where the VST differs significantly?
Synclavier V or Roland's D50 card for the V-Synth. But what you consider a significant difference depends on how picky you are.

Quote:
Originally Posted by zerocrossing View Post
That assumes your CPU is going to be dedicated to your instrument. In reality it's got a large number of tasks it's trying to perform in real time.
No, that's an explanation for why your computer can't achieve latency comparable to hardware. The audio has to be buffered because it's a multitasking operating system. When you're running a DAW, the vast majority of the CPU power is going to audio processing, unless you're playing games and encoding video files and doing 3D modeling at the same time.
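
For a sense of scale (buffer size and sample rate here are just assumed, typical values):

Code:
# Rough latency arithmetic, nothing measured: a 128-sample buffer at 48 kHz
# is about 2.7 ms per buffer. Buffering on a multitasking OS is a latency
# cost, not a CPU-power cost.
buffer_samples = 128
sample_rate = 48_000
print(f"{1000 * buffer_samples / sample_rate:.1f} ms per buffer")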

Quote:
I'm pretty sure if they rewrote the Solaris code to run in Windows it would max out your CPU. It's capable of very good audio rate modulation.
The thing you're apparently not getting is that standard VAs aren't in any way comparable to Diva or Reaktor or whatever. If you ported an early VA like the Nord Lead 1 or JP-8000 or MS-2000 or whatever to software, probably you could get 1000 note polyphony or better. Newer hardware VAs are obviously more sophisticated, but they still wouldn't be CPU killers the way the latest software is. Calculating modulation, etc. at audio rate, running everything at a high sample rate, taking measures to reduce aliasing, etc. isn't really a big deal now.
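
For example, a polyBLEP sawtooth (one common anti-aliasing trick; this is a minimal sketch, not any particular synth's oscillator) only adds a couple of branches and multiplies per sample on top of a naive oscillator:

Code:
import numpy as np

def poly_blep(t, dt):
    # 2-point polynomial correction around each discontinuity
    if t < dt:
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def saw(freq, fs=48000, seconds=0.1):
    dt = freq / fs
    phase = 0.0
    out = np.zeros(int(fs * seconds))
    for i in range(len(out)):
        out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)   # naive saw minus BLEP
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

smooth_saw = saw(1000.0)   # far less aliasing than the naive 2*phase - 1 version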

Stuff that uses circuit simulation techniques is far, far more computationally demanding. It's a fairly recent development (at least for real time audio processing), and I expect that aside from Roland's ACB stuff and some amp simulators, it's almost exclusively the domain of software. A major reason is that the computational requirements aren't constant. The circuit equations are solved iteratively until they converge upon a solution. Depending on the input and the state of the circuit, the number of iterations can vary over a wide range. This works a lot better with buffered audio (i.e. software). If you're trying to squeeze the best performance out of the DSP in a hardware VA, you want each voice to use a fixed number of instructions. Anything that requires a variable number of instructions to compute has to be given enough room for the worst case. So most of the time, a large portion of the instruction cycles will be wasted.
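
A minimal sketch of that variable cost, using a textbook one-capacitor diode clipper solved per sample with Newton-Raphson (generic component values, not any specific product's model): a quiet input converges in a few iterations, a hard-clipping one takes many more.

Code:
import math

R, C = 2.2e3, 10e-9          # series resistor and capacitor
Is, Vt = 1e-15, 0.026        # diode saturation current, thermal voltage
fs = 96000
T = 1.0 / fs

def clip_sample(v_in, v_prev, tol=1e-9, max_iter=50):
    # backward Euler state update solved iteratively; cost depends on the signal
    v = v_prev                                   # previous output as initial guess
    for n in range(1, max_iter + 1):
        g = v - v_prev - T * ((v_in - v) / (R * C) - (2 * Is / C) * math.sinh(v / Vt))
        dg = 1.0 + T / (R * C) + T * (2 * Is / (C * Vt)) * math.cosh(v / Vt)
        step = g / dg
        v -= step
        if abs(step) < tol:                      # converged
            return v, n
    return v, max_iter

_, iters_quiet = clip_sample(0.01, 0.0)          # small signal: a few iterations
_, iters_loud = clip_sample(5.0, 0.0)            # hard clipping: many more
print(iters_quiet, iters_loud)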

DSPs certainly have their advantages, but this isn't one of them. I'd be very surprised if the Solaris did anything like circuit simulation; as far as I know it was developed well before that sort of thing was available in software. I expect its computational requirements are more on the level of the Korg Radias, Novation Ultranova, or just a decent sounding but not cutting edge VST.
Old 1st November 2016
  #21
Lives for gear
 

I haven't gotten into using software instruments on my Mac yet, but I'm about to because I'm getting NI Komplete.

I've heard that the CPU load can be very high when using software instruments, so surely, given this, a dedicated hardware CPU would produce better-sounding results because it's only focusing on one thing? But I may be wrong.
Old 1st November 2016
  #22
Gear Guru
 
zerocrossing's Avatar
Quote:
Originally Posted by acreil View Post
Synclavier V or Roland's D50 card for the V-Synth. But what you consider a significant difference depends on how picky you are.



No, that's an explanation for why your computer can't achieve latency comparable to hardware. The audio has to be buffered because it's a multitasking operating system. When you're running a DAW, the vast majority of the CPU power is going to audio processing, unless you're playing games and encoding video files and doing 3D modeling at the same time.



The thing you're apparently not getting is that standard VAs aren't in any way comparable to Diva or Reaktor or whatever. If you ported an early VA like the Nord Lead 1 or JP-8000 or MS-2000 or whatever to software, probably you could get 1000 note polyphony or better. Newer hardware VAs are obviously more sophisticated, but they still wouldn't be CPU killers the way the latest software is. Calculating modulation, etc. at audio rate, running everything at a high sample rate, taking measures to reduce aliasing, etc. isn't really a big deal now.

Stuff that uses circuit simulation techniques is far, far more computationally demanding. It's a fairly recent development (at least for real time audio processing), and I expect that aside from Roland's ACB stuff and some amp simulators, it's almost exclusively the domain of software. A major reason is that the computational requirements aren't constant. The circuit equations are solved iteratively until they converge upon a solution. Depending on the input and the state of the circuit, the number of iterations can vary over a wide range. This works a lot better with buffered audio (i.e. software). If you're trying to squeeze the best performance out of the DSP in a hardware VA, you want each voice to use a fixed number of instructions. Anything that requires a variable number of instructions to compute has to be given enough room for the worst case. So most of the time, a large portion of the instruction cycles will be wasted.

DSPs certainly have their advantages, but this isn't one of them. I'd be very surprised if the Solaris did anything like circuit simulation; as far as I know it was developed well before that sort of thing was available in software. I expect its computational requirements are more on the level of the Korg Radias, Novation Ultranova, or just a decent sounding but not cutting edge VST.
I think you're way off on that. I think you have no idea what the Solaris is or what it does. If you've heard it and think it's remotely in the same league as the Radias or Ultranova, you might want to get your ears checked. I don't mean that the Radias and Ultranova can't do some cool sounds, but I've listened to tons of demos of all the synths mentioned and to me the Solaris sounds better by a very long shot. Better than any equivalent software... though for the life of me I can't really think of an equivalent software instrument. Tell me about a 10-voice synth that's got this feature set at, say, the quality of Diva:

Solaris Specifications

Ask yourself this question as well. Why hasn't there been a really good Jupiter 8 emulation? Don't say Diva. It doesn't really cut it. Now we've got the new Jupiter-8 Plug Out. I think it sounds great. As good as the monophonic Plug Outs, which are, IMO, probably among the best analog modeling synths available and definitely a step up from Diva. But why no VST version of the Jupiter-8 Plug Out? To get more people to buy the System-8? Maybe, though they're leaving a ton of money on the table for people like me who don't have room for another keyboard. Well then, why hasn't someone else stepped up and done an emulation as good? All the answers I can think of point to the fact that to get quality on par with the previous Plug Outs, or even Monark or Legend, they'd need to commandeer nearly your entire i7, which no software manufacturer wants to do because they know you're also running at least half a dozen other plug-ins and want to keep your buffer down to 128 samples or less.

One plug-in that I do think is in the neighborhood of the sound quality and features of the Solaris is MPowerSynth. (Though not a 1:1; I'm just saying it's as feature-rich.) To get it at that quality you've got to run it at at least 3x oversampling at its best render settings. At that point, if you're looking for a decent voice count, you'd better have some hardware synths as well or be prepared to freeze that track if you want some other instruments going on.
Old 1st November 2016
  #23
Gear Guru
 
zerocrossing's Avatar
Quote:
Originally Posted by fusionid View Post
I have Eurorack oscillators from different brands. I did a side-by-side comparison between the Bass Station II and Monark at 96 kHz internal. I could not tell them apart. That was the moment I sold the BSII.
That's one of the silliest things I've read in a while. Don't get me wrong, I'm a fan of Monark, but a synth is more than its oscillators. The Bass Station II is a profoundly different sounding instrument and there is nothing in the software world that sounds remotely like it. I have never, ever heard any software synth that does feedback (route the headphone out into the audio input) like it. I'm not saying that you have to own it, or even like it, but I personally find it's well worth having even with Monark, Legend and 4 other analog monos.
Old 1st November 2016
  #24
Lives for gear
 
fusionid's Avatar
 

I never said the feedback, overdrive, FM or filter sweep in the BSII sounds the same as Monark. That would be silly; it doesn't. It's not even supposed to sound like it.

OP said,

Quote:
To my ears, digital synths, both Eurorack digital modules like Braids and Piston Honda and all-in-one hardware synths like the Sledge and Virus, sound better than most VSTs. This is interesting to me and I'm wondering why this could be.
He said nothing about feedback, overdrive, FM or filter sweep. If someone posted a quote saying "OMG, Reaktor does this just like hardware" I would be like, that's silly.

While I sold the BSII, I own 4 analog Eurorack oscillators.
Old 1st November 2016
  #25
Lives for gear
 
acreil's Avatar
 

Quote:
Originally Posted by zerocrossing View Post
I think you're way off on that. I think you have no idea what the Solaris is or what it does. If you've heard it and think it's remotely in the same league as the Radias or Ultranova, you might want to get your ears checked. I don't mean that the Radias and Ultranova can't do some cool sounds, but I've listened to tons of demos of all the synths mentioned and to me the Solaris sounds better by a very long shot. Better than any equivalent software... though for the life of me I can't really think of an equivalent software instrument. Tell me about a 10-voice synth that's got this feature set at, say, the quality of Diva:
Your opinion of the sound quality has little to do with the number of operations it takes to compute the sound (otherwise we'd just throw out all the D50s and Wavestations and PCM70s and everything because they're computationally trivial compared to modern stuff). Most of the features of the Solaris don't add a lot to the computational cost. Aside from the effects, it's mainly the oscillators, filters and ring modulator. And filter algorithms that were considered state of the art at the time the Solaris was designed are now lightweight and cheap compared to newer ones that model analog filters more accurately. Which one you subjectively prefer is an entirely different matter. The new stuff is a CPU killer because it behaves more like an actual circuit, not because you should like it better.

I'm not replying to you again.
Old 1st November 2016
  #26
Lives for gear
 

Quote:
Originally Posted by zerocrossing View Post
I've maxed my i7 3.4 out @ 96 kHz doing single voice block patches that weren't too complex.
The "Blocks" in Reaktor are extremely inefficient in terms of CPU usage. Some reasons, why performance is terrible:

- Reaktor only uses one core. "Maxing out" your i7 therefore only uses about 25% of the available processing power...

- Reaktor does not really work well with dynamic speed adjustments of the CPU (for power saving, thermal management, etc). I can run 50% more voices at half the buffer length when I lock my clock rate.

- Reaktor mostly uses all the fancy single instruction multiple data stuff that modern CPUs do for multiple voices that get processed in parallel. It is not a huge difference (pretty hard to optimize stuff, since the user can do quite crazy stuff), but additional voices only seem to add 60-70% of the CPU load of the first voice in most cases. Obviously it gets cheaper, when some stuff (effects) stays in mono.

- Blocks do an absurd amount of (potentially unneeded) audio rate multiplications. If you connect both modulation buses of a block you will end up with 2 audio rate multiplications for EACH knob you could potentially control.

- not sure, but some blocks seem to be absurdly CPU intensive. One day I will find out why...

If you move closer to a "fixed" architecture (e.g. forbidding voice interaction between different voices, make sure all your voices use the same routing, have fixed modules that you can use etc), you can do A LOT of optimization. Using multiple cores and SMID could give you 8-32 times the processing speed (depending on the precision used and integer vs. float use).
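
A rough illustration of that fixed-architecture idea, using numpy broadcasting as a stand-in for hand-written SIMD (all numbers arbitrary): when every voice shares the same routing, a whole block of all voices can be rendered in a handful of array operations.

Code:
import numpy as np

# Sketch only: 16 sine "voices" rendered one 64-sample block at a time.
# Because every voice does exactly the same work, the per-sample math runs
# over an array of voices instead of a per-voice loop, which is the data
# layout that SIMD units (numpy here, standing in for intrinsics) like best.
fs = 48000
block = 64
n_voices = 16
freqs = 110.0 * 2.0 ** (np.arange(n_voices) / 12.0)    # one frequency per voice
phases = np.zeros(n_voices)                            # per-voice phase state (cycles)

def render_block(phases, freqs):
    t = (np.arange(block)[:, None] + 1) / fs           # (block, 1) time offsets
    out = np.sin(2 * np.pi * (phases + freqs * t))     # (block, n_voices) in one shot
    new_phases = (phases + freqs * block / fs) % 1.0   # carry phase into the next block
    return out.sum(axis=1), new_phases                 # mix the voices, keep state

mix, phases = render_block(phases, freqs)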
Old 1st November 2016
  #27
Lives for gear
 

A gentle reminder that the Korg Kronos -- which is, basically, every kind of synthesis you could ever want, in one keyboard, the last of the Great Workstations -- is, ultimately, a pretty old and slow PC running Linux integrated with the interface.

Any i7-based (aka "modern" since around 2011) PC is multiples faster than that.

The bias towards any hardware digital over software in terms of sonic quality is purely arbitrary and psychological, definitively NOT technical, except maybe in the sense that older technologies, because they're familiar, sometimes catch the ear in distinct ways that out of habit are considered "pleasing."

This debate went on endlessly in the then-existent Clavia Nord Modular forum, when the G2 came out and the G1 wasn't so old most people had forgotten about it. Long-seasoned G1 experts, like Rob Hordijk (inventor now of the Rungler, among other fascinating modular contraptions!), Chet Singer, Marko Ettlich, a host of others of that level of expertise and musical sensitivity, almost universally complained that the G2 didn't sound "as good as" the G1. Possibly because it was using newer-generation DSPs, instead of Ye Olde Motorola 56303s, which everyone knows have as classic a digital sound as Curtis chips do analogue?.... [the 56303s are in virtually every digital synth made between '96-2002, well except for the larger Japanese companies using proprietary self-created hardware or, honestly, early '90's Roland stuff which -- I kid you not! -- used the same CPU used in the Sega Genesis game system!].

That anorak/grognard rejection of the G2 basically helped kill the market for any future Nord Modulars, I suspect. Not intentionally -- there was/is a difference in sound, due in part to the hardware, in part to the increase in quality and precision of the technology available, in part due to Clavia's research pursuit of advanced FM possibilities with the G2 instead of simply recreating past success.... lots of factors.

ANYWAYS, the point is mainly that there ARE differences in different digital hardware, but almost any even half-competent software made these days is light-years ahead, in terms of both resources available and software circuitry design subtleties, of anything made even three-four years ago.

And, ultimately, if it's digital.... it's all software.

A/D-D/A converters have -- even the cheapest -- long ago achieved "Burr Brown" levels of quality, to the point that they're not a factor in sound any more. I suspect a lot of people use crap sound cards in their PCs, or have lousy monitoring, and thus judge the output accordingly.

Sampling rates above 44.1 kHz mostly matter for the internal mathematics of digital circuit processing; when a lot of numbers get passed at high speed through a lot of calculations, especially nonlinear stages and fast modulation, small numerical errors and aliasing products can add up and produce barely-detectable noise or coloration in the final output -- I'm talking internally in the software calculation process, long before anything goes to an output of the software circuit -- and that's why running at 48 kHz or 96 kHz internally, or oversampling, can produce detectable sonic differences. But in terms of real-world needs? Except for pseudo-scientific GS "findings," irrelevant, even there.
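
A quick sketch of that oversampling point (scipy handles the resampling; the 4x factor and the hard clipper are arbitrary choices, not anyone's product design): clipping a 5 kHz sine directly at 44.1 kHz folds the distortion harmonics back into the audible band, while clipping at 4x and filtering back down largely avoids that.

Code:
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = 1.5 * np.sin(2 * np.pi * 5000 * t)

direct = np.clip(x, -1.0, 1.0)                        # harmonics alias back below Nyquist

up = resample_poly(x, 4, 1)                           # 4x oversample
oversampled = resample_poly(np.clip(up, -1.0, 1.0), 1, 4)

win = np.hanning(fs)
alias_bin = 9100                                      # 35 kHz harmonic folded to 9.1 kHz
a = np.abs(np.fft.rfft(direct * win))[alias_bin]
b = np.abs(np.fft.rfft(oversampled * win))[alias_bin]
print("fold-back reduced by roughly", 20 * np.log10(a / b), "dB")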

Reaktor Blocks tend to be very CPU-intensive because, I believe, they were originally built with Core, which, as has been stated, is not multi-threaded in its design and would be (no idea where NI is in thinking about doing this) an absolute monster to redesign to be multi-threaded across multiple processors. At root, Core trades computational efficiency for flexibility, exposing things to end users down to the z^-1 single-sample level for DSP circuit design emulation (without needing reference to SPICE). The number of people on the planet interested and competent enough to actually use Core effectively could probably fit comfortably into a studio apartment in downtown Berlin, so it's not a very profitable idea to spend lots of development time on making things multi-threaded for those six people.....

Open up Razor sometime in a full version of Reaktor, and drill down to all the component pieces. You will cry. It makes the average circuit design for a synth look like child's play by comparison. Any of NI's softsynths released since Razor tend to have similar levels of complexity; a lot of knowledge and insight and foundation has been built up over twenty years within the Reaktor community and library, and in NI's own independent research, that provides the basis for more recent softsynth output.

BTW.... inside NI's Monark, my fave name for a module of ALL TIME! is the "Mix&BlubBlub Enhancer" in the circuit.

I think that's the magic of Moog; it's Blub Blub Enhancement, clearly!

So.... there's all that.

Last edited by realtrance; 1st November 2016 at 02:31 PM..
Old 1st November 2016
  #28
Lives for gear
 

Quote:
Originally Posted by bug2342 View Post
The "Blocks" in Reaktor are extremely inefficient in terms of CPU usage. Some reasons, why performance is terrible:

- Reaktor only uses one core. "Maxing out" your i7 therefore only uses about 25% of the available processing power...

- Reaktor does not really work well with dynamic speed adjustments of the CPU (for power saving, thermal management, etc). I can run 50% more voices at half the buffer length when I lock my clock rate.

- Reaktor mostly uses all the fancy single instruction multiple data stuff that modern CPUs do for multiple voices that get processed in parallel. It is not a huge difference (pretty hard to optimize stuff, since the user can do quite crazy stuff), but additional voices only seem to add 60-70% of the CPU load of the first voice in most cases. Obviously it gets cheaper, when some stuff (effects) stays in mono.

- Blocks do an absurd amount of (potentially unneeded) audio rate multiplications. If you connect both modulation buses of a block you will end up with 2 audio rate multiplications for EACH knob you could potentially control.

- not sure, but some blocks seem to be absurdly CPU intensive. One day I will find out why...

If you move closer to a "fixed" architecture (e.g. forbidding voice interaction between different voices, make sure all your voices use the same routing, have fixed modules that you can use etc), you can do A LOT of optimization. Using multiple cores and SMID could give you 8-32 times the processing speed (depending on the precision used and integer vs. float use).
...and a simple reminder for those even less computationally literate than I am: multiplication has traditionally been a more expensive operation in computing than addition...
Old 1st November 2016
  #29
Lives for gear
 
evosilica's Avatar
 

Quote:
Originally Posted by acreil View Post
Stuff that uses circuit simulation techniques is far, far more computationally demanding. It's a fairly recent development (at least for real time audio processing), and I expect that aside from Roland's ACB stuff and some amp simulators, it's almost exclusively the domain of software.
The inaccurate and simplified accent behaviour of the TB-03 raises doubts about whether Roland's ACB is actually component-level circuit simulation or simply marketing.
In my humble opinion it's the latter.
Old 1st November 2016
  #30
Lives for gear
 
Gnalvl's Avatar
 

Quote:
Originally Posted by biggator6 View Post
Can you offer some examples where the VST differs significantly?
lol, try Komplexer vs. Micro Q. It actually imports raw Micro Q sysex but sounds totally different.

Other than that, I agree that most digital hardware doesn't differ that much from VSTs actually intended to emulate it. I think the problem is more that very few VSTs actually aim to emulate specific digital hardware. Consequently, people wind up buying certain VSTs hoping they can replace a piece of digital hardware they owned, and just wind up disappointed.

For example Omnisphere vs. Roland romplers, or Dune 2 vs. the Access Virus or the Microwave II/XT. In each case the VST fills a similar role and may have sonic similarities, but if you're expecting exact emulations, you're going to be let down.