Avid HDX Benefits?
Old 6th October 2017
  #31
Gear Nut
 

It should be noted that screaming-fast PCs with lots of PCIe slots are relatively cheap to build, and they work well with PT HD/HDX under Windows, which avoids the hassles of Apple hardware.

Quote:
Originally Posted by RyanC View Post
Not trying to nitpick here, but even the trashcans are dated or have some reason why they don't perform as well as hacks for low-latency audio. The best I've seen on the Logic audio benchmark is around 250 tracks; my machine maxes out at 238 tracks at 88.2 with a 32-sample buffer, which in my tests does scale to double the tracks at half the buffer, so the equivalent of 468 tracks at a 16-sample buffer. It's just playback in Logic there, so not a real stress of the buffer...

In any case nobody using a real mac has experienced a state of the art native machine IMO since about 2012.
Old 12th September 2019
  #32
Gear Addict
 
Zoot's Avatar
Old thread, I know. I just don't want to start a new one.

I have been using an HDX system in a 2012 Mac Pro 12-core since 2012. I still run Mavericks 10.9.5. It's a rock-solid system with the drive bays filled with SSDs managed with ChronoSync. After all these years, HDX is still very impressive for both tracking and mixing. It can't seem to handle a 64-sample buffer when a lot of native plugins are stacked, but it works flawlessly with DSP plugs while tracking.

The reason I am bumping this thread is because I'm curious as to what the future might hold. Do you think that Avid's Pro Tools/HD/HDX will always be top dog and the only near-zero-latency tracking system on the market? I am a little blown away that UA and Waves haven't put forth such a system with their own processors and daw. I know that designing a low latency DAW system is no simple feat, but I wonder what the options will be in the future.

My converters send out AES/EBU and are very low latency, so theoretically I could get any system in the future that accepted AES/EBU that offered near-zero latency monitoring... but will there ever be one? I love Pro Tools. I've used it since PT5 and have stayed at PT11 since 2012. No plan on ever switching, but if there were another option that offered the unparalleled tracking/monitoring with plugins, I would probably consider it.
Old 12th September 2019
  #33
Motown legend
 
Bob Olhsson's Avatar
 

HDX is still very current because it's the center of Avid's consoles.
Old 13th September 2019
  #34
Quote:
Originally Posted by Zoot View Post
Old thread, I know. I just don't want to start a new one.

I have been using an HDX system in a 2012 Mac Pro 12 core since 2012. I still run Mavericks 10.9.5. It's a rock solid system with the drive bays filled with SSDs managed with chronosync. After all these years, HDX is still very impressive with both tracking and mixing. It can't seem to handle 64 sample buffer when a lot of native plugins are stacked, but it works flawlessly with DSP plugs while tracking.

The reason I am bumping this thread is because I'm curious as to what the future might hold. Do you think that Avid's Pro Tools/HD/HDX will always be top dog and the only near-zero-latency tracking system on the market? I am a little blown away that UA and Waves haven't put forth such a system with their own processors and daw. I know that designing a low latency DAW system is no simple feat, but I wonder what the options will be in the future.

My converters send out AES/EBU and are very low latency, so theoretically I could get any system in the future that accepted AES/EBU that offered near-zero latency monitoring... but will there ever be one? I love Pro Tools. I've used it since PT5 and have stayed at PT11 since 2012. No plan on ever switching, but if there were another option that offered the unparalleled tracking/monitoring with plugins, I would probably consider it.
Presonus have tried it up to a point; Apogee and Logic are also fairly tightly integrated. But neither has really developed true low latency hardware with DSP that makes native buffers irrelevant when tracking.

Anything that adds a cue mixer adds a layer of complication that immediately puts it down a league compared to proper integration. So yes - UAD or Antelope or someone needs true low-level integration with a software manufacturer to get that same immediacy. Console monitoring built into Cubase, for example.

Trouble is, all these manufacturers have their own hardware so you’re going to be struggling there. Apple don’t of course but that’s only half the potential sales.

I love UAD for tracking vocals; in a way I prefer it to HDX. But as soon as you get multiple mics, it's a pain.

So you need to get all that, make it BETTER than PT to encourage people to switch, and at a decent price point. It's hard! All those people saying "PT isn't the standard anymore" are clueless - sure, there are fewer "real" studios than ever and more people making professional music on their own terms/in their own spaces, but that doesn't mean it's changing in the real studio world.

And as Bob says - you need proper integration with decent control surfaces to take over the high end, and if you don’t have that, you won’t take over the mid area from people who go back and forth with the high end either!
Old 13th September 2019
  #35
Lives for gear
 
deuc647's Avatar
I do have to say that HDX is WAY more stable than anything I've used. I loved Cubase and honestly I could be just as happy with it, but what pulled me back into DSP PT is the control surface integration. All the others seem like they are trying to be as integrated as possible, whereas the C|24 lets me use the controller for about 95% of everything I need without having to look at the computer monitor, which makes mixing so much more enjoyable. But HDX is super stable and lets me extend the shelf life of my i7 2600K Sandy Bridge PC.

Old 14th September 2019
  #36
Lives for gear
I think it would be really interesting if there were more hard science on sensitivity to sub-10ms monitoring latency.

For me a lot of the anecdotal stories are about comb filtering, and I find this interesting because comb filtering is quite noticeable even at 2ms (open a project, slap Mod Delay III on the master, set it to 2ms on both sides and mix to 50/50). You can clearly hear it even at 1ms (first null at 500Hz). While they sound different in tone, to me 3ms and 8ms both sound about equally comb-filtery, i.e. I don't think I would really choose one over the other if that was the monitoring choice... Subjectively, on the project I opened, I thought 7ms sounded least objectionable (probably luck of the draw given the key of the song and the nulls in the comb filter).

But I do respect that a lot of great engineers have experiences where getting lower has solved problems for them... It's interesting reading around, because some people swear by analog-only for the cues. This makes more sense to me than the difference between 3ms and 8ms, at least from the comb-filter perspective... It would be cool if there were an actual AES test, double-blind, with a large sample size, on actual sensitivity to the delay. I wouldn't be surprised to find that some people on the extreme end of the bell curve could pick out the difference between analog, 3ms and 8ms. But then I would also want to know whether spatial emulation like the Klang products, or even a tight reverb with ERs, would lower those people's ability to detect the difference.
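The null math behind those numbers is easy to sketch: mixing a signal 50/50 with a delayed copy puts nulls at odd multiples of 1/(2 x delay), which is where the 500Hz figure for 1ms comes from. A minimal illustration (not from the thread, just the standard comb-filter formula):

```python
# Comb-filter null frequencies when a dry signal is mixed 50/50 with a
# delayed copy: nulls fall at odd multiples of 1/(2 * delay).
def comb_nulls(delay_ms, max_hz=20000):
    delay_s = delay_ms / 1000.0
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > max_hz:
            break
        nulls.append(round(f, 1))
        k += 1
    return nulls

print(comb_nulls(1.0)[:3])  # first null at 500 Hz, as noted above
print(comb_nulls(2.0)[:3])  # first null at 250 Hz
```

Shorter delays push the first null up in frequency but space the nulls further apart; longer delays stack more nulls into the audible band.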

I always liked the idea of analog only for being clearly the best of all (eliminating any chance of an issue at least), but it's been hard for me to make it work from a workflow perspective. An amazing solution would be a daw maker that put out a digitally controlled modular 500 series type rack with an integrated analog mixer/router that works with the DAW. Something like that could be capable of all analog in the cue path, printing dry tracks, and playback/mixing running back through the outboard and storing it all for automation and recall.
Old 14th September 2019
  #37
Motown legend
 
Bob Olhsson's Avatar
 

I favor analog-only simply because many of us found that obtaining a decent take took much longer when people were hearing themselves through TDM.

A little $25 Nady completely solves the latency problem!
Old 14th September 2019
  #38
Gear Guru
 
Brent Hahn's Avatar
 

Quote:
Originally Posted by Bob Olhsson View Post
I favor analog-only simply because many of us found that obtaining a decent take took much longer when people were hearing themselves through TDM.

A little $25 Nady completely solves the latency problem!
I've found the opposite a few times: people are so accustomed to that tiny bit of TDM latency that when they hear themselves without it, it seems foreign and uncomfortable. Also, with TDM, using a Trim plug to flip polarity on the monitor side of the channel(s) being recorded seems to give vocalists wearing headphones a subjectively fuller sound, even though you won't hear a difference on speakers.
Old 14th September 2019
  #39
Motown legend
 
Bob Olhsson's Avatar
 

We always needed to get the headphone polarity right back in analog days too.
Old 4 weeks ago
  #40
Lives for gear
 
nukmusic's Avatar
 

Interesting topic to bump.

I have been wondering how newer/more powerful computers play (or will play) into the equation, given that they should be able to do more at lower buffer settings.
Old 4 weeks ago
  #41
Lives for gear
 

If there were an AAX DSP version of Auto-Tune I would be set for life. Honestly there would be no reason to use anything else for tracking.

ej
Old 4 weeks ago
  #42
Quote:
Originally Posted by nukmusic View Post
Interesting topic to bump.

I have been wondering how newer/more powerful computers play(will play) into the equation being that they should be able to do more at lower buffer settings.
Even a proper trip through the software at 32 samples is “worse” than dsp use. And you’d need a computer powerful enough to run that buffer setting at all times - because right now you can have a mix straining with native plugs and still punch in at low latency over the top.

Quote:
Originally Posted by ejsongs View Post
if there were an aax dsp version of auto tune i would be set for life. Honestly there would be no reason to use anything else for tracking.

ej
Well, there isn't really now - AT native on its low-latency setting with a low native buffer is still pretty good, and given the nature of singing through Auto-Tune, it's one situation where a small latency isn't an issue.

I agree though - would it kill them to make an AAX version of Auto-Tune Live? If they can code it for UAD, surely the HDX market is big enough?! EVERY studio would buy it!
Old 4 weeks ago
  #43
Lives for gear
Quote:
Originally Posted by psycho_monkey View Post
Even a proper trip through the software at 32 samples is “worse” than dsp use. And you’d need a computer powerful enough to run that buffer setting at all times - because right now you can have a mix straining with native plugs and still punch in at low latency over the top.
From what I've gathered from @ProPower, who's measured with a function generator:

At 44.1/48k, HDX is 1.9ms with only an audio track, 2.5ms with one send and submix, and ~10-100 samples per plugin monitored through, so depending on plugins 2.7ms to 4.7ms. HDN is 3.35ms at the 32-sample buffer, and most plugins add 0 samples.

At 96k, HDX is 0.45ms with only an audio track, ~1ms with one send and bus, with plugins adding from 0.1ms up to another 1ms, so 1.1ms to 2ms. HDN is 1.8ms at a 64-sample buffer.

But HDX also increases RTL with more auxes and busses... not by a lot, but at these margins even a 0.5ms increase is significant. Adding submixes and auxes has no effect on RTL for native. Maybe ProPower will chime in with more details.

Of course it's true that you have to run the lowest buffer settings, but this is no issue on my 7980XE systems if the sessions don't have VIs, or if you freeze the VIs. I wish there were an option to have double or even quad buffer settings for VIs in PT.

It is true that CPU use goes up dramatically when you go from the 128 buffer to 64; from 64 to 32 isn't as dramatic, so there is some threshold there (at 44.1/48). From what I can see tinkering with 2CAudio Breeze, support for AVX-512 seems to make a solid improvement there, i.e. less of a jump going from 128 to 64 and then 32. I wasn't able to test it before and after, though, so this is a bit of conjecture. I will say that at the 32-sample buffer plugins have to be coded well; some plugins, in spite of having plenty of CPU overhead, will not run cleanly at the lowest buffers.
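As a sanity check, the native figures above fit a simple buffer-accounting model: one input buffer, one output buffer, plus a fixed converter/driver overhead. Back-solving from the quoted HDN number gives roughly 1.9ms of overhead at 44.1k; that figure is just an illustrative assumption fitted to this thread's numbers, not a spec for any particular interface:

```python
# Rough native RTL model: input buffer + output buffer + fixed
# converter/driver overhead. The 1.9 ms default is an assumed placeholder
# fitted to the figures quoted in this thread, not a measured spec.
def native_rtl_ms(buffer_samples, sample_rate, overhead_ms=1.9):
    buffer_ms = buffer_samples / sample_rate * 1000.0
    return 2 * buffer_ms + overhead_ms

# 32-sample buffer at 44.1k: ~0.73 ms per buffer leg plus overhead
print(round(native_rtl_ms(32, 44100), 2))  # 3.35, matching the HDN figure above
```

The overhead term varies with interface and sample rate, which is why the same arithmetic doesn't transfer directly to the 96k numbers.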
Old 4 weeks ago
  #44
Lives for gear
It's getting to be a while since I had HDX, but RyanC looks mostly correct.

1ms tracking on HDX at 96kHz with careful plugin choices is totally doable. I have only tracked at 96kHz for the last 5 years, for RTL reasons, so I can't comment on how long one can maintain a 32 buffer at 44.1.

At 96kHz I find a 32 buffer (not available in PT) very hard to sustain, and virtually every high-quality reverb throws in pops and clicks. In S1 it is close to working, but still not there. Buffer 64, same result. It only gets good at buffers of 128-256. RTL is then in the >3ms range - that doesn't work for me.

With native I rely on interface DSP - today I have both Antelope and Apogee. Both deliver <0.5ms RTL at 96kHz. Plugins are limited to what they support for this function. The Apogee version is super limited on plugins (and forever in beta), but I have a working workflow with Logic that is as close to HDX as I have found.

All that said - these DSP-assist workflows are really only good for single monitor mix situations. Once multiple monitor mixes are needed, I would wish for HDX above all others...
Old 4 weeks ago
  #45
Lives for gear
Quote:
Originally Posted by ProPower View Post
With Native I rely on Interface DSP - today have both Antelope and Apogee. Both deliver <0.5ms RTL at 96kHz.
That's measured through the converters? Did you ever try all analog? And are you in the comb filtering camp?

I still find the comb-filtering position interesting, considering that if the performer is even 1' from the mic, then even with 0.5ms RTL for native that's ~1.4ms of total latency - which is still significant comb filtering, with the first null starting at ~350Hz.

I'm not challenging anything so much as trying to make sense of it. Maybe with acoustic guitar that bit of latency actually helps time-align with the acoustic delay from the guitar to the ears.

Open or closed back headphones?
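The distance arithmetic above can be checked directly, assuming 343 m/s for the speed of sound (exact figures in the thread may use slightly different assumptions):

```python
# Total monitoring delay = acoustic flight time from mouth to mic plus
# electronic RTL; the first comb null of a 50/50 dry/monitor mix sits
# at 1/(2 * total delay).
SPEED_OF_SOUND_M_PER_S = 343.0

def first_null_hz(mic_distance_ft, rtl_ms):
    acoustic_ms = mic_distance_ft * 0.3048 / SPEED_OF_SOUND_M_PER_S * 1000.0
    total_ms = acoustic_ms + rtl_ms
    return round(total_ms, 2), round(1000.0 / (2.0 * total_ms))

print(first_null_hz(1.0, 0.5))  # 1' from the mic with 0.5 ms RTL
```

At one foot with 0.5ms RTL this gives roughly 1.4ms total and a first null around 360Hz, in line with the ~350Hz estimate above.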
Old 4 weeks ago
  #46
Lives for gear
Also, FWIW, I can run a 32-sample buffer at 44.1 fairly reliably on the 7980XE hacks; Abbey Road Plates, Breeze, Pro-R all work no problem... Actually the biggest problem with PT is that it makes glitchy noises when you stop, but playback is good, with the exception of sessions with a lot of VIs.

It will be interesting to see what the new Mac Pros are capable of on the native side.

I don't personally have any issues with even 5-10ish ms. Maybe it's because I'm a piano player and even the action of a piano has 4-5ms of lag before the hammer hits the string...
Old 4 weeks ago
  #47
Lives for gear
Yes - I too struggle with the reality of physics and the numbers, in that I sing between 6" (rare) and 1.5 feet from the mic, which totally affects the total RTL (hardware + distance) and thus the comb filtering. In general I have few issues with 1ms RTL from the hardware. But it is so hard to get a full native system to run there. I am sure 44.1 is better from the computer's POV. Whenever I engage 2+ms from the hardware it is a big tonal change. Vocals are the most significant.

The 0.3ms I quote is analog to analog (measured with a scope and function generator) through the Apogee FPGA FX. Antelope is 0.3ms plus some for each added AFX.
Old 4 weeks ago
  #48
Quote:
Originally Posted by RyanC View Post
From what I've gathered from @ ProPower who's measured with a function generator-

@44.1/48 HDX is 1.9 with only an audio track, 2.5ms with one send and submix and ~10-100 samples per plugin monitored through, so depending on plugins 2.7ms to 4.7ms. HDN is 3.35ms at the 32 sample buffer and most plugins add 0 samples.

@ 96k HDX is .45 only an audio track, ~1ms with one send and bus and plugins adding from .1ms to another 1ms, so 1.1ms to 2ms. HDN is 1.8ms @64 buffer.

But HDX also increases RTL with more auxes and busses...not by a lot, but in these margins even an .5 ms increase is significant. Adding submixes and auxes has no effect on RTL for native. Maybe ProPower will chime in with more details.

Of course it's true that you have to run the lowest buffer settings, but this is no issue on my 7980xe systems if the sessions don't have VI's or if you freeze the VI's. I wish there was an option to have double or even quad buffer settings for VI's in PT.

It is true that CPU use goes up dramatically when you go from the 128 buffer to the 64, from 64 to 32 isn't as dramatic, so there is some threshold there (at 44.1/48). From what I can see tinkering with 2caudio breeze, support for AVX 512 seems to make a solid improvement there...IE less of a jump going from 128 to 64 and then 32. I wasn't able to test it before and after though so this is a bit conjecture. I will say that at the 32 sample buffer plugins have to be coded well, some plugins, in spite of having plenty of CPU overhead, will not run cleanly at the lowest buffers.

Quote:
Originally Posted by ProPower View Post
Its getting to be a while since I had HDX but Ryan C looks mostly correct.

1ms tracking on HDX at 96kHz with careful plugin choices is totally doable. I have only tracked at 96kHz for last 5 years for for RTL reasons so can't comment on how long one can maintain 32 buffer at 44.1.

At 96kHz I find 32 buffer (not available in PT) very hard to sustain and virtually every high quality Reverb throws pops and clicks in. In S1 it is close to working but still not. Buffer 64 same result. Only gets good with buffer 128 - 256. RTL is then in the >3ms range - doesn't work for me.

With Native I rely on Interface DSP - today have both Antelope and Apogee. Both deliver <0.5ms RTL at 96kHz. Plugins are limited to what they support for this function. With the Apogee version super limited on plugins (and forever in Beta) but I have a working workflow with Logic that is as close to HDX as I have found.

All that said - these DSP assist workflows are really only good for singe monitor mix situations. Once multiple monitor mixes are needed - I would wish for HDX above all others...

Quote:
Originally Posted by ProPower View Post
Yes - I too struggle with the reality of physics and the numbers in that I sing between 6" (rare) and 1.5 feet from the Mic which totally effects the total RTL (Hardware + distance) and thus the comb filtering. In general I have little issues with 1ms RTL from the hardware. But it is so hard to get a full Native system to run there. I am sure 44.1 is better from the computer POV. Whenever I engage 2+ms from the hardware it is a big tonal change. Vocals is the most significant.

The 0.3ms I quote is analog to analog (measured scope and Function Generator) through the Apogee FPGA FX. Antelope is 0.3 + some for each added AFX.
All that sounds realistic - and I totally agree that the cue mixer workflow (I like UAD best) works great for single-source overdubs with one monitor mix. As soon as you've got multiple performers or a large mic setup, it's a nightmare by comparison!
Old 4 weeks ago
  #49
Lives for gear
 
nukmusic's Avatar
 

Quote:
Originally Posted by ProPower View Post
Yes - I too struggle with the reality of physics and the numbers in that I sing between 6" (rare) and 1.5 feet from the Mic which totally effects the total RTL (Hardware + distance) and thus the comb filtering. In general I have little issues with 1ms RTL from the hardware. But it is so hard to get a full Native system to run there. I am sure 44.1 is better from the computer POV. Whenever I engage 2+ms from the hardware it is a big tonal change. Vocals is the most significant.

The 0.3ms I quote is analog to analog (measured scope and Function Generator) through the Apogee FPGA FX. Antelope is 0.3 + some for each added AFX.

What computer did you run the latency tests on? Or should I ask whether you were able to run it on a 12-core system?
Old 4 weeks ago
  #50
Lives for gear
No 12 cores yet :-)

For RTL testing the computer is of little importance; the interface and DAW are the main players, and I use a function generator and oscilloscope for measurements (analog to analog). For running at low buffers (what I think you meant) I have used the following machines and systems over the last few years...

2013 Hex Core MP
2017 27" iMac (all models)
2019 iMac i9

Interfaces have been:
HDN, HDX, DigiGrid, ULN8, Antelope Orion 2017, Zen Tour and Apogee Ensemble TB. The list makes me wonder why I ever sold my 2009 MP and PCIe cards... LOL!
Old 4 weeks ago
  #51
Lives for gear
 
nukmusic's Avatar
 

LOL, thanks - and yes, I was leaning more toward real-world tests, I guess. I did know that the audio converters and software have more of an impact on the round-trip latency; I asked out of general curiosity.

Over the years I have seen many different people mention their "real world" experiences with low buffer settings without much focus on the overall power of their computer. I run a 2010 2.93GHz 12-core Mac Pro with SSD drives at a 64 buffer most of the time (HD Native with 2019 Ultimate). I would never expect the same results on, say, a 4-, 6-, or 8-core 2010 Mac Pro. Same goes for my quad-core MacBook Pro. When I do upgrade to a newer system with faster, more efficient CPU(s) I will expect better performance, especially at lower buffer settings.

I have been wondering how well that 18-core iMac Pro runs real-world Pro Tools sessions at the 64 buffer setting. My own system scores about 24,000 on Geekbench 4, and the 18-core iMac Pro is said to score about 48,000. That 2019 Mac Pro should be a beast at lower buffers, but time will tell. Geekbench doesn't tell you everything, but I feel it gives an overall idea when comparing systems. I'm still in the mindset of trying to double my CPU processing power in order to really notice an improvement.
Old 4 weeks ago
  #52
Lives for gear
Quote:
Originally Posted by nukmusic View Post
LOL Thanks and yes I was leaning more toward real world tests I guess. I did know about the audio convertors and software having more of an impact on the round trip latency. I asked out of general curiosity though.

Over the years I have seen many different people mentioned their "real world" experiences with low buffers settings without much focus on the overall power of their computer. I run a 2010 2.93 12core Macpro with SSD drives at a 64 buffer most of the time(HD Native with 2019 Ultimate). I would never expect to have the same results on say a 4, 6, or 8 core 2010 Macpro. Same goes for my Macbookpro quad core. When I do upgrade to a newer system with a faster, more efficient CPU(s) I will expect to have better performance especially at lower buffer settings.

I have been wondering how well that 18core iMacpro runs real world Protools sessions at the 64 buffer setting. My own system scores about 24000 on Geekbench 4 and the 18core iMacpro is said to score about 48000. That 2019 Macpro should be a beast at lower buffers but time will tell. Geekbench doesn't tell you everything but I feel it does give an overall idea when comparing systems. I'm still in the mindset of trying to double my cpu processing power in order to really notice an improvement.
My 7980XE hacks score ~70,000 in Geekbench (18 cores @ 4.4GHz). You can navigate a PT session fairly flawlessly at the 32-sample buffer, assuming no VIs and staying away from plugins that are poorly coded for low latency. But there is no denying that CPU usage and watts burned go up A LOT going from the 128 buffer to 64. 32 is actually not that much worse than 64 (at 44.1/48).

I think that maybe the bigger factor will be AVX-512. It's a SIMD instruction set from Intel that speeds up floating-point operations, which audio processing can use. AFAIK only 2CAudio Breeze supports it so far, and the difference is significant at lower buffers. It's only in the i9s and the newest Xeons (like the new Mac Pros).

But some smart pre-rendering would also go a long way. IE, I think Digital Performer has an option to pre-render tracks (automatically freeze them in the background), where only when you touch a parameter on a plugin does it revert that track back to live processing. This kind of thinking will go a long way IMO. Also, a VI only has half the software latency (only the output buffer), so a 2x buffer option for VIs would obviously help. My guess is that Avid will only take these sorts of steps if they decide that HDX is EOL and they don't do another DSP card.
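The VI point can be put in numbers: a VI pass only traverses the output buffer, so a hypothetical 2x VI buffer (not an actual PT feature, just the idea proposed above) would cost exactly one extra buffer of delay:

```python
# A virtual instrument only traverses the output buffer, so a separate
# (hypothetical) doubled VI buffer would add one extra buffer of latency.
def vi_latency_ms(buffer_samples, sample_rate, multiplier=1):
    return round(buffer_samples * multiplier / sample_rate * 1000.0, 2)

print(vi_latency_ms(32, 48000))     # at the session buffer
print(vi_latency_ms(32, 48000, 2))  # with a doubled VI buffer
```

At a 32-sample buffer and 48kHz that is about 0.67ms versus 1.33ms - inaudible for triggered instruments, while freeing the audio path to stay at the low setting.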

Last edited by RyanC; 4 weeks ago at 07:49 AM..
Old 4 weeks ago
  #53
Lives for gear
Quote:
Originally Posted by ProPower View Post
Yes - I too struggle with the reality of physics and the numbers in that I sing between 6" (rare) and 1.5 feet from the Mic which totally effects the total RTL (Hardware + distance) and thus the comb filtering. In general I have little issues with 1ms RTL from the hardware. But it is so hard to get a full Native system to run there. I am sure 44.1 is better from the computer POV. Whenever I engage 2+ms from the hardware it is a big tonal change. Vocals is the most significant.

The 0.3ms I quote is analog to analog (measured scope and Function Generator) through the Apogee FPGA FX. Antelope is 0.3 + some for each added AFX.
What is curious to me is that the same physics don't seem to affect us so negatively in actual acoustic situations - e.g. singing in the shower, where early reflections are probably coming in at 3-6ms to start.

Switching over to the science of studio design, people like Thomas @ Northward put a lot of work into a dense listener-to-listener ambience (not speaker to listener, but rather self-noise back to self) in an otherwise dead room (dead from a speaker-to-listener perspective, save the floor bounce). The theory is that we live almost all of our lives with early reflections (and floor bounce), and if we don't have them it puts us into an anxious state. Having some early reflections that are reasonably diffused eases the mind and opens up the ears. Same basic theory with Non-Environment control rooms.

This is interesting to me with regard to RTL because I wonder whether the problem is actually the delay, or the unnatural state of having a single specular reflection. Also, if the hypothesis is that it is comb filtering that bothers us, that's a problem related only to a pronounced specular reflection that isn't mitigated by other early reflections with different flight paths. IE, the difference between a shower and cans with RTL (plus acoustic latency) is mainly that the shower has myriad early reflections and the cans only have the one. Then we add reverb...

Something more thought-out in terms of space emulation, like the KLANG:fabrik, is really interesting to me. I think typical reverb presets with longer decay times naturally get mixed too low to properly emulate a dense ER field. So you get a specular reflection with a dense reverb field 10dB or more below the first reflection (or RTL + acoustic delay), and there would (could) still be comb filtering. But on the science side, there is PLENTY of comb filtering starting at just 1ms (voice-to-mic delay only) and quite a bit by 2ms total (including the acoustic latency plus any RTL)...

Really just thinking out loud here - it's hard for me to wrap my head around not being able to live with a couple of ms before the first reflection, when in reality we spend all of our lives outside of headphones like that.
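For scale, the shower numbers are easy to check: at an assumed 343 m/s, a reflected path 1-2 m longer than the direct path lands right in that 3-6ms window:

```python
# Extra flight time of an early reflection relative to the direct sound,
# assuming 343 m/s for the speed of sound.
def reflection_delay_ms(extra_path_m, speed_m_s=343.0):
    return round(extra_path_m / speed_m_s * 1000.0, 2)

print(reflection_delay_ms(1.0))  # ~1 m longer path
print(reflection_delay_ms(2.0))  # ~2 m longer path
```

That puts everyday acoustic reflections in the same delay range people report as objectionable in headphones, which supports the idea that it's the isolated single reflection, not the delay itself, that stands out.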
Old 4 weeks ago
  #54
Lives for gear
Very interesting posts ^^^^^

I was thinking about this last night and was just playing with a Flea 47. I set up my usual 0.3ms monitoring, then backed up 2 feet from the mic. While singing, I flicked the DSP on and off, effectively comparing the same plugins at 0.3ms vs ~3ms. The Apogee dual path makes this very easy, with one button in Logic: the same exact plugin settings running either on the FPGA or in the DAW. Of course, do they really sound the same... I am just going to assume yes today, LOL!

- The sound changed in a large way between the settings. The 0.3ms had a full and tight lower register and a clear high end. Engaging the DAW buffer (128 at 96kHz), the low end was much reduced and the high was thinner and brighter. The worst part was that I had to sing louder to hear myself vs. the DSP setting. It wasn't "bad", it was just very different.

I am going to work more with this today, but I also note that backing up to add the delay is totally unlike adding it artificially in the DAW. I am thinking your speculation on specular vs real ERs is the key....

Busy gig days now - more in a few days...
Old 4 weeks ago
  #55
Lives for gear
 
nukmusic's Avatar
 

How are you factoring in the proximity effect of the mic as well as the acoustical effects of the room?
Old 4 weeks ago
  #56
Lives for gear
I was able to A/B DSP vs DAW at each distance. The result was always audible: at 6", 1 foot, 1.5 feet and 2 feet it always favored DSP in the way I described. I was able to keep increasing the gain on the mic to still have a good signal level in the headphones. Will look at larger distances in the future :-)