9th August 2018

As far as running it on the desktop with headphones--makes total sense. We actually have been conceptualizing a similar goal to integrate an installation with live performance. I'm guessing you/we'll have to work around the limitations of whatever software you go with. I've never used SuperCollider but I'm guessing it could end up being great for this. In Unity, you'd either have to:

1) Work around the limitation of only one Audio Listener (it looks like there are assets to do this for split-screen games/etc. -- but eventually you'll hit a max number of users)
2) Build the same Unity "game/application," run a bunch of instances of it, and direct the audio from each out to a different audio output

Might not be easy--but Unity really is great for visually creating environments, doing all the distance/direction calculations and spatial audio processing, etc. Definitely where I'd start trying.
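To make the per-listener math concrete: whether you run one instance per user or work around the single-Audio-Listener limit, each listener needs its own distance attenuation and pan computed against the same source. Here's a rough sketch of that calculation in Python -- the function name, the 2-D layout, and the simple inverse-distance rolloff with equal-power panning are my own illustrative assumptions, not Unity's actual internals:

```python
import math

def listener_mix(source_pos, listener_pos, listener_forward):
    """Return (left_gain, right_gain) for one listener hearing one source.

    Positions and the forward vector are 2-D (x, z) tuples, top-down view.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = max(math.hypot(dx, dz), 1.0)  # clamp so gain can't blow up at 0
    attenuation = 1.0 / dist             # simple inverse-distance rolloff

    # Angle of the source relative to where this listener is facing.
    source_angle = math.atan2(dx, dz)
    facing_angle = math.atan2(listener_forward[0], listener_forward[1])
    azimuth = source_angle - facing_angle

    # Equal-power pan: -1 = hard left, +1 = hard right.
    pan = max(-1.0, min(1.0, math.sin(azimuth)))
    left = attenuation * math.cos((pan + 1.0) * math.pi / 4.0)
    right = attenuation * math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right

# Source straight ahead, 2 units away: equal left/right gains.
l, r = listener_mix((0.0, 2.0), (0.0, 0.0), (0.0, 1.0))

# Same source, but a listener facing it from the left: panned hard right.
l2, r2 = listener_mix((2.0, 0.0), (0.0, 0.0), (0.0, 1.0))
```

You'd run this once per listener per source each frame -- which is exactly the loop that either a multi-listener asset or a bank of parallel instances ends up replicating.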

I believe Apple's ARKit uses some form of Visual Inertial Odometry--essentially, yep, those sensors in combination with the camera. It's amazingly accurate for audio purposes. I've launched it in one place with an object, walked three rooms away, walked back and seen the object had moved less than a foot. I've even walked halfway around the block and back, and the original object position only moved about a yard or so. It solved a lot of the inside-out tracking issues we confronted in looking at different sensors, and lets us distribute widely! I know AR can look a little jittery visually, but for audio purposes it's really phenomenal.

Would be remiss not to mention--I've been lucky enough to do some toying with the 4DSOUND system. Obviously, depending on the project, using one of those systems could be amazing! 4D has some great artist residency programs.