Game Audio Guru Brian Schmidt Mini Q+A 2018
Old 24th August 2018
  #1
Game Audio Guru Brian Schmidt Mini Q+A 2018

We're thrilled to welcome back @ bschmidt for another little Q+A in advance of GameSoundCon 2018 - we did one last year and it was really interesting - so let's bring our collective knowledge bang-up-to-date and fire some more questions to him.

Please reply to this post with any questions you have and Brian will be popping in from Monday to answer - he's available for a couple weeks, but it's always better to ask early!

As with last year, one participant in the Q+A will receive a free pass for GameSoundCon, worth $800 - so if you're near LA or willing to travel, this could be a very valuable prize.

==

[Photo: Brian Schmidt]

==

About Brian:

Brian Schmidt is the founder and creator of GameSoundCon. The 2008 recipient of the Game Audio Network Guild’s Lifetime Achievement Award, Brian has been creating game music, sounds and cutting-edge game sound technology since 1987, with a credit list of over 130 games and a client list including Zynga, Sony, Electronic Arts, Capcom, Sega, Microsoft, Data East, Namco, SounDelux and many others. Brian has used his combined expertise and experience in music composition, sound design and his deep technical knowledge to change the landscape of the game audio industry. Brian is a frequent and in-demand speaker on the creative, technical and business aspects of game audio, having given literally hundreds of educational and inspirational talks at conferences all over the world. Events such as the Game Developers Conference, Microsoft’s Gamefest, Sega Devcon and the Audio Engineering Society Conference, and esteemed institutions such as Yale University, Northwestern University and DigiPen, have invited Brian to share his knowledge and insight into the industry.

Brian began his career in game audio in 1987 as a composer, sound effects designer and music programmer for Williams Electronic Games in Chicago, writing music and creating sound effects for pinball machines and coin-operated video games. While there, he was the primary composer of the video game NARC. His main theme from NARC was later recorded and released by The Pixies; his other work has been featured in the CD set “Legends of Game Music.” In 1989, Brian left Williams and became one of the industry’s first independent game audio composers and sound designers, working on such games as John Madden Football, the Desert Strike series and the award-winning Crüe Ball. Other credits include the Guns N’ Roses pinball machine, where he worked closely with Slash to create a truly interactive rock and roll game experience.

In 1998, Brian was recruited by Microsoft to lead the direction of game audio technologies. While there, he joined the then-fledgling Xbox organization as the primary architect for its audio and music system. Brian has been credited with bringing interactive Dolby Digital surround sound to gaming through his efforts at Xbox, where he also created the original Xbox startup sound. During his 10-year tenure at Microsoft, Brian continued to drive and advance game audio technologies through tools such as the award-winning “XACT” (Xbox Audio Creation Tool), the first-of-its-kind tool to provide interactive mixing for video games. Brian was also responsible for the overall audio system of the Xbox 360 game system, including the XMA audio compression format, winner of the G.A.N.G. “Best New Technology” award and a finalist in IGDA’s “Best New Technology” category.
Brian is currently a consultant to the video game industry working with companies large and small.

Brian received undergraduate degrees in music and computer science from Northwestern University in 1985, where he created the first dual-degree program between the School of Music and the Technological Institute. He went on to complete his Master’s degree in Computer Applications in Music in 1987; portions of his thesis work were published in the prestigious Computer Music Journal and presented by invitation at the AES special conference on Audio Technology. While in school, Brian worked as an apprentice to film and jingle composer John Tatgenhorst, where he learned to appreciate the art and science of putting sound to picture. Brian currently sits on the advisory board of the Game Developers Conference, is a founding board member of the Game Audio Network Guild (G.A.N.G.), is a former steering committee member of the Interactive Audio Special Interest Group (IA-SIG) of the MMA, and has been a featured keynote speaker at the Game Developers Conference and Project BBQ. Brian was also a member of a select group of ten game audio professionals who successfully lobbied NARAS into making video game soundtracks eligible for the Grammy Awards in 1999. In 2012, Brian was elected President of the Game Audio Network Guild and currently serves in that role.

==
Old 24th August 2018
  #2
Hello Brian! How are you? Congrats on your career and accomplishments!

As an audio engineer mixing with FB360, SPAT, etc and also a Game/Unity enthusiast, I ask you:

How do you see the future developing regarding the different approaches for VR/360 video versus programmed/game-oriented material?

I ask this because today we have basically two approaches when starting a new project. If it's a traditional 360 video, we focus on spatialization of the audio, starting with capturing the material and going through the whole process of mixing and finalizing the video.

The other approach would be a game platform (my experience is with Unity), and this is a deeper area where spatial audio is nothing new, but the evolution of HMDs and the popularization of this kind of material is obviously what the masses are going to ask for. How do you think these two approaches could be merging in the near future?

Thanks for your time and experience!

Eric Goldberg (from Brazil)
@fyastudio (instagram)

Old 26th August 2018
  #3
Brian - thanks again for joining us.

I’d love to hear your take on this very recent article in the UK’s Guardian -

‘Bigger than MTV’: how video games are helping the music industry thrive | Games | The Guardian

Very positive obviously - is it accurate? What do you make of their observations on the state of play? Any advice for those GSers that are firmly entrenched in the music side of the business on how to get a piece of this action?

Thanks!!
Old 27th August 2018
  #4
bschmidt (Special Guest)
Quote:
Originally Posted by Rootss View Post
Hello Brian! How are you? Congrats on your career and accomplishments!
...
How do you think these two approaches could be merging in the near future?
Hi Eric,
Thank you for your question!
It really depends on what the end product is going to be.
In VR/AR we talk about "Degrees of Freedom" (aka "DoF"), which describe how much information the VR/AR system knows about the user. In 3-DoF, the system knows your head's pitch, yaw, and roll (i.e. how your head is tilted), but has no idea where in the room you are.

360 audio recording, for example using Ambisonics or similar technologies, can be good at providing content for what are called 3-DoF experiences.
3-DoF experiences are great for largely passive, but still immersive, VR experiences. Think of a VR experience of being on a roller coaster that someone has filmed with a 3D camera in the front seat. It's pretty much a passive experience--you sit down and go for the ride; it is always the same ride, and you can't do much interactively with the experience (i.e. you can't decide to jump into the back seat in the middle of a loop!)--even though you can actively look around while you're experiencing the ride. So it's only interactive in the sense that what you see and hear depends on where you're looking.

For these experiences, you would likely use something like an Ambisonic microphone to capture the sounds of the experience, and marry it to the 3D visual capture. Then when the user experiences it (say via FB360 or YouTube VR), as they rotate their head to look around, the soundfield around them would remain fixed; that is, it would NOT be locked to the head, but would rotate relative to the head in response to their head movement. That way the sound of the people in the car behind you comes from behind you if you are looking forward, but comes from in front of you if you turn to look at them.
It's these semi-passive 3-DoF experiences where you focus on capturing the live environment through a 3D mic such as the one mentioned below.

<whoops..turns out I can't post an image... do a google search for ambeo microphone>

A step up from 3-DoF experiences is 6-DoF. In a 6-DoF experience, in addition to knowing where your head is pointing, the VR system also knows where you are in a particular space, i.e. where you are in the room.

Think of a VR horror game where you can walk up to a door to see what's behind it, or crouch down to avoid a zombie's outreaching arm swinging at you.

That’s the experience that an Oculus Rift or HTC Vive will give you. For these, it is not enough to capture a soundfield with an Ambisonic type recording, and this is where the game engines come in.

The big problem using a game engine solves is that the user can move around, and as they move around, the entire mix needs to change in response. Suppose you are in a VR room with a ticking clock on the wall. If you are more than a few feet away from it, you probably won't hear it at all. But if you walk up and stand right next to it, it will be quite loud. To achieve that, you need some sort of "object-based" audio engine. And games have had object-based audio engines for decades. These game engines have already had to deal with how to let sound designers author content that changes dynamically in response to things like movement, distance and position relative to the user. So it was a fairly natural step for 6-DoF experiences to use game engines.

Here is a picture of the kinds of control an object-based audio system (you'll recognize this one as being from the "Unity" game engine) gives a sound designer. You'll notice the job of the sound designer is to specify how the values of various parameters (volume, low-pass filter, reverb send, etc.) vary as a function of distance from the sound source to the listener. The sound designer also needs to specify how much of a Doppler effect a sound should have as it is moving.
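To make that concrete, here is a minimal Unity C# sketch of that kind of distance-based control, assuming a clock object with an AudioSource attached; the component name and the specific distances/values are just illustrative, not from any particular project:

```csharp
// Minimal sketch (illustrative values): configuring the same distance-based
// parameters described above on a Unity AudioSource at runtime.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class TickingClock : MonoBehaviour
{
    void Start()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialBlend = 1.0f;           // fully 3D: position drives the mix
        src.rolloffMode = AudioRolloffMode.Custom;
        src.minDistance = 0.5f;            // full volume standing next to the clock
        src.maxDistance = 10f;             // effectively inaudible across the room
        // Volume as a function of distance, authored as a curve (loud near, silent far)
        src.SetCustomCurve(AudioSourceCurveType.CustomRolloff,
            AnimationCurve.EaseInOut(0f, 1f, 1f, 0f));
        src.dopplerLevel = 0.5f;           // how much pitch shift a moving source gets
        src.loop = true;
        src.Play();
    }
}
```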


<yes, I’m finally getting to your actual question here!>

One place where these two techniques can 'merge' (though I would hesitate to call it a 'merge'; it's more like using both at the same time) is in rendering 3D soundscapes in a 6-DoF environment that don't have detailed audio objects in them, but where you still want a feeling of rotation as the user moves their head around. So many people are using Ambisonics (or similar) to record room tones or other ambiences (i.e. rainfall, etc.) and render these as Ambisonic soundfields, but then also use the object-based systems for individual sound effects. That gets you a nice, semi-dynamic room/ambience but still the detailed changes in mix that need to occur in a VR experience as the user walks around the space.
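As a rough sketch of that "use both at the same time" setup in Unity C# (assuming the ambience clip was imported with its Ambisonic flag set and an ambisonic decoder plugin is selected in the project's audio settings; the clip and object names are illustrative):

```csharp
// Sketch: an Ambisonic ambience bed plus an object-based point source in one scene.
using UnityEngine;

public class RoomAudioSetup : MonoBehaviour
{
    public AudioClip rainAmbienceAmbisonic; // B-format recording of the space
    public AudioClip clockTick;             // ordinary mono SFX

    void Start()
    {
        // Ambience: rendered as a soundfield that rotates with the listener's head,
        // not tied to a single point in space.
        var ambience = gameObject.AddComponent<AudioSource>();
        ambience.clip = rainAmbienceAmbisonic;
        ambience.loop = true;
        ambience.spatialBlend = 0f;   // the ambisonic decoder handles the rotation
        ambience.Play();

        // Object-based SFX: a point source whose mix changes as you walk around.
        var clockObject = new GameObject("WallClock");
        clockObject.transform.position = new Vector3(2f, 1.5f, 0f);
        var clock = clockObject.AddComponent<AudioSource>();
        clock.clip = clockTick;
        clock.loop = true;
        clock.spatialBlend = 1f;
        clock.Play();
    }
}
```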

I should also point out that some "object based" 3D audio technologies, such as Google Resonance Audio, use Higher Order Ambisonics (HOA) as their object-based rendering technology, so in a sense that is a merging of the two as well.

So it really comes down to whether you are:
1) Using VR to give a user the ability to re-experience an event (such as the roller coaster example), or
2) Using VR to create a new experience that will be unique for each person depending on their actions.
And also how many degrees of freedom the user has.

There is some work going on trying to allow for 6-DoF experiences using Ambisonic-type recordings, but those aren't really out there yet. Even there, those would be used for the re-creation, rather than the creation, of experiences.

But the bottom line is that I think it's a matter of using the right tool (object based audio or pre-rendered/recorded audio) for the right job.
Hope that helps!
Brian
Old 27th August 2018
  #5
Quote:
Originally Posted by bschmidt View Post
Hi Eric,
Thank you for your question!
It really depends on what the end product is going to be.
...
But the bottom line is that I think it's a matter of using the right tool (object-based audio or pre-rendered/recorded audio) for the right job.
Hope that helps!
Brian
Brian, thanks a lot for your complete and lucid answer!! Good tip on using Ambisonics to provide the room or ambience while still spatializing the individual objects in Unity.

Here is a link to a musical experience I did in Unity with my band's song, called Inspiration. Hope you like it.

Again thanks for your time and knowledge sharing!!

INSPIRATION_FYA INC by Rootss
Old 28th August 2018
  #6
PatrickFaith (Lives for gear)
I'm kind of chicken on middleware; I've been avoiding it and staying in Pro Tools mainly. Recently more and more of what I do is just small "mono" files, and even sections which are used as grains, going into middleware. I'm just "ok" with Unity3D C# but have been totally chicken about integrating Unity with other layers like Wwise. Do you have suggestions for us old-timers trying to move from Pro Tools channels to Unity/Wwise objects? Did you have to make any mindset shifts to start using middleware? Is middleware as important as I think it is?
Old 28th August 2018
  #7
bschmidt (Special Guest)
Quote:
Originally Posted by Whitecat View Post
Brian - thanks again for joining us.

I’d love to hear your take on this very recent article in the UK’s Guardian -

‘Bigger than MTV’: how video games are helping the music industry thrive | Games | The Guardian

Very positive obviously - is it accurate? What do you make of their observations on the state of play? Any advice for those GSers that are firmly entrenched in the music side of the business on how to get a piece of this action?

Thanks!!
Hi Whitecat!
Thank you for a great question.
That article touches on a lot of things, from song licensing, to original compositions to game music concerts.

There is definitely a lot of work in games, though realistically perhaps it's not quite as rosy as the article implies.

[Sidenote: my personal pet peeve is articles that overstate the size of the 'game' industry, especially when comparing it to things like the movie industry. I wrote a whole article on that: Conference on Composing Video Game Music and Sound Design | Single Post.]

With regards to video games being the next MTV…
While it is true that both indie and established music artists have been having their songs placed in video games, that is really more the exception than the rule. Steve Schnur of EA, who is quoted in the piece, has definitely done a lot of great work using games (Madden, FIFA, NFS, etc.) as a platform for small artists to get their music known. I even very recently licensed in a bunch of indie bands for a game called Mutant Football League, which is coming out in about three weeks (#BlatantPlug).
That said, it is still a fairly niche set of game genres (mainly sports, racing or music games) that license in "songs" for their games to any significant degree, or quite so directly benefit an artist.
Most games in other genres will instead commission custom music composed explicitly for the game. One reason, of course, is that they would like the music to become part of the brand of the game itself (think of the opening monks chanting in the HALO music), and that's where much/most of the work for game composers lies these days.
The article is correct in that more composers are keeping their music rights, BUT that's really only for smaller, indie games. For most professionally produced games, and almost all the big ones, the music is "Work for Hire."

As far as advice on ‘getting a piece of the action’…
The game industry, for better or worse, is largely a network-driven industry; people hire people they know or have worked with before. So start attending events and 'networking'.
Note: By "networking" I don't mean going to a mixer and handing out 83 business cards in 2 hours. People who I've seen make their way into the game industry as composers or sound designers treat it as a long-play game. The most valuable conversation you have for your career at a networking event will be that 25-minute conversation you had with someone over your shared love of fly-fishing; far more valuable than a 'speed-dating' kind of business card exchange.

Speaking of networking, one thing that often surprises traditional media composers about working in games is the importance of networking among fellow composers and game audio people. I have been told that for film composing, "it is pointless to network with other film composers; network with film directors instead." However, in games, networking with other composers can be extremely helpful to your career. The main reason is that in games, there is no "Director" like there is for a film. For many games, the composer is selected by someone in a role called Audio Director, who is responsible for everything coming out of the speakers: music, SFX, dialog, etc. And the Audio Director is almost always a composer (current or former) themselves. So the competitor/freelance composer you get to know today may, two years from now, be the Audio Director of a large game company, and be the one making the decision on who to hire to score the game.

It definitely helps to be fluent in what's current in games. If you find yourself at a conference like IndieCade or GDC (the big Game Developers Conference held each year in San Francisco) and you're asked "so what games do you like to play?", an answer of "Centipede" is not going to make quite as much of an impression as something more contemporary. I'm not saying you need to become a 20-hour/week gamer (please don't!), but knowing some of the current games can certainly help.

Knowing some of the tech and issues that games have that more traditional media (film, TV, music) don't can be very helpful as well. There are specific game audio tools that let you create music designed to be played back more interactively; that's pretty much bread and butter for a game composer these days. So either start playing with some of the game audio tools (the biggest are Wwise, by Audiokinetic, and FMOD Studio, by Firelight Technologies), or, if you're not into that, try to team up with someone who is.
The "any advice" question is super broad, so I'd summarize as:
• Network. Both with people who make games, and also other composers and sound designers.
• Get somewhat familiar with the industry. Know the trends, what’s hot, etc. Be able to answer the “what do you like playing now” with something current (and don’t BS…the follow up might be a question about the game itself!)
• Look at some of the issues specific to games and some of the tools used to address them.
• Understand the business side of games. It's a bit different from scoring for film or a TV show.

Hope that helps. I created GameSoundCon (2 days, 5 concurrent rooms) to cover some of this, so it’s a bit hard to digest it down to a single forum post!
Old 31st August 2018
  #8
bschmidt (Special Guest)
Quote:
Originally Posted by PatrickFaith View Post
I'm kind of chicken on middleware; I've been avoiding it and staying in Pro Tools mainly. ... Is middleware as important as I think it is?
Hi Patrick!

"Is middleware as important as I think it is" is a really, REALLY good question.

Which means the answer is "it depends..."

The short answer is that, yes, if you want to do game sound design (and to a certain extent music) it's pretty much expected these days that you know Wwise and/or FMOD Studio. I'm in the middle of writing a report on an analysis of game audio job postings over the previous 9 months; virtually all have "Wwise" or "FMOD" or "audio middleware" in their list of required skillsets.

Wwise and FMOD are both pretty easy to integrate into Unity, once you're familiar with the middleware itself.
So I would say, take a look at the Wwise 101 course (mentioned below); they make it quite easy to integrate Wwise into a Unity project, and if you can code Unity C# already, you're way ahead of 95% of everyone else!


For those reading who aren't sure what "audio middleware" is:

Middleware is a set of software that game developers use to put sound into their games. Wwise, by Audiokinetic, and FMOD Studio are the two biggest modern-day middleware packages, but there are others as well: Fabric, CRI ADX2 and Elias, to name a few.

One part of the middleware is a GUI program designed for sound designers and composers. It allows a sound designer or composer to create complex, interactive sound effects or music out of raw sound materials--most typically wave files.
For example, suppose that you as a composer wanted the music to change when the player's health reaches "1" (i.e. almost dead), but you wanted it to change on a measure boundary and do a nice 0.4-second equal-power crossfade to go from the "normal" music to the "almost dead" music. You would use the GUI tool to specify which wave file is the "normal" music and which is the "almost dead" music, and where the measure boundaries are. Then you can set up a rule that says "when the programmer issues a command to go to the 'almost dead' music, wait for the measure boundary, and then start a 0.4-second crossfade."
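For a feel of what the programmer's side of that rule might look like from Unity, here is a hedged sketch using Wwise's Unity integration; the event, state group and state names are made up for the example, and the measure-boundary wait and the crossfade themselves are authored in the Wwise project, not in this code:

```csharp
// Sketch (requires the Wwise Unity integration; names are hypothetical).
using UnityEngine;

public class PlayerHealthMusic : MonoBehaviour
{
    void Start()
    {
        // Start the interactive music; Wwise plays whatever the current state calls for.
        AkSoundEngine.PostEvent("Play_Music", gameObject);
    }

    public void OnHealthChanged(int health)
    {
        if (health <= 1)
        {
            // Tell Wwise to move to the "AlmostDead" music state. Waiting for the
            // measure boundary and doing the 0.4 s equal-power crossfade happens
            // inside Wwise, per the transition rules the sound designer authored.
            AkSoundEngine.SetState("PlayerHealth", "AlmostDead");
        }
    }
}
```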

Another simple example is a variable explosion sound. In games, very often, the same thing can happen again and again. For example, something explodes.
Well, we don't want to play the same 'explosion221' wave file every time; we want some variety. So using middleware, we can make an explosion "sound" that takes three separate .wavs as input: say a low "boom", a big "crack" and a debris sound. We can tell the middleware that the "Explosion Sound" is really those three waves, playing simultaneously, but with a random pitch and volume setting for each. We could go further and say that I have 5 "booms", 5 "cracks" and 5 "debris" wave files, and that it should pick one of each, randomly, and then also randomize the pitch and volume. All of a sudden, the "Explosion Sound" sounds different every time I play it.
That is bread and butter game sound design.
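Just to illustrate what the middleware is doing for you there, here is a rough Unity C# sketch of building that same randomized, layered explosion by hand (the clip arrays and randomization ranges are illustrative):

```csharp
// Sketch: the layered, randomized explosion done in plain Unity C#. This is
// roughly what the middleware GUI lets a sound designer author without code.
using UnityEngine;

public class ExplosionSound : MonoBehaviour
{
    public AudioClip[] booms;    // e.g. 5 low "boom" variations
    public AudioClip[] cracks;   // e.g. 5 "crack" variations
    public AudioClip[] debris;   // e.g. 5 debris variations

    public void Play()
    {
        PlayRandomLayer(booms);
        PlayRandomLayer(cracks);
        PlayRandomLayer(debris);
    }

    void PlayRandomLayer(AudioClip[] variations)
    {
        // Pick one variation at random, then randomize pitch and volume a bit
        // so no two explosions sound identical.
        var src = gameObject.AddComponent<AudioSource>();
        src.clip = variations[Random.Range(0, variations.Length)];
        src.pitch = Random.Range(0.9f, 1.1f);
        src.volume = Random.Range(0.8f, 1.0f);
        src.Play();
        Destroy(src, src.clip.length / src.pitch + 0.1f); // clean up when done
    }
}
```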

The other part of the middleware gets included along with the game program itself, and actually performs that music crossfade and the random selection and simultaneous playback of the three parts of the explosion within the game as it's running.

<middleware does a ton more than that, but I hope you get the idea..>


I found part of your question actually somewhat amusing. You said "I'm just 'ok' with C# (a programming language), but have been totally chicken integrating Unity with other layers like Wwise."

The reason I found that semi-amusing is that the main reason middleware exists is to make it easier to create and implement complex audio behaviors that--without middleware--would require writing pretty complex computer code in C++ or C#. So it seems like you actually already have the ability to do the 'harder' thing (basic coding in C#) rather than the 'easier' thing (using the audio middleware GUI to create complex audio behaviors like measure-boundary crossfading). That is, there's really nothing you can do in Wwise that you couldn't do by writing a whole bunch of code. But generally programmers don't want to be bothered writing a lot of audio code, which is why programs like Wwise are so popular.

It is true that the middleware GUIs can be a bit overwhelming at first blush. Fortunately, there are some very good tutorials online. For FMOD, you can search for "FMOD Studio tutorial" and there are a number of good videos. For Wwise, they have full "Wwise 101" and "Wwise 201" certification courses. I highly recommend running through at least the 101. That will give you a very good understanding of how Wwise works.

Now it is true that not all games use middleware like Wwise or FMOD. Some games use their own custom audio tools. Some games use the built-in audio systems in Unity (or Unreal, another game engine).

Almost all of the modern, large-budget video games use something like Wwise or FMOD. A large modern video game (like Destiny or Overwatch) has an incredibly complex sound landscape. And the very complex behaviors they are trying to achieve really require something powerful like Wwise, FMOD, Fabric, etc. to pull off.

But many very simple games really need to do little more than play a .wav file when a game event happens. E.g. a simple puzzle game probably has some background music, little button effects, and then maybe some reward SFX when puzzles are completed. For a game like that, perhaps all you need is the basic Unity playback system.


That said, for a really good example of what using middleware can bring to even a simple puzzle game, check out this talk from the Game Developers Conference by Guy Whitmore, composer/audio director for EA/PopCap. The extremely tight synchronization between gameplay, music and sound effects is really only possible by using middleware. Listen to how the sound effects of balls bouncing around are in sync and in key with whatever the underlying background music is doing at the time.

GDC Vault - Peggle 2: Live Orchestra Meets Highly Adaptive Score
Old 2nd September 2018
  #9
Hi Brian,

Thanks for taking the time to answer some questions here.

I produce a dark industrial type of music, and for futuristic effects (of which I use a lot) I often tend to sample old sci-fi films. I have always been intrigued as to how these types of sounds, i.e. futuristic computer sounds, mechanical sounds, etc., are produced. I have a massive, versatile synthesizer sitting on my desk that I use for pads and basses, but I have never ventured into sound effects. Could you give me an idea of where to start? I'm unsure as to how these techniques would be learnt, as the sounds are often so complex. As I write this I feel it might be too broad a question, and apologies if that is the case.

Further to the above, I also understand that many sound effects in games and film are sample-based originally, but I am referring to the clearly synthesized sounds.

Thanks,
Charlie
Old 2nd September 2018
  #10
bschmidt (Special Guest)
Quote:
Originally Posted by Retouch View Post
I have always been intrigued as to how these types of sounds, i.e. futuristic computer sounds, mechanical sounds, etc., are produced. ... Could you give me an idea of where to start?
Hi Charlie,

While it's true that a lot of SFX in games (especially cinematic, story-driven games) are sample-based (i.e. they are serving as diegetic sounds), a lot of game sounds are also completely synthetic. Even for cinematic/story-driven games, there are still non-diegetic elements (menu buttons, item pickup SFX, etc.).

And in non-story driven games, very often almost all the SFX are synthesized.

As far as what to use for those--that is all over the map. Personally, I do a LOT of my non-diegetic SFX with a hardware synth, my Motif XS 8. These range from simple beeps to very complex synthetic ambiences with multiple layers.

One thing to note about these types of SFX: it's quite rare that the SFX is just the sound of a 'synth patch'. Rather, these SFX are almost always themselves fairly complicated, multi-layer MIDI sequences that often make extensive use of CC information to modulate filters or other DSP effects, in addition to the sequences of notes themselves.
E.g. a 'pickup' sound might be a quick sequence of 32nd notes on a pitched patch with a very quick attack and medium decay, going G4, C4, G5, C5.
Or a computer SFX might be extremely fast random pitches, with a resonant filter slowly sweeping up and down on one track, while a low pad plays on another track.
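As a rough illustration of that 'pickup' idea in Unity C# (rather than the MIDI/hardware-synth workflow I actually use), here is a sketch that retriggers one short pitched blip in a fast G4, C4, G5, C5 sequence; the clip, pitch ratios and timing are illustrative:

```csharp
// Sketch: a 'pickup' SFX built from one short blip retriggered at fast pitches.
using System.Collections;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class PickupSfx : MonoBehaviour
{
    public AudioClip blip;  // short, pitched sample (assumed to be recorded at C4)

    // Pitch ratios relative to C4 for the sequence G4, C4, G5, C5.
    static readonly float[] sequence = { 1.5f, 1.0f, 3.0f, 2.0f };

    public void Play() { StartCoroutine(PlaySequence()); }

    IEnumerator PlaySequence()
    {
        var src = GetComponent<AudioSource>();
        foreach (float ratio in sequence)
        {
            // Note: changing pitch also bends any blip still ringing on this
            // source, which often sounds fine for this kind of effect.
            src.pitch = ratio;
            src.PlayOneShot(blip);
            yield return new WaitForSeconds(0.06f); // roughly 32nd notes at ~125 BPM
        }
    }
}
```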

Extreme pitch shifting is another technique. Take just about anything and pitch it down 4 octaves and you get something pretty darn cool.

So although you call the sounds "so complex", you might want to analyze a few of them in some detail. You may find that they are not as complex as they initially sound, if you think of them as being comprised of a number of fairly simple layers. It's only when all the layers are playing together that they sound very complex. In fact, the layers themselves NEED to be fairly simple, or else you end up with a bunch of sonic mush rather than a nicely sculpted, abstract SFX.

I describe an example of complex layering of fairly simple sounds in an article I wrote on how I created the boot sound for the original Xbox.

Also, Alan Howarth has done a bunch of good interviews where he describes his approach to sound design (e.g. for the Star Trek movies, which he did sound design for).


It is a great question, and there are lots of other things people use. Pure Data is popular in games. And game engines themselves are starting to have sophisticated, procedurally driven sound generation engines. At GameSoundCon this year, for example, one session is called "Designing Procedural UI Sounds Using Unreal Engine 4".

So it sometimes feels like it's all about starting with fairly ordinary stuff and then morphing the...heck..out of it.

Hope that's helpful!
Thanks for the question!
Old 3rd September 2018
  #11
bschmidt (Special Guest)
Programming? Wwise

I happened to come across this question from a couple of weeks ago on a separate thread; I'm re-posting it and my reply here. (Hopefully that's not a violation of a GearSlutz policy.. )

Quote:
Originally Posted by Egedai View Post
If you were aiming to work in the game audio business, What would you start doing today?

I’ve been hearing a lot about Wwise and Unity for audio implementation so I guess I’ll start going into them; but every gaming company also asks you to be able to code and I don’t know anything about it really... and most of them want C++ which as far as I’ve heard is one of the hardest languages to learn.

It seems big gaming companies such as Blizzard, Bethesda, Ubisoft etc... all ask you to have years of experience in the field and at least two AAA titles behind you in order to even consider you. Where should one start this journey? I'm a long-time audio guy, but I'm too far behind on coding etc. What would you recommend someone do, step by step, over the years of learning?
This can depend a lot on what it is that you do; there can be very different paths for composers vs sound designers.

For composers, Donedeal had a great answer. To his list of three, I'd add: start attending game conferences, and try to meet some up-and-coming game developers. Conferences like GDC (the Game Developers Conference, which is HUGE) are good, as are smaller ones like IndieCade or Casual Connect.
You can also make some very good game audio connections with composers and sound designers who work in games at GameSoundCon (Conference on Composing Video Game Music and Sound Design)

[disclaimer: I run GameSoundCon, and GearSlutz is currently giving away a ticket to it on this thread that I'm doing an AMA on right now]:

Game Audio Guru Brian Schmidt Mini Q+A 2018

I'm going to duplicate this answer in that thread.

Regarding coding...

If you are a sound designer, there's good news and 'bad' news. The good news is that in the past 6-9 months, I've seen a good number of salaried game sound designer jobs at companies large and small. These are mostly not freelance gigs, but full-on "salary, benefits, 401(k), etc." jobs. More good news is that not all of these jobs say "Must have shipped 2 AAA titles."

The 'bad' news is that increasingly, they are listing "coding" in their list of either REQUIRED or PREFERRED skills on the job listings. However, for the most part, the coding they're looking for isn't usually C++ (which is a pretty hardcore programming language), but usually an easier to learn cousin called C# (C-Sharp).


I personally know of a sound designer who was hired right out of school (DigiPen's Bachelor of Arts in Game Music and Sound Design) [Disclosure: I teach part-time at DigiPen]. The hiring manager for that job is a good friend of mine, and he said that one of the reasons this person got the gig was because he had taken some additional programming classes on top of what was required for his degree (DigiPen requires two semesters of programming for their Music & Sound degree). In his mind, that not only showed skills, but also interest and a passionate curiosity about the topic.

The candidates who are competitive for these jobs can do cool sound design, know recording, etc. But they can then also put those sounds into a tool like Wwise (a game-specific audio tool). And the really good ones can go further and build a simple Unity 'demo' using C# from scratch to show off their sound design skills. Such a sound designer can tell a prospective employer "yes, I did all the interactive sound design in this app, which you can download from Google Play or the App Store. Oh, and I also wrote all the Unity code that drives the demo app."


For the jobs that specifically require C++, those are pretty hardcore programming jobs, not sound design jobs; getting yourself to the level of coding needed to take one of those jobs will require more than just taking a couple of classes.

If you are a composer, then the odds that you will get asked to code anything are probably fairly small. However, it is still a great skill to have, even if it is just a passing familiarity. Code is the DNA of a video game, and having an understanding of it can only make you more valuable to a potential client.
At an early GameSoundCon, Marty O'Donnell (Composer for the HALO series and Destiny) told the crowd "Oh, and everyone should take a programming class." His point wasn't that all game composers need to be programmers, but that taking a class will help you much better understand what it takes to put sound/music into the game.


Now I will put a disclaimer into all this. At the highest levels of game development, more specialization occurs. That's why, as Donedeal points out, Jesper Kyd didn't need to worry about coding when he wrote the Hitman score. At that level, for music, they very often use 2 people--a composer and a 'technical sound designer' whose job is to deal with the techie stuff. At that point, the composer really just composes.
Old 4th September 2018
  #12
Quote:
Originally Posted by bschmidt View Post
Hi Charlie,

While it's true that a lot of SFX in games (especially cinematic, story-driven games) are sample-based (i.e. they are serving as diegetic sounds), a lot of game sounds are also completely synthetic. Even for cinematic/story-driven games, there are still non-diegetic elements (menu buttons, item pickup SFX, etc.).
Hi Brian,

Thank you so much for the lengthy reply.

Some really useful ideas there. I often mess around with the pitch of samples (usually pitching down considerably) and have always found it to have pleasing results. I've never thought about layering to create sound effects, though, and will certainly start experimenting with that.

Thanks again,
Charlie.
Old 5th September 2018
  #13
Quote:
Originally Posted by bschmidt View Post
Hi Whitecat!
Thank you for a great question.
That article touches on a lot of things, from song licensing, to original compositions to game music concerts.

There is definitely a lot of work in games, though realistically perhaps it's not quite as rosy as the article implies.
...

Thanks so much for the detailed answer.

As a little follow-up question, and going back to the fact that this is a gear forum first and foremost: what do you think needs to be in every game audio toolbox? Is there any software (or even hardware) that it would be very difficult to live without if you were making a stab at doing this full-time?

Basically, what are the must-have tools for composers and designers (I realise these boxes could look very different!)
Old 6th September 2018
  #14
bschmidt (Special Guest)
Quote:
Originally Posted by Whitecat View Post
Thanks so much for the detailed answer.

As a little follow-up question, and going back to the fact that this is a gear forum first and foremost: what do you think needs to be in every game audio toolbox? Is there any software (or even hardware) that it would be very difficult to live without if you were making a stab at doing this full-time?

Basically, what are the must-have tools for composers and designers (I realise these boxes could look very different!)
How do I say this without starting up "music production's oldest flame war...".

I'll just say it.

A Windows PC.

The majority of game developers build on PCs. And their prototype games, which you need to be able to play, typically run on PC. And the game audio tools, while technically cross-platform, generally work best on PC.

I'm NOT saying "switch from Mac to PC." But you should have a machine, with a decent graphics card, that can run Windows applications.

So at the least, get VirtualBox or Parallels, so that you can run Windows applications.

To be a "one stop" game audio freelancer, I'd also add:
* A professional SFX library
* A good quality portable recorder, like the Zoom H6
* A good quality voice mic, such as the Shure SM7B (or better)
* Enough baffling to be able to do some basic VO
* A DAW of your choice. Reaper is increasingly common among game audio creators, but the best DAW is typically "the one that you know the best"
* A basic set of DSP plugins, including noise reduction such as RX

And install either Wwise (Audiokinetic | Industry Leading Interactive Audio Solutions) and/or FMOD Studio (www.fmod.com) on your machine. These are the two main game audio creation tools that many (but not all) game companies use.
The nice thing is that they are completely free for the sound designer to download and use! They get paid by the game developer when the game ships.