Well, let me clarify that I do sound design, not music. I've never written a note that's been in a game and I likely never will.
As far as projects, I'm still fairly new to this, so I don't have as long a credit list as others who would be more qualified to answer some of your questions. I'm 26 and in my second year as a game sound person. But, that said, I've worked on the following:
Tony Hawk's Project 8 (360/PS3/etcetcetc)
Bionicle Heroes (Nintendo DS)
Neverwinter Nights: Wyvern Crown of Cormyr (PC)
I've done a bit of consulting work that I can't discuss (titles unannounced), and I'm currently working on two projects for the 360 and 360/PS3, but I can't really talk at length about them. Such is the nature of NDAs.
What I can tell you is this: if you come from a traditional post-production background and you're used to pushing faders, get ready for some culture shock. It's quite a bit different.
My general workflow is to use Cubase SX and Sound Forge to create/composite my sound assets for the project. These are often built from original source material, as I prefer to record my own stuff rather than use libraries (although that happens sometimes). Once I've created the assets, I then implement them into the game. This can be done via a pretty GUI, or it can be done by working with XML scripts. It varies by project.
The actual mixing of the game is handled by the in-game audio engine. Generally you're attaching sound definitions (more on this in a sec) to triggers, environmental objects, keyframes or whatever in the game world. It varies depending upon the sound.
The "sound definitions" I mentioned are little bits of data, for lack of a better term, that pull sounds from a wavebank and perform various processes to them when they are played back. For example, when triggering a gunshot, the sound definition may actually have three sounds to choose from, it may pitch it slightly up or down, adjust its gain, etc as it plays it back.
This is all really generalized, though, and some of the terms I've used here aren't universal. In the end, the entire workflow depends on the engine the game is running on and how flexible it is. Game audio is a bit of a moving target, but I like that; it makes it more interesting for me. But, according to those more experienced than I am, the implementation side is becoming an increasingly important part of the sound design process. Where in the past you might cook all the effects processing into the sound itself, these days the audio engine applies reverb, occlusion, filtering, delays and other effects in real time, and it's the sound designer's job to get into the engine and make things behave the way they want.
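To illustrate what that looks like on the runtime side, here's a sketch of a per-voice update, again with made-up names and numbers rather than any specific engine's API: instead of baking the room sound into the wave, the engine recalculates filter and reverb settings every frame from the game state, and the curves it uses are exactly what the sound designer goes in and tunes.

    #include <algorithm>

    // Hypothetical per-voice game state, sampled each frame.
    struct VoiceState {
        float distanceToListener;  // metres
        float occlusion;           // 0 = clear line of sight, 1 = fully blocked
        float roomReverbSend;      // 0..1 wet amount the current environment asks for
    };

    // DSP settings the mixer applies to this voice on this frame.
    struct VoiceDsp {
        float lowpassCutoffHz;
        float attenuationDb;
        float reverbSendDb;
    };

    // Made-up mapping: occluded sounds get darker, distant sounds get quieter
    // and wetter. Tuning curves like these is the implementation work described above.
    VoiceDsp updateVoiceDsp(const VoiceState& v) {
        float occ = std::max(0.0f, std::min(1.0f, v.occlusion));
        float wet = std::max(0.0f, std::min(1.0f, v.roomReverbSend));
        VoiceDsp dsp;
        dsp.lowpassCutoffHz = 20000.0f - 18000.0f * occ;
        dsp.attenuationDb   = -0.5f * std::min(v.distanceToListener, 60.0f);  // roughly -0.5 dB/m
        dsp.reverbSendDb    = -24.0f + 18.0f * wet;
        return dsp;
    }
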
Any other game audio people care to take a crack at this?