I'd like to add that in Europe the proceedings are slightly different.
First of all, DMU can lock to 25 fps TC, while running at 24 fps film speed, which is a very common situation in Europe (all VTRs worked with 25 fps TC, while the recorded program was running at 24 fps true speed, 1 frame pulldown every second or one field pulldown every 12 frames).
You can choose to display feet/frames or TC. You can sync to bi-phase or TC.
We record tones in a tonefile, which is a separate "reel" on a MO disc. Tones are 1 kHz @ -20 dBFS for 5 channels and 80 Hz @ -20 dBFS for LFE. For analog, we usually shoot 1 kHz @ -20 followed by pink noise from DMU.
Dolby SR noise reduction has 3 positions: OUT, CAL (pink noise) and IN. OUT position is interesting for recording a stereo mix e.g. for TV.
2ch mix can be routed to MO or External out (outs 7&8) and recorded to an external unit or (if you have a DMU inserted in ProTools, for "Mixing in the Box" situation) record it back into ProTools.
You can monitor the line input in discrete mode, or listen through the AC3 encoder/decoder chain (with a 2-frame delay).
You can configure the meters in LtRt or Discrete modes, PK or VU, line in or Monitor output. The metering has a different scale than the good old DS-IV remote box meters. -20 is Dolby Level, which was -6 on the DS-IV meters, and is referenced as 50% modulation. Therefore, 0 dB on the DS-IV corresponds to -14 on the DMU meters. Confusing.
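Since the two scales differ by a constant offset, converting between them is one subtraction. A tiny sanity-check helper (the function name is mine):

```python
def dsiv_to_dmu(dsiv_db):
    """Convert a DS-IV meter reading to the equivalent DMU meter reading.

    Dolby Level reads -6 on the DS-IV and -20 on the DMU, so the two
    scales sit a fixed 14 dB apart.
    """
    return dsiv_db - 14
```

So a 0 dB reading on the DS-IV lands at -14 on the DMU, as noted above.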
You can record both Digital and Analog masters at the same time. You need to listen to Analog after that, in a second pass.
The container on Analog tracks can be trimmed during the recording. Values are referenced to 100% modulation (-14dBFS).
Anyway, you can always hop over to Concept Films in Belgrade and take a closer look.
p.s. great thread, Geo!
here's a note from another thread that got started. Thought it might be good to post it here as well.
Within fair use there are provisions for educational and documentary use under which, in specific circumstances, you can use 10% of a work or 30 seconds. EXAMPLE:
"Use 10% of a song, not to exceed 30 seconds, and do not show the finished video out of the classroom. Do not duplicate, distribute, broadcast, webcast or sell it. Proper attribution must be given when using copyrighted materials. i.e. "I Am Your Child" written by Barry Manilow/Martin Panzer. BMG Music/SwanneeBravo Music. The opening screen of the project must include a notice that "certain materials are included under the fair use exemption and have been used according to the multimedia fair use guidelines". Your fair use of material ends when the project creator (student or teacher) loses control of the project's use: e.g. when it is distributed, copied or broadcast"
But this is a very specific use. NOT for a commercial project.
If you are manufacturing and distributing copies of a song which you did not write, and you have not already reached an agreement with the song's publisher, you need to obtain a mechanical license. This is required under U.S. Copyright Law, regardless of whether or not you are selling the copies that you made. You do not need a mechanical license if you are recording and distributing a song you wrote yourself, or if the song is in the public domain.
Also for Film/broadcast/new media ( anything locked to picture) you need a Sync license from one of the following http://www.ascap.com
BMI.com | Welcome
U.S. Copyright Office
A music synchronization license - or sync license, for short - is a music license that allows the license holder to "sync" music to some kind of media output. Often sync licenses are used for TV shows and movies, but any kind of visual paired with sound requires a sync license. A sync license gives you the right to use a song and sync it with a visual: when you hold a sync license, you are allowed to re-record that song for use in your project. If you want to use a specific version of the song by a specific artist, you also need to get a master recording license. Typically, a sync license is obtained from a music publisher while the master recording license is obtained from the record label or owner of the master. A sync license covers a specific period of time, and the license will stipulate how the song can be used. There is one flat fee involved in obtaining a sync license, and once the license is in place, the song can be used as stipulated as many times within the license period as the license holder likes. In other words, if you obtain a sync license and use the song in a film, you do not have to pay a fee on the sync every time the film is viewed.
Also, Master use rights are required for previously recorded material that you do not own or control.
A sample is typically the use of an excerpt of a sound recording embodying a copyrighted composition inserted in another sound recording. This process is often referred to as digital sampling and requires licenses for the use of the portion of the composition and the sound recording that was re-used in the new sound recording. In some instances, artists re-record the portion of the composition used in the new recording and, therefore, only need to obtain a license for the use of the sampled composition.
There are occasions where FAIR USE comes into play for Documentary and educational films... here's a PDF of some fair use issues. Lots of good fair use details here: Fair Use & Copyright: -- Center for Social Media at American University
your project, as described, does not fall under fair use doctrine.
btw even Weird Al gets permissions for his parody material.
also on the comment ( here i go, into the storm)
And what happens when you want to write music that is itself a form of criticism? What if you want to make a literal quotation but the copyright holder does not want to be a party to such criticism? Should such criticism be possible only with textual products and not musical ones? Or should music, too, be a viable basis of cultural criticism?
Here the courts said that there is fair use when quoting music, despite the protestations of Yoko Ono over the use of Imagine.
If we must, by default, seek maximal permission for our musical creations and enterprises, then we should expect "dangerous" music to go underground, and only "safe" music to be mainstream. I cannot think of a more insidious way to destroy the cultural value of music.
the court ruled on this based on the movie being a social commentary about intelligent design. ( therefore a documentary/news/educational piece if you will... ) The ruling has nothing to do with the tune or music unto itself what-so-ever... safe , dangerous or otherwise....
It's all about how something was used and with what. not the something itself. the whole fight could have been over a picture, or a poem, or a video, or a document, any copyrightable widget.... it's got nothing to do directly with the song.
Video and Audio frame rate mismatch in FCP - potential fixes.
First rule of FCP... don't mix frame rates, and don't mix Video speed (23.97 fps) with Film speed (24 fps) material in one timeline.
FCP "frame rate" is the video frame rate setup in the sequence setup, in your case 23.97 ( Video speed) . Audio "frame rate" as you suggest does not have anything to do with the timeline frame rates.
When you import video and audio using a camera and FCP's log and capture, you are capturing at the video frame rate. If you are using a dual system, you capture at the video frame rate matching the video rate shot. ( again, in your case 23.97 Video speed) Make sure this is all set up in the session setup/ capture setup and sequence setup.
When you import audio standalone due to the use of a dual system, and drop the audio into the sequence, there is a rate conversion done by FCP to match the current timeline, even when the audio does not match the video frame rate. Therefore the audio may be slowed down or sped up as the case may be. One of your issues is that FCP sets an NTSC VIDEO flag on the audio tracks even though they were shot at 24 (film speed).
You are probably seeing the video and audio in sync at the head of each clip, and as the timeline is played the audio drifts. You can (although I don't recommend it) change the audio speed to match the video. I believe it's 99.9% or 100.1% depending on your drift problem. This will cause the audio to be resampled. (For us audio purists, this is scary, as the audio has now been re-sampled twice.)
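To put a number on that drift: a 0.1% speed mismatch accumulates about a frame and a half of slip per minute of playback. A rough sketch of the arithmetic (the function name is mine):

```python
def drift_frames_per_minute(mismatch=0.001, fps=23.976):
    """Frames of A/V slip accumulated per minute of playback when audio
    and picture run at a `mismatch` fractional speed difference."""
    seconds_of_slip = 60.0 * mismatch  # 0.1% of a minute is 60 ms
    return seconds_of_slip * fps

# At 23.976 fps and a 0.1% mismatch this comes to roughly 1.44 frames
# of drift per minute, which is why the slip becomes visible so quickly.
```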
Here's another way... ( can't absolutely promise success )
1. Select the problem sequence in the FCP browser, right-click it and choose Export > XML.
2. Open that XML file in a text editor.
3. Look for the tags <ntsc>.
If they all say <ntsc>TRUE</ntsc>, or they all say <ntsc>FALSE</ntsc>, then you don't have a problem.
But if there is a mix of TRUE and FALSE values, you need to change them so they are all the same. Match the value to your video clip settings.
When you add a video clip to a sequence, FCP knows what frame rate to use. It just looks at the video clip properties. Audio doesn't have a frame rate like video, but FCP needs to assign it one because many editing operations are tied to the video frame rate. So when you add an audio clip to a sequence, FCP has to pick a frame rate for it. That choice is apparently made based on some combination of sequence and capture presets, and may depend more on what settings are in the cache than what are currently selected. And while FCP seems to be able to adapt to whatever frames-per-second are appropriate, it needs some help to get the NTSC values right.
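If the sequence has many clips, editing the XML by hand gets tedious; a small script can force every <ntsc> tag to one value. A sketch (the file path and the choice of TRUE here are placeholders; match the value to your own video clip settings, and back up the XML first):

```python
import xml.etree.ElementTree as ET

def normalize_ntsc(xml_path, value="TRUE"):
    """Set every <ntsc> tag in an exported FCP XML file to the same value."""
    tree = ET.parse(xml_path)
    for node in tree.iter("ntsc"):  # walk all <ntsc> elements in the file
        node.text = value
    tree.write(xml_path)

# normalize_ntsc("my_sequence.xml", value="TRUE")  # hypothetical file name
```

Re-import the fixed XML into FCP and spot-check sync before committing to it.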
Here's a document that talks in detail about loudness and techniques for managing loudness.
here's an overview of the Document:
Despite the conclusion of the DTV transition, many broadcasters and the production community have been slow to effectively adapt to the changes required to transition from analog NTSC audio techniques to contemporary digital audio practices. With digital television’s expanded aural dynamic range (over 100 dB) comes the opportunity for excessive variation in content when DTV loudness is not managed properly. Consumers do not expect large changes in audio loudness from program to interstitials and from channel to channel. Inappropriate use of the available wide dynamic range has led to
complaints from consumers and the need to keep their remote controls at hand to adjust the volume for their own listening comfort. The NTSC analog television system uses conventional audio dynamic range processing at
various stages of the signal path to manage audio loudness for broadcasts. This practice compensates for limitations in the dynamic range of analog equipment and controls the various loudness levels of audio received from suppliers. It also helps smooth the loudness of program-to-interstitial transitions. Though simple and effective, this practice permanently reduces dynamic range and changes the audio before it reaches the audience. It modifies the characteristics of the original sound, altering it from what the program provider intended, to fit within the limitations of the analog system.
The AC-3 audio system defined in the ATSC Digital Television Standard uses metadata or “data about the data” to control loudness and other audio parameters more effectively without permanently altering the dynamic range of the content. The content provider or DTV operator encodes metadata along with the audio content. From the audience’s perspective, the Dialog
Normalization (dialnorm) metadata parameter sets different content to a uniform loudness transparently. It achieves results similar to a viewer using a remote control to set a comfortable volume between disparate TV programs, commercials, and channel changing transitions. The
dialnorm and other metadata parameters are integral to the AC-3 audio bit stream. ATSC document A/53 Part 5:2007 , which the FCC has incorporated into its Rules by reference, mandates the carriage of dialnorm and correctly set dialnorm values. The industry has recognized that a new proficiency in loudness measurement, production monitoring, metadata usage, and contemporary dynamic range practices is critical for meeting the
expectations of the content supplier, the broadcaster, the audience, and governing bodies. This document provides technical recommendations and information concerning:
• Loudness measurement using the ITU-R BS.1770 recommendation.
• Target loudness for content exchange without metadata.
• The set up of reference monitoring environments when producing for the expanded range
of digital television, with consideration for multiple listening environments in the home.
• Methods to effectively control program-to-interstitial loudness.
• Effective uses of audio metadata for production, distribution, and transmission of digital television.
• Dynamic range control within AC-3 audio and contemporary conventional dynamic range
control as an addition or alternative, including recommendations for loudness and
dynamics management at the boundaries of programs and interstitial content.
The goal is to correctly set up your listening environment once and make sure you are always listening at this level when creating content. This is true even if you must use headphones to monitor.
With the monitor level set correctly, always mix relying on your hearing. Use a BS.1770 loudness monitoring tool to confirm what you hear.
When generating content and the program delivery level requirement is unknown or has not been specified, mix Dialog Level to -24 LKFS with true peaks below -2 dB TP.
The station AC-3 encoder's dialnorm will be set to match the average Dialog Level loudness of the content.
Measure the loudness of all audio channels and all elements of the soundtrack integrated over the duration of the short form content.
Measure the long form content audio when typical dialog is present and record this value as the Dialog Level of the content.
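Once a BS.1770 meter has reported an integrated loudness and a true peak, checking them against the document's targets is simple arithmetic. A sketch (the ±2 LU tolerance window and all names here are my assumptions, not from the document):

```python
TARGET_LKFS = -24.0        # mix target when no delivery spec is given
MAX_TRUE_PEAK_DBTP = -2.0  # true-peak ceiling from the recommendation

def check_delivery(integrated_lkfs, true_peak_dbtp, tolerance_lu=2.0):
    """Check a measured mix against the -24 LKFS / -2 dB TP targets.

    Returns (ok, gain_db) where gain_db is the static offset that would
    bring the measured loudness onto target. tolerance_lu is an assumed
    acceptance window, not a value taken from the document.
    """
    gain_db = TARGET_LKFS - integrated_lkfs
    ok = abs(gain_db) <= tolerance_lu and true_peak_dbtp <= MAX_TRUE_PEAK_DBTP
    return ok, gain_db
```

The measurement itself still has to come from a BS.1770-compliant meter; this only sanity-checks the numbers it reports.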
I especially liked the part about Film being the only Medium which can communicate Silence. Maybe it is something to do with Film being a cool Medium. I'll have to think about that. Very interesting. A thread on those kinds of things would be great too.. but thanks. This is what I call an efficient Thread Format :D
Editing because I just thought to qualify.
Cool Medium being one which appeals to more than one sense, as Film does. Books, Press and Radio are Hot Mediums because they appeal to only one sense. I was wondering if sound and vision together allow Film to signify a kind of closure by closing down one of those senses, so it creates silence. But it was very interesting when you seemed to show how this happens in the Language of Film itself.. I need to re-read that part.. Great Thread.
Spent most of my morning reading through this... great collection of information. I was sparked by the discussion that was had during an AES New York section meeting at Sound One last week, with an excellent demonstration by Dominick Tavella.
Always had a soft spot for all things post. Since upgrading the studio at work last summer, time to put some more of this info into practice.
Note: THE FOLLOWING IS A PERSONAL OPINION, NOT A FACT; the only fact about it is that it will probably piss off at least 200 people....
I would not recommend/nor would I be happy mixing for FILM and THEATRICAL utilizing a bass management enabled A-Chain. I feel that it may not provide you with a proper understanding of what the mix will sound like in a theatrical B-Chain system.
I see no reason why you shouldn't mix on a bass managed system for BROADCAST or GAMES etc., since that will end up being the B-chain anyway. So I'd go for it with your gear, what the heck!
If anyone else can chime in on using Bass Management, I'd be very grateful. I am using the JBL LSR 5.1, which has its own BM and works really well.
I know for TV, DVD, and the like it is necessary to use it so when mixing in 5.1 you are not overpowering the .1, since most at home do not have full range speakers (and home receivers use BM setups anyway).
HOWEVER, for film (and I have read a ton of threads on here and elsewhere) I don't think it's a good idea to use BM for a theatrical release.
Here is an example: let's say there is a scene of a spaceship landing and you have some really powerful effects with some SERIOUS sub-harmonic lows for, oh, about a few seconds. You send it to all the speakers along with the score (which has some strong cello and bass strings occurring at the same time). With BM on (let's say at 120 Hz) it's hitting the sub fairly well, but both the mixer and the director want a tad more oomph going to the sub, so you send some more (i.e. bussing it or using Waves 360 or whatever means) and it hits VERY nice and everyone is happy with that scene. All the speakers are hitting with the right level, and the sub (again, we have BM on) is getting the lows below 120 Hz from all 5 speakers plus the extra signal the mixer bussed to it, and that sub is REALLY moving, shaking the room, and you can feel it in your spine (not distorting, just set and flavored flawlessly).
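For intuition, the redirect-the-lows behavior described in that scenario can be sketched numerically. This is a crude FFT brick-wall split, not how a real bass manager works (real units use proper crossover filters with controlled slopes); the 120 Hz corner and all names here are mine:

```python
import numpy as np

def bass_manage(mains, lfe, fs, fc=120.0):
    """Crude bass management sketch: content below fc in every main
    channel is removed from that channel and summed into the sub feed
    along with the LFE channel. mains has shape (channels, samples)."""
    n = mains.shape[-1]
    low = np.fft.rfftfreq(n, d=1.0 / fs) < fc  # bins below the crossover
    sub_spec = np.fft.rfft(lfe)                # LFE goes straight to the sub
    managed = np.empty_like(mains)
    for ch in range(mains.shape[0]):
        spec = np.fft.rfft(mains[ch])
        sub_spec = sub_spec + np.where(low, spec, 0)             # lows to sub
        managed[ch] = np.fft.irfft(np.where(low, 0, spec), n=n)  # highs stay
    return managed, np.fft.irfft(sub_spec, n=n)
```

In a theater with no bass management, those redirected lows never reach the sub at all, which is exactly the translation problem being debated here.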
Everything is all ear candy and dandy and everyone's happy.
Now, they test it out in a theater, which of course doesn't use BM, and when that scene comes and the spaceship lands, you hear a subtle thump. That's it. All the bass from the SFX and music score no longer has the impact it originally did in the mixing room.
Unless I missed something here, BM shouldn't be used in Theatrical mixing since in the scenario I have, it would defeat the whole purpose and if it was mixed without BM, then everything would translate over into the theater.
I don't see any questions in your post....?
Besides, your scenario is pretty flawed
You are over-thinking it --- the bass management / non-bass-management question is not a question of life and death, it is much subtler than that. Work with what you have, and when you check your mix on a dub stage, you'll correct the problems (if any) - that's it.
Three commonly used workflows, as well as a fourth that is growing in popularity with the availability of hard disk recorders and the BWF file format, provide a good means for identifying all of the considerations.
1. Feature film – double system at 24fps and 48kHz audio recording for 24fps postproduction.
2. Film-based television providing sync dailies on DigiBeta (23.976 and 48kHz) for 23.976 postproduction.
3.Film or HD Production at 23.976 with single or double system audio recording for 23.976 postproduction.
4.Feature film and film-based television production at 24fps with hard disk recording at 48.048kHz for 23.976 postproduction.
1. Feature Film Double System
Most, if not all, feature film production intended for theatrical release shoots film at 24fps while recording audio digitally at 48kHz. Because the audio elements tend to move to audio post via OMF, capturing the production audio digitally and at the highest quality is a requirement.
Picture: The camera negative is transferred to video directly or from a workprint, but the film is now running at 23.976fps during the telecine process in order to create a known 2:3 pulldown cadence to the 29.97fps video rate. Once digitized into the Avid in a 24p project, the frames are “stamped” as 24fps in order to play back in sync with audio captured directly via AES/EBU or Broadcast WAV files recorded at 48kHz.
Audio: Because the audio was captured digitally – either synced to word clock or imported as 48kHz – it expects to be in sync with the picture as it was originally captured – 24fps. The native sample rate of a 24p project is 48kHz and all other rates are resolved to that during capture. An example of this is capturing analog audio with the .99 setting active. This tells the Media Composer to “count” samples .1% slower and stamp the captured files as 48kHz.
When playing back at 48kHz, the audio plays back .1% faster creating a true 24fps playback from 23.976 sync sources. When capturing digitally at 48kHz, no samples are converted. It is a digital clone.
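The .99 trick is exact 1000/1001 arithmetic, which is easy to verify with rational numbers (the variable names are mine):

```python
from fractions import Fraction

FILM_FPS = Fraction(24)                      # true film speed
VIDEO_FPS = FILM_FPS * Fraction(1000, 1001)  # 23.976... fps in telecine

# The ".99" analog capture counts samples 0.1% slower than nominal,
# then stamps the file as 48 kHz:
capture_rate = Fraction(48000) * Fraction(1000, 1001)  # ~47952.05 Hz
speedup = Fraction(48000) / capture_rate

# Playing the 48 kHz-stamped file therefore runs it 1001/1000 (0.1%)
# fast, exactly undoing the telecine slowdown:
assert speedup == Fraction(1001, 1000)
assert VIDEO_FPS * speedup == FILM_FPS
```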
2. Film-Based Television with Sync Dailies
Due to the tight schedules in television programming, dailies are already synced when delivered to the editing room. The transfer facility has already resolved the original shooting rate of 24fps to 23.976 and has sample-rate-converted the digital audio sources to be in sync on the source tapes. Because the tapes carry 48 kHz audio against 29.97 NTSC video, the audio must be sample rate converted when the picture goes from 24fps to 23.976. The path looks like this:
Picture: 24 -> 23.976 to 29.97 video creating 2:3 pulldown
Audio: 48kHz -> 47.952kHz slow down (.1%) -> sample rate converted -> 48kHz to 29.97 video
In this case, the postproduction should be done in a 23.976 project type, since it assumes that the 48kHz audio sample rate is in sync with picture playing back at 23.976fps from the DigiBeta captured sources. It has the same result as that of a film-to-tape transfer to tape. But since there is no need to speed up to true 24fps in this project, audio samples remain untouched at 48kHz throughout the postproduction process, through the audio mix and back to the NTSC broadcast master. Using this project type for this workflow means the audio only goes through one sample rate conversion, during the film-to-tape transfer.
3. Film or HD Production at 23.976
Film cameras have been able to shoot at 23.976fps for many years, but it was not until HD acquisition became popular that people started using the frame rate regularly in production. Although HD cameras can shoot at true 24fps, the preferred shooting rate is 23.976fps because of the audio consideration when down converting to NTSC. No one wanted to deal with a sample rate conversion in the audio when working in a fully digital environment. In a double system environment, the DAT or hard disc recorder records at 48kHz. So shooting at 23.976fps eliminates the need to do a sample rate conversion or an analog audio capture with the .99 setting.
The resulting NTSC down convert is now the same as in the previous example where 23.976 video with 2:3 pulldown is in a DigiBeta tape with sync 48kHz audio. Keep in mind that when capturing directly from the Sony HDW-F500, the down conversion and 2:3 insertion processes cause the picture to lag 2 frames behind the audio. To address this condition, Avid Media Composer Adrenaline and Avid Xpress Pro systems include an audio delay feature in the Capture Window to delay audio by 1-5 frames as needed to resync picture and sound during the digitizing process.
If working double system with DAT or BWF files from the hard disk recorder, the 48kHz recording will come straight in with no sample rate conversion or speed change to sync with the 23.976 picture.
4. Feature Film with 48.048kHz Audio Recording
Even though film cameras can run at 23.976, using them in that way never caught on for a variety of reasons. However, the audio workflow can change to allow a 23.976 postproduction workflow despite the film running at 24fps. This workflow is only for picture capture frame rate of true 24fps and a NTSC postproduction workflow. DAT, and more common to this workflow, hard disk recorders, can record at 48.048 kHz – which is really just 48kHz with a .1% speed up as part of the capture.
-0.1%: 47.952 kHz
normal: 48.000 kHz
+0.1%: 48.048 kHz
If a BWF file is detected as having been recorded as 48.048, you will be presented with a dialog box asking whether the import should perform a sample rate conversion or import as 48 kHz. If no sample rate conversion is chosen, the imported files are stamped as 48kHz, thus slowing them down by .1%; the same amount that the film is slowed down during the film to tape transfer. This way no sample rate conversion is performed, and a digital audio pipeline is maintained for the postproduction process.
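The 48.048 kHz trick is the same 1000/1001 arithmetic run in the other direction (variable names are mine):

```python
from fractions import Fraction

# 48.048 kHz is just 48 kHz sped up by 0.1% (48000 * 1001/1000):
rec_rate = Fraction(48000) * Fraction(1001, 1000)
assert rec_rate == 48048

# Stamping the file as 48 kHz slows playback by exactly 1000/1001, the
# same slowdown the 24 fps film gets going to 23.976 in telecine:
audio_slowdown = Fraction(48000) / rec_rate
film_slowdown = Fraction(24000, 1001) / Fraction(24)
assert audio_slowdown == film_slowdown == Fraction(1000, 1001)
```

So audio and picture take the identical 0.1% slowdown and stay in sync through the NTSC post path with no sample rate conversion at all.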
To support the double system workflow used in feature film, it was necessary to create a true 24fps environment. This allowed audio dailies from the set to be directly captured into the Film Composer and remain in sync with the picture. This capability became a requirement when audio recording moved to digital formats such as DAT. Productions wanting to maintain a pure digital pipeline for audio needed to capture as AES/EBU with no sample rate conversion in order to maintain the highest quality. Audio sample rate and digital audio workflow are the main decision drivers of working at 23.976 or 24 in the Avid.
The same can be done during the film-to-tape process from DAT or BWF recorded at 48.048kHz to create sync digital dailies directly to disc via a MediaStation or directly to a DigiBeta tape.
Capture, Edit, Digital Cut
Capture: The project type determines the native capture rate of the project, either 23.976 or 24p. It also determines the native audio sample rate of that project that will not have a sample rate conversion or analog process involved when capturing, playing, or digital cut.
Edit: In the Film/24p settings you will see the “Edit Play Rate” as either 23.976 or 24. This control sets the play rate of the timeline. It does not affect any of the digital cut output settings. This control lets you set a default state of frame rate for outputs that are made directly to tape, such as a crash record.
Digital Cut: Here you can output the timeline as 23.976, 24, or 29.97. The important thing to remember is that this is the playback speed of the Avid timeline, not the source tape destination. The NTSC frame rate of 29.97 cannot be changed. What is changing is the frame rate of the picture within the NTSC signal.
1. 23.976. This creates a continuous 2:3 cadence from beginning to end of a sequence and is the expected frame rate of a broadcast NTSC master from 24 frame sources.
2. 24: This is used for feature film production to create a true “film projected” speed from an Avid timeline on NTSC video. It is also the output type to use when using picture reference in a Digidesign Pro Tools system using OMF media from a 24p project type. Note that this is not a continuous 2:3 cadence. Adjustments are made over 1000 frames with the pulldown cadence. No frames are dropped, just the field ordering with the 2:3 cadence.
3. 29.97: Timeline will play back 25% faster to create a 1:1 film frame to video frame relationship. This can be considered a 2:2:2:2 pulldown cadence. This output is useful for animation workflow or low cost kinescope transfers where a 2:3 pulldown cannot be properly handled.
The above workflows are intended to maintain a digital audio pipeline from capture through post and into the final mix. It is possible to cross frame rates with sample rates using analog capture instead of digital or using a digital audio sample rate converter at time of capture.
Here are 4 rules of thumb that determine whether or not a 48.048 kHz workflow is possible in your project, and how it must be used:
Rule # 1: That 48.048 kHz workflow only works if picture is being shot at true 24 fps, and picture and sound editorial is being done in NTSC 29.97 or 23.98 HD video.
Rule # 2: That 48.048 kHz workflow only works when the project is finishing and releasing on video (not film).
Rule # 3: That 48.048 files are only useful if they are stamped using –F mode.
Rule # 4: Sound Editorial must agree that it’s a good idea to use 48.048 kHz sound, and the producers and post supervisor must agree with them
it was from 2 different sources... then I mangled it together. too good to pass up for the reference data. most from the one you cite... as always.. in the morning I surf the web and read books.. this was a great article that needed to be made more available...
there are a lot of great folk on this forum that have discussed VO. I'd do a search first on the subject, then if you have any specific questions feel free to ask. there are only about a gazillion ways to record VO, type of voice, mic selection, eq and mic pre, dynamics, position of mic and distance from talent, recording in the same room or thru the glass, directing the talent and what the VO is for... is it the voice of GOD in the Monster TRUCK Truck truck rally... or is it the VO of a guy who was just beat up on screen and telling his story in a bar with VO.... its an Art... and it's subjective.
I have a quick question about using 'source sounds' and licensing.
I'm working on a short that takes place in a lobby of a doctor's office. The
office is pretty empty, so for the main element of the backgrounds, I'm using
a TV broadcast (just audio - there is no TV in any of the shots).
So I set up a mic in my living room and turned on the TV and recorded a short
piece of a broadcast (about 3.5 minutes worth).
I'm using the entire 3.5 minutes (that's the length of the short).
As I mix, I'm realizing that you can clearly hear what they are saying in the
broadcast, so my question is, am I allowed to use this recording in the project
without getting into trouble with licensing or whatever?
BTW, the director is entering this into a film festival (I think it is the school's
festival, but I'm not sure).
(Sorry, I'm a noobie! I just want to make sure that neither the director or I
get into trouble with THE LAW.)
I wonder if I can resurrect the X-curve as a topic? From my reading, the X-curve came about because film mixes in theatres were perceived as overly bright. The mixes were bright because the high frequency decay (RT60) of a room increases with room volume; if the film was mixed in a smaller room, and subsequently screened in a room of substantially larger volume, voila, a bright/ even harsh mix is heard unless the X-curve is inserted into the theatre's playback system.
It is common practice to insert the X-curve (aka Film curve) during a mix into the monitoring chain at a dub stage, because the room size is comparable to the theatre, and the theatre uses the curve- not because the acoustic properties of the two rooms are different.
If mixes are prepared in a smaller room, the physical properties of the room will reduce the high frequency reverberation time according to the Sabine equation (RT = 0.049 * volume / abs, where 'abs' is the absorption in sabins at the given frequency and volume is in cubic feet). This will, in effect, add an 'acoustic' X-curve to the room. If further high frequency roll-off is added through an X-curve EQ on the monitoring chain during mixdown, an overly bright/harsh mix will be heard in the theatre.
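The Sabine estimate quoted above is a one-liner (imperial units: volume in cubic feet, absorption in sabins; the function name and example figures are mine):

```python
def sabine_rt60(volume_ft3, absorption_sabins):
    """Estimated reverberation time in seconds via the Sabine equation,
    RT = 0.049 * V / A, with V in cubic feet and A in sabins."""
    return 0.049 * volume_ft3 / absorption_sabins

# e.g. a hypothetical 100,000 ft^3 room with 2,000 sabins of absorption
# at a given frequency estimates an RT60 of about 2.45 s at that frequency.
```

Because absorption in sabins typically rises with frequency, RT falls at high frequencies, which is the 'acoustic X-curve' effect being described.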
We have in fact been hearing this phenomenon in theatrical mixes coming out of our room, which has a much smaller volume than a typical dub stage (yes, we do preview the mix at a full sized stage). We have now removed the X-curve to suit our smaller room, with good results.
I am beginning to think that the X-curve is something that needs to be applied thoughtfully, based on the mix environment.
Are you able to comment on this? Thanks so much!
First off, there are quite a lot of X-CURVE / FILM-CURVE posts on my blog and others... just do a search and you'll probably have reading material for a week.
in general the x-curve came about because
A. There was a lot of hiss in the old playback systems
B. The old amplifiers couldn't recreate Bass all that effectively due to power limitations.
What's left of the x-curve is that it is used to this day to match calibration in a bazillion different theaters around the US and around the world. If a theater is calibrated properly, it's supposed to sound like every other properly calibrated movie theater. That's about it... it's all about DUBSTAGES and THEATERS sounding alike, so all those movies we love to listen to sound the same, no matter which theater we walk into, and no matter where they were mixed.
the dreaded x-curve is not really inserted into the mix. A professional dub stage and for that matter a good professional broadcast room should be calibrated. There are a lot of specifics to calibration other than just the EQ curve used in the room.
Rooms vary, not only according to size, but to shape, RT times, and a number of other variables. All need to be taken into account when designing a new room or simply fixing an old one.
No phenomenon here. Just plain acoustics. If you are going to play a product like your mix in a big theater, you need to either mix it in a big theater or set up a comparable room that "sounds like" a big theater. There's no other option than to simply guess at your end product while mixing, or, as some do after mixing a lot of projects, adjust their perceived mix so that it "works" in the theater.
Frankly, I'm mixing a small low budget 20 min short and I'm taking it to SoundTRAX in NY to mix in the A room for this very reason. It's just a short but the client wants a theatrical mix, so he's getting one.
Professionals use the correct tools to do professional quality work. Don't skimp on the tool set. It's like trying to build a car with a single adjustable wrench. You might be able to do it, but I wouldn't want to be your knuckles when you get done.
Hey Georgia, thanks for this thread! It's awesome. I'm a bit of a n00b when it comes to post production, but I'm looking to start making the transition from music to post, and I'm sure this thread will come in very handy when I need to learn something.