I see... They were generally dealing with analogue sources and destinations in the 70s, so you can't compare what they did then to what you're doing now. They used analogue peak limiters just to catch the worst transients when cutting lacquer masters, to keep the stylus from jumping the groove on playback. The input signal would have been +4 dBu nominal and the output would have been essentially the same. They didn't have digital hard limiters, obviously, as those didn't hit the market till 1993 or so. There was no brick wall, no output ceiling, just a variable threshold and make-up gain.
When people are remastering stuff from the 70s for today, they go back to the mix tapes whenever possible, and those generally wouldn't have any limiting on them. How much hard limiting gets applied in the remastering itself depends on what the client (usually the record label, sometimes the artist) wants. That can mean anything from no limiting at all to "slam your head against the wall till it bleeds." The general rule of thumb for digital mastering is -0.3 dBFS peak for all material, but I see plenty that run higher than that, even within a Least Significant Bit of full scale. iTunes suggests -1 dB. The peak level really doesn't matter as long as you avoid inter-sample clipping on replay. The average level is what determines overall loudness, and obviously, the more limiting you use to push that average up, the less sonic impact the music has.
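If you want to see how the stored samples can hide an inter-sample overshoot, here's a minimal Python sketch (assuming numpy and scipy are available; the 4x oversampling ratio and the test tone are arbitrary illustrations, not any particular mastering tool's method). Oversampling the file approximates what the DAC reconstructs, so comparing the sample peak to the oversampled peak shows whether "legal" samples can still clip on replay:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
# A quarter-sample-rate tone whose samples land between the waveform's peaks --
# a classic worst case for inter-sample overs.
x = np.sin(2 * np.pi * 11025 * t + np.pi / 4)
x /= np.max(np.abs(x))                            # sample peak is now exactly 0 dBFS

sample_peak_db = 20 * np.log10(np.max(np.abs(x)))
x_os = resample_poly(x, 4, 1)                     # 4x oversampling, a crude stand-in for DAC reconstruction
true_peak_db = 20 * np.log10(np.max(np.abs(x_os)))

print(f"sample peak: {sample_peak_db:+.2f} dBFS")   # 0.00 dBFS -- looks fine in the file
print(f"true peak:   {true_peak_db:+.2f} dBFS")     # roughly +3 dBFS -- clips on reconstruction
```

That gap between sample peak and reconstructed peak is why a -0.3 dBFS or -1 dB ceiling is about leaving headroom for replay, not about the number itself.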
Broadcast processors are designed to handle a wide variety of source levels without an engineer present to level-match anything, so average and peak levels don't matter much to the processor either. But the hotter the signal feeding it, the more it will get crushed on-air, so avoiding limiting will give you a sound that's more true to the mix when it's played on the radio.
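To put rough numbers on that, here's a minimal Python sketch (the signals, target level, and clip depth are made-up illustrations, not any real station's processing chain). A station-style AGC brings every source to the same average level, so a hot, heavily limited master gains nothing on-air; it just arrives with its dynamics already flattened:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
mix = rng.normal(0, 0.1, fs)                 # stand-in for an un-limited, dynamic mix
hot = np.clip(mix * 4.0, -0.25, 0.25)        # the same mix, slammed into a limiter

def on_air(x, target_rms_db=-15.0):
    """Very crude AGC: gain the source to the station's target average level."""
    gain = 10 ** (target_rms_db / 20) / np.sqrt(np.mean(x**2))
    return x * gain

def crest_db(x):
    """Peak-to-RMS ratio in dB -- a rough proxy for the dynamics left in the signal."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x**2)))

for name, sig in (("un-limited mix", mix), ("hot master", hot)):
    y = on_air(sig)
    rms_db = 20 * np.log10(np.sqrt(np.mean(y**2)))
    print(f"{name}: on-air RMS {rms_db:+.1f} dBFS, crest factor {crest_db(y):.1f} dB")
```

Both versions come out at the same average level once the processor is done with them, so the loudness you bought with limiting is undone, but the crushing isn't.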