Question for recording artists/engineers


Let's say you have a jazz band that wants to sell CDs of their music with the best sound quality they can achieve, at the lowest cost, whether outsourced or do-it-yourself. If they want to do a just-in-time style of manufacturing for their CDs, how can they improve things?

Currently they record at 48k in Pro Tools, have the material mastered in Sonic Solutions by Air Show Mastering, and then burn top-of-the-line blanks (Taiyo Yuden) on a Microboards Orbit II duplicator. This has produced average CDs, but we want to do better.

What would you engineers do to improve this so it gets closer to audiophile quality? Would you recommend a different mastering house, different blank CDs, or a different duplicator? Or would you just bite the bullet and go directly to a full-scale manufacturer? We are trying not to have too much money tied up in inventory.

If this is the wrong place to post this question, please suggest another message board to post it on.

Thank you for your feedback and assistance.
lngbruno
All of the downconversions from 96k and 88.2k use the same algorithm; it's just that the non-integer conversions are computationally more expensive. 96k->48k and 88.2k->44.1k are single-phase filters, while 96k->44.1k and 88.2k->48k are multiphase filters of 147 and 80 phases respectively. This means that 147 or 80 sets of filter coefficients have to be stored instead of just one set, and the math has to be written to rotate regularly through all of the phases. Multiphase filters are correspondingly more software- and memory-intensive to implement than single-phase filters, and can be much harder to do in real time. That's a good reason for consumer manufacturers to stay away from them; professional equipment usually has more software horsepower.

As for 88.2 kHz having an advantage over 96 kHz: most current players can play 48 kHz as well as 44.1 kHz, so there seems to be no compelling reason to stay with multiples of 44.1 kHz (assuming a DVD release format).
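If it helps to see where those phase counts come from, here is a minimal sketch of my own (an illustration, not anyone's product code; the function and variable names are placeholders). It reduces each conversion to its smallest integer up/down factors and then runs a rational polyphase resample with SciPy:

```python
from math import gcd
import numpy as np
from scipy.signal import resample_poly

def ratio(src_hz, dst_hz):
    """Reduce dst/src to the smallest integer up/down factors."""
    g = gcd(src_hz, dst_hz)
    return dst_hz // g, src_hz // g

for src, dst in [(96_000, 48_000), (88_200, 44_100), (96_000, 44_100), (88_200, 48_000)]:
    up, down = ratio(src, dst)
    print(f"{src} -> {dst}: interpolate by {up}, decimate by {down}")
# 96k->48k and 88.2k->44.1k reduce to 1/2 (a single filter phase);
# 96k->44.1k reduces to 147/320 and 88.2k->48k to 80/147,
# hence the 147- and 80-phase coefficient sets mentioned above.

# An offline rational resample with a polyphase FIR, e.g. 96 kHz -> 44.1 kHz:
one_second_96k = np.random.randn(96_000)             # placeholder audio
converted = resample_poly(one_second_96k, 147, 320)  # anti-alias filter + resample
```

The upsample factor is exactly the number of filter phases: the converter only ever computes the output samples it keeps, so it cycles through 147 (or 80) different coefficient sets rather than storing one.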

Frequency doubling is not related to compute power. The most important reason for it is clocking. All equipment, consumer or pro, has to process both new and old formats using the same system clocks in its hardware and software operations, and system clocks can usually be divided or multiplied by factors of 2 fairly easily. Second, pro equipment needs to maintain compatibility with previous audio and video recording frequencies in order to deal with archival as well as new material. Ease of sample rate conversion is a factor, but probably a distant third compared to the first two.

'Throwing out every other sample' is something even the worst software writers know better than to do. You should listen to that kind of aliasing sometime to hear why it's wrong.
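To make that concrete, here is a small self-contained sketch of my own (the 30 kHz tone and the names are my choices, not from the thread): a 30 kHz component is perfectly legal at 96 kHz, but if you downsample to 48 kHz by simply dropping every other sample it folds down to 18 kHz, whereas a proper decimator low-pass filters it out first.

```python
import numpy as np
from scipy.signal import decimate

fs = 96_000
t = np.arange(fs) / fs                    # one second of sample times
tone = np.sin(2 * np.pi * 30_000 * t)     # 30 kHz tone: fine at 96 kHz, above Nyquist at 48 kHz

naive = tone[::2]                         # "throw out every other sample", no filtering
proper = decimate(tone, 2, ftype="fir")   # anti-alias low-pass filter, then downsample

def level_at(sig, rate, freq_hz):
    """Spectrum magnitude at the bin nearest freq_hz."""
    spectrum = np.abs(np.fft.rfft(sig)) / len(sig)
    bins = np.fft.rfftfreq(len(sig), 1 / rate)
    return spectrum[np.argmin(np.abs(bins - freq_hz))]

# The 30 kHz tone folds down to 48 - 30 = 18 kHz in the naively decimated
# version; the filtered version leaves almost nothing at that frequency.
print(level_at(naive, 48_000, 18_000))
print(level_at(proper, 48_000, 18_000))
```

With real 96 kHz program material there is always some energy above 24 kHz, so skipping the filter turns that ultrasonic content into audible garbage instead of discarding it.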
Thanks for the tutorial (I wasn't necessarily being literal about the 'throwing away' part). What I'm still curious about is this: how did we end up with two standards as close together as 44.1 kHz and 48 kHz? I understand the reason for picking a frequency in this range, just not how we got to both of these...
In the not-so-distant past, 44.1 kHz was the consumer standard and 48 kHz the professional standard. High sampling rates were something a few engineers experimented with, but they were nothing like an accepted standard as recently as ~10 years ago.
Thanks for taking a stab, Flex, but the reply only reiterates the question... anybody else have an insight?
Zaikesman, this may be closer to what you are looking for.

44.1 kHz came about because of its relationship to NTSC and PAL TV line rates. Early digital audio was recorded on adapted video recorders, and the audio sample rate had to be related to the horizontal video frequency so that both video and audio frequencies could be derived from the same master clock. 44.1 kHz was the rate of the original PCM-F1 format, which I believe was adopted first in Japan and ultimately became the compact disc standard.
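For anyone curious about the actual numbers, the derivation usually cited (my addition, not spelled out in the post above) is three audio samples stored per usable video line, which lands on 44,100 for both NTSC and PAL field rates:

```python
# Commonly cited arithmetic behind 44.1 kHz (my illustration, not from the post):
# three samples stored per usable video line on the PCM adaptors.
ntsc = 3 * 245 * 60   # samples/line * usable lines/field * NTSC fields/s
pal  = 3 * 294 * 50   # samples/line * usable lines/field * PAL fields/s
assert ntsc == pal == 44_100
```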

The use of 48 kHz is based on its compatibility with TV and movie frame rates (50 Hz, 60 Hz) and with the 32 kHz PCM rate used for broadcast. 48 kHz has simple integer ratios with all of the above, which makes it easier to set up time code for studio sync. Looking at an article on 48 kHz, the author mentions your original idea, ease of sample rate conversion, as a primary reason for the concern with integer frequency relationships in the early days of digital audio.
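And the corresponding arithmetic for 48 kHz (again my own quick check, not taken from the article the poster mentions):

```python
# 48 kHz divides evenly into the TV field rates and sits in a simple 3:2
# ratio to the 32 kHz broadcast rate; its ratio to 44.1 kHz is the awkward
# 160:147 relationship behind the multiphase converters discussed earlier.
print(48_000 / 50)        # 960 samples per PAL field
print(48_000 / 60)        # 800 samples per NTSC field
print(48_000 / 32_000)    # 1.5  -> simple 3:2 ratio to broadcast PCM
print(48_000 / 44_100)    # 1.0884... -> 160:147, not a simple ratio
```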