Why does a Music Server require high processing CPU power?


I noticed that some music servers use, for example, dual multicore CPUs running under a custom-assembled operating system.  In addition, the server is powered by a linear power supply with choke regulation and a large capacitor bank built from the highest audiophile-grade capacitors.  Various other music servers have similarly high CPU processing capabilities.

I know that music is played in real time, so there is not much time for large amounts of processing.  I also know that the data stream needs to be free of jitter and all other forms of extra noise and distortion.  I believe that inputs and outputs are happening at the same time (I think).

I also know that music servers need to support file formats such as FLAC, ALAC, WAV, AIFF, MP3, AAC, OGG, WMA, WMA-L, DSF, and DFF; native sampling rates of 44.1kHz, 48kHz, 88.2kHz, 96kHz, 176.4kHz, 192kHz, 352.8kHz, 384kHz, 705.6kHz, and 768kHz; DSD formats of DSD64, DSD128, DSD256, and DSD512; and bit depths of 16 and 24.
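For scale, the raw data rate of any of those PCM formats can be estimated directly from sample rate, bit depth, and channel count. A quick sketch (Python, illustrative numbers only) suggests even the highest listed rate is modest by modern CPU standards:

```python
# Raw stereo PCM data rate: sample_rate * bit_depth * channels (in kbps)
def pcm_kbps(sample_rate_hz, bit_depth, channels=2):
    return sample_rate_hz * bit_depth * channels / 1000

# CD quality: 44.1 kHz / 16-bit stereo
print(pcm_kbps(44_100, 16))    # 1411.2 kbps, roughly 0.18 MB/s
# Highest PCM rate listed: 768 kHz / 24-bit stereo
print(pcm_kbps(768_000, 24))   # 36864.0 kbps, roughly 4.6 MB/s
```

Decoding a compressed format like FLAC adds some work on top of this, but the bandwidth itself is small next to what any recent CPU can move.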

Why does a music server require high processing power?   Does the list of supported formats above require high processing power?  Assuming the music server is not a DAC or a pre-amp, what is going on that requires this much processing power?

What processing is going on in a music server?  How much processing power does a music server require?  

Am I missing something?   Thanks.   


hgeifman
Thanks. Yes. My original question was about a home music server/streamer. For example, please see below:

Innuos ZENITH MK3 1TB BLA
Aurender N10 Music Server
LUMIN X1
Aurender N100h
SGM Extreme
AURALIC ARIES G1 STREAMER / MUSIC SERVER
Innuos Statement
And many more
The same principles apply to ANY music streamer: as linear a power supply as possible, as little electronic noise and jitter as possible, and the most accurately timed data stream possible.

Music servers do not pass literal ones and zeros through a wire. Whether the signal is carried by electrical power, light, or some other medium, the data is transferred by switching between on and off states.

The less electronic noise there is, the more reliably the encoded data can be measured, read, and transferred. It is an analog means of transferring data as a series of zeros and ones (off and on) that can be decoded into logical and meaningful information.

To analogize: jitter or electrical noise mixed with the actual data stream can cause incorrect bits in the data. Without getting into electrical noise mitigation, it's not unlike two people giving instructions when only one of them is correct and you can't tell them apart.
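To make that point concrete, here is a small Python sketch (illustrative only, with made-up sample values) of what a single mis-read bit does to one 16-bit audio sample:

```python
# A 16-bit PCM sample; one mis-read bit changes the decoded
# amplitude drastically, which is the poster's point about noise.
sample = 0x1234                   # 4660: the original sample value
corrupted = sample ^ (1 << 14)    # bit 14 flipped by a mis-read
print(sample, corrupted)          # 4660 vs 21044
```

One flipped bit near the top of the word moves the sample value by thousands of steps, which is why digital links are engineered to keep the on/off transitions unambiguous.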

A Field-Programmable Gate Array (FPGA) is a bit like a configurable integrated circuit. It is not set in stone like an ordinary IC, whose function is fixed at manufacture. FPGAs are often used to prototype and test a design before the final IC is created from the FPGA prototype.
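As a rough illustration of that reconfigurability: the basic building block of most FPGAs is a small look-up table (LUT) whose stored truth table defines its logic function, so "reprogramming" amounts to loading different table contents. A hedged Python model (not real FPGA tooling):

```python
# Minimal model of a 2-input LUT: the truth table IS the configuration.
def make_lut(truth_table):
    # truth_table maps (a, b) input pairs to the output bit
    return lambda a, b: truth_table[(a, b)]

# Same "hardware", two different loaded configurations:
AND = make_lut({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
XOR = make_lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
print(AND(1, 1), XOR(1, 1))  # 1 0
```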
@djones51:
I just got a HiFiBerry Digi+ for my RPi3 and hope to configure it soon. Did you have any issues with the setup you can share?
I'm setting up another roon endpoint system in the house and had an RPi laying around. 
TIA!
-Dave
+1 to those who say the best digital sound they have heard is through a dedicated Roon core, in particular an Intel NUC running Roon ROCK.

This blew me away. I've always streamed Tidal to a Bluesound Node and sent the digital signal out to a DAC.  Well, going Roon ROCK (Intel NUC) right into the DAC is like buying a real upgrade!  No foolin'.  This is the best $500 I've spent.
@jbhiller absolutely! 
Roon ROCK has been incredible: 100% stable, no variation in SQ, no performance issues, a lag-free UI; best of all, the SQ is crystal clear.
I now never use the MConnect/UPnP option on my ethernet DAC; Roon is fantastic!
I can only hope that future software updates retain the excellent SQ; 

For another system, I finally set up my RPi3 with a HiFiBerry Digi+ to use as a WiFi (or hardwired) Roon endpoint. Majorly impressed with this hardware and with configuring it through a web browser.