PSTN digital network – it’s TIME to upgrade past 8kHz!

February 28, 2013

Look, internet. Telecom companies. Cell phone companies. VoIP providers. Get this shit straight. It’s 2013 now. We finally have realistic, viable, mass-market electric cars after decades of failures and false starts. We have a digital powerhouse in every pocket. Blindingly-fast 2 megabyte-per-second download speed is the accepted norm for broadband. TV is all on-demand now. Everyone is streaming their life in real-time on Facebook via their mobile phones.

So why in the FUCK are we still forced to endure 8kHz audio as the “norm” for telephone calls? I can’t even hold back here – that is nothing but bullshit. I don’t even answer phone calls anymore. With cell phones garbling audio quality while dutifully upholding that piss-poor 8kHz standard, I find myself spending most of my mental effort on the phone trying to decode the words said by the person on the other end, instead of thinking about the topic at hand. The only thing I ever think about when I get or make a phone call is how much longer I must endure the conversation before I can get off the phone again.

Sick of this crap. Telecom companies, it’s time to upgrade your standards. The standard should be 44.1kHz like it is for everything else. Digital compression (AAC-ELD, maybe?) is within the realm of possibility for all PSTN connections, so why hasn’t any effort been made to phase in a newer standard? We get crystal-clear audio through VoIP connections that aren’t tethered to this archaic 64kbps/8kHz PCM standard, so why the hell can’t we get digital compression between callers? If you’re worried about breaking things that push raw signals over a PSTN connection (e.g. dial-up modems), then use compression to squeeze more audio bandwidth into that same 64kbps channel! But FFS, don’t keep screwing real people to keep old dial-up technology happy. You can detect those signals and automatically apply a different scheme to them. Look at what Skype does: it adapts to available bandwidth in real time during a call. Why can’t you do that for phone calls?
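For a rough sense of the numbers, here’s a back-of-the-envelope sketch in Python. The G.711 figure (8kHz × 8-bit = 64kbps) is the PSTN standard complained about above; the AMR-WB numbers (16kHz sampling, 23.85kbps top mode) come from that codec’s spec and are used here only to illustrate that wideband voice already fits inside a 64kbps channel.

```python
# Bitrate math for the PSTN's G.711 channel versus a wideband codec.
# AMR-WB figures (16 kHz sampling, 23.85 kbps max mode) are from its
# spec and are illustrative, not a claim about any particular carrier.

def pcm_bitrate(sample_rate_hz, bits_per_sample, channels=1):
    """Raw (uncompressed) PCM bitrate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

# G.711: the 8 kHz / 8-bit PCM channel the whole PSTN is built on.
g711_bps = pcm_bitrate(8_000, 8)   # 64,000 bps = 64 kbps

# AMR-WB ("HD Voice"): 16 kHz sampling, so audio up to ~7 kHz,
# compressed down to at most 23.85 kbps.
amr_wb_max_bps = 23_850

print(g711_bps)                    # 64000
print(amr_wb_max_bps < g711_bps)   # True: wideband fits in less
```

In other words, the 64kbps already allocated per call is more than enough to carry double-the-bandwidth audio with a modern codec.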

Get with it, because I’m sick of being afraid of picking up the phone and hearing some garbled 8kHz crap.



  1. This. A million times this. Thank you.

  2. Ahh.. 44.1kHz is a sample rate..
    Your ear can’t even hear above 20kHz, so why demand bandwidth beyond it?
    Yes, I do agree that higher fidelity is wanted by us all, but the piss-poor audio you hear has nothing to do with the 8kHz. It’s the piss-poor data bandwidths the system is working with, shit audio setups in many phones, slow or overtaxed phone CPUs, data dropouts caused by all of the above, and the lossy codecs applied after A-D conversion.. cheap in, cheap out, even with expensive gear on either end..
    Understand what you are working with so you can complain intelligently..

    • Do not tell me to “understand what you are working with” when your own complaints about what I’m talking about are invalid.

      8kHz is a sample rate the same as 44.1kHz is a sample rate, and by Nyquist an 8kHz sample rate can only capture frequencies up to 4kHz. An uncompressed 8kHz voice recording is very, very significantly degraded versus the same recording at 44.1kHz. The ear can’t hear single tones over 20kHz, but fidelity is about more than a single solid tone: you can hear the difference between audio sampled at 22kHz and at 44.1kHz even though you can’t hear the topmost frequencies themselves. At that point it becomes a difference of resolution, the same way you can tell the difference between a 96DPI screen and a 150DPI screen even though 96DPI is “fine”. 150DPI gives you more cleanly rounded edges and less smoothing needed in software to make objects appear smooth.

      With telephone audio, you could be on a pure land line with uncompressed audio, merely processed by the carrier to fit as much audio data as possible into an 8kHz sample stream, and you’d still get shit unintelligible audio. That has nothing whatsoever to do with “overtaxed phone CPUs” or “slow CPUs in the phones” – phone processors are far faster than needed to process gutless, pathetic 8kHz audio signals, and that’s an absolute garbage excuse on your part. The problem is that the PSTN runs on an 8kHz base sample rate, so crap gets pushed into the system right off the bat, with no option for networks to provide a better signal at even 22kHz.

      If you’ve ever played with audio signals before, you’d know the cold, hard difference between 8kHz and 44.1kHz sampling. Don’t give me this “you can’t hear over 20kHz” crap.
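A quick way to see what an 8kHz sample rate really costs, sketched in Python with only the standard library: anything above half the sample rate doesn’t come through degraded, it aliases down to a different frequency entirely and is unrecoverable.

```python
# A 5 kHz tone sampled at 8 kHz produces exactly the same samples as a
# (phase-inverted) 3 kHz tone: frequencies above half the sample rate
# alias back down, which is why 8 kHz sampling cannot carry anything
# above 4 kHz at all.
import math

RATE = 8_000  # PSTN sample rate, Hz

def sampled_tone(freq_hz, n_samples, rate=RATE):
    """Samples of a sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * n / rate)
            for n in range(n_samples)]

tone_5k = sampled_tone(5_000, 16)
tone_3k = sampled_tone(3_000, 16)

# Every sample of the 5 kHz tone equals minus the 3 kHz sample:
# once digitized at 8 kHz, the two tones are indistinguishable.
assert all(abs(a + b) < 1e-9 for a, b in zip(tone_5k, tone_3k))
```

This is the point of the reply above: the loss at 8kHz isn’t a subtle softening, it’s an absolute 4kHz ceiling on what the signal can contain.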

  3. I am not sure, but I am going to guess that the 44kHz you are speaking of is for CD audio. Is the 44kHz of audio bandwidth split between 2 channels? (22kHz bandwidth left, 22kHz right?)
    I agree there is a big diff between 8kc and 44kc.
    The range of hearing is 20Hz to 20kHz, or higher for some people.
    I don’t think I can hear much past 16000 due to getting water in my ear at one point. You can kind of ‘feel’ the higher frequencies, at least in my experience.

    • It’s actually mostly about what gets thrown away below the limit, not what sits above it. A higher sample rate captures the upper harmonics and transients that make a sound recognizable, and it gives the reconstruction filters much more headroom, so combinations of tones get reproduced far more accurately than when the sample rate sits right on top of the highest frequency being sounded out. Nothing in the world is a single solid tone (except, well… digital beeps and boops).

      So the wrong principles are being applied to voice signals when they sample them at 8kHz: that caps the captured audio at 4kHz and cuts out a lot of the information that’s needed to hear the more subtle features of voices.

      Oh, and 44.1kHz is actually per channel, so with stereo audio each side gets its own 44.1kHz sample rate. We don’t really need stereo for voice calls, so a call would only need *half* the raw bandwidth of a CD. Compression, though, already wrings most of the duplication out of the two channels (since both sides usually carry about the same sound, a joint-stereo encoder stores the differences between them instead of two separate signals), so compressed mono voice would take roughly the same bandwidth as compressed CD audio nonetheless.

      • According to the Nyquist theorem, you must sample at a rate greater than twice the highest frequency you want to capture. So yes, to stay true to the original signal’s fidelity, higher is better. But why stop there? Why not make each sample stereo and increase the bit depth to 16 bits? Ahh.. there’s the original problem, but now it gets worsened by network congestion, since you would now require 176kB/sec instead of just 8kB/sec. The original post complains of garbled voice, etc. Perhaps the author was unaware of IP packet queuing, and of the telecoms, routers, and other network nodes that constantly decide to drop non-prioritized IP datagrams when it’s convenient? There’s more to it, once the box is fully opened…
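The 176kB/sec and 8kB/sec figures in the reply above check out; here’s a quick Python sanity check of the raw-PCM arithmetic (bytes per second, as in the comment):

```python
# CD-style capture (44.1 kHz, 16-bit, stereo) versus the PSTN's G.711
# channel (8 kHz, 8-bit, mono), in raw uncompressed bytes per second.

def bytes_per_second(sample_rate_hz, bits_per_sample, channels):
    """Raw PCM data rate in bytes per second."""
    return sample_rate_hz * (bits_per_sample // 8) * channels

cd_Bps   = bytes_per_second(44_100, 16, 2)  # 176,400 B/s (~176 kB/s)
pstn_Bps = bytes_per_second(8_000, 8, 1)    #   8,000 B/s (8 kB/s)

print(cd_Bps, pstn_Bps)   # 176400 8000 -- roughly a 22x increase
```

That 22x gap is for uncompressed PCM, which is exactly why the post argues for a modern codec rather than shipping raw samples.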
