Another benefit of biased representation is that it makes error values extremely noticeable.
It is common in automotive situations to reserve part of the representable range to indicate error or SNA (signal not available) conditions. If you wanted to reserve the largest positive numbers for error codes, then their hex representation would look something like 0x7FFF, which can be hard to spot in a data stream. If you used the largest negative numbers, then they would look something like 0x8000, which is also not very intuitive.
By using biased representation, you can use values like 0xFFFF to indicate error values.
Of course, you could use something like bit-flags to represent errors, but that wastes a whole bit of bandwidth. Much more efficient to amortize the enumerated value over the whole signal.
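To make this concrete, here is a sketch of decoding such a signal. The reserved ranges, scale factor, and offset below are illustrative (loosely J1939-flavored), not taken from any particular spec:

```python
# Toy decoder for a 16-bit biased signal whose top raw values are
# reserved for diagnostics. Ranges, scale, and offset are illustrative
# (loosely J1939-flavored), not from any particular spec.

RAW_MAX_VALID = 0xFAFF               # raw values above this are reserved
ERROR_LO, ERROR_HI = 0xFE00, 0xFEFF  # error-indicator band
SNA = 0xFFFF                         # "signal not available"

def decode_temperature(raw):
    """Hypothetical temperature signal: 0.03125 degC/bit, -273 degC offset.
    Returns None for reserved values -- which stand out as 0xFE../0xFF..
    in a raw hex dump, which is exactly the property described above."""
    if raw > RAW_MAX_VALID:
        return None  # error code or SNA
    return raw * 0.03125 - 273.0

print(decode_temperature(0x2230))  # 0.5 (degC)
print(decode_temperature(0xFFFF))  # None -- signal not available
```

Because the whole reserved band sits at the top of the unsigned range, a single comparison catches both error codes and SNA.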
Most RTCs do use the same crystals that are in wristwatches, which is actually the source of their error.
The tuning-fork crystal design used in wristwatches has a parabolic temperature coefficient, which means that the clock is only really accurate at room temperature. That isn't a problem for a wristwatch, because your wrist is presumably at approximately room temperature, but it does become a problem for electronics that operate with large temperature swings (like the inside of a phone or computer).
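For a sense of scale, the parabolic model is easy to compute with. The coefficient and turnover point below are typical datasheet values for a 32.768 kHz tuning-fork crystal, not measurements of any specific part:

```python
# Frequency error of a 32.768 kHz tuning-fork crystal vs. temperature.
# The parabolic model with roughly -0.034 ppm/degC^2 around a 25 degC
# turnover point is a typical datasheet spec; check your actual part.

K = -0.034         # ppm per degC^2 (typical)
T_TURNOVER = 25.0  # degC, where the parabola peaks

def drift_ppm(temp_c):
    return K * (temp_c - T_TURNOVER) ** 2

def seconds_per_day(temp_c):
    return drift_ppm(temp_c) * 86400 * 1e-6

print(seconds_per_day(25))  # 0.0 -- perfect at the turnover point
print(seconds_per_day(50))  # about -1.8 s/day slow at 50 degC
```

Note the clock always runs slow away from the turnover point, in either direction, which is why uncompensated RTCs in hot enclosures lose time.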
The vast majority of quartz references in consumer devices could be a lot better, but they are not fine-tuned at the factory. They're just "close enough". So they drift a lot.
A simple tuning procedure could make almost all quartz clocks you own a heck of a lot better. But in many cases this is hard to do because the manufacturer never put the tuning components (a variable capacitor or the like) on the PCB.
> Embedded systems, IMO, must be deterministic, reliable and consistent.
This is the definition of a hard real-time system. In most of the literature, 'embedded system' is a broader term that just means there is some compute embedded in a device that performs a larger task.
It's looking more and more like mainstream embedded SoCs will combine the general-use HMI processor core (like an A8/A9) with a smaller real-time core for control tasks.
TI Sitara (Beaglebone family) does this via the PRU, and Freescale added a Cortex-M4 to the i.MX 6SoloX for a similar purpose.
Actually this article is talking about mild hybrid technology, which is about upping the voltage in traditional non-hybrid vehicles from 12V to 48V, giving them some hybrid-like features on the cheap.
If you try to estimate power spectral density using the intuitive unbiased estimator (the DFT), you are going to have a bad time. A vanilla periodogram has very high sidelobe leakage, which means that the energy of the signal at a single frequency will look as if it has "smeared out" across neighboring frequencies. The standard solution is the so-called 'modified periodogram', where the implicit rectangular window is replaced by a windowing function with lower sidelobe leakage. In general, there is a direct tradeoff between sidelobe level and main-lobe width, and in this application you would do well to use a different window, such as the Blackman-Harris or Hamming window. See [1] for more details.
In addition, even the modified periodogram discussed above has asymptotically nonzero variance [2], which means that no matter how many samples you take, you will still have 'noise' in your PSD estimate. If you use biased estimators of the periodogram, such as the Welch-Bartlett method or the Blackman-Tukey algorithm, you will get much better results.
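The leakage difference is easy to see with a plain DFT and no libraries; the record length, tone placement, and 30-bin offset below are arbitrary demo choices:

```python
import cmath
import math

# Leakage of a rectangular vs. Hann window, using a plain O(N^2) DFT.
# N, the tone frequency, and the 30-bin offset are arbitrary demo values.

N = 256
TONE = 10.5  # cycles per record: deliberately NOT centered on a bin

def dft_mag(x):
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N // 2)]

signal = [math.sin(2 * math.pi * TONE * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

rect_spec = dft_mag(signal)
hann_spec = dft_mag([s * w for s, w in zip(signal, hann)])

def leakage_db(spec, offset=30):
    """Energy 'offset' bins away from the tone, relative to the peak bin."""
    return 20 * math.log10(spec[int(TONE) + offset] / max(spec))

print(f"rectangular: {leakage_db(rect_spec):6.1f} dB")  # roughly -35 dB
print(f"hann:        {leakage_db(hann_spec):6.1f} dB")  # far lower
```

The rectangular window's sidelobes fall off at only 6 dB/octave, while the Hann window's fall off at 18 dB/octave, so 30 bins out the difference is dramatic.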
In practice, none of it matters in this case. The leakage of a rectangular window is ~30 dB down at a distance of 30 bins, and they aggregate more than 30 adjacent bins together ("re-bin"), confining the leakage to no more than 1 re-binned bin. To make things worse, the dynamic range of their receiver barely scrapes 40 dB (peak) or 30 dB (SFDR/SINR), rendering the use of a more sophisticated window a moot point.
Claiming that the FT is not necessary for digital audio is like claiming that you don't need the rocket equation in order to build a missile. Sure, it's technically possible, and yes, there were probably early pioneers who forged ahead before the mathematical theory was fully sketched out, but our understanding of the Fourier transform has drastically increased our ability to design acoustical systems.
Those analog anti-aliasing and anti-imaging filters are designed using LTI systems theory, which fundamentally relies on the Fourier transform to reason about transfer functions. The Nyquist-Shannon sampling theorem was proven using the Fourier transform. Without it, you would have to rely entirely on time-domain representations of signals and perform your analysis using tedious convolutions. You couldn't use a spectrum analyzer to examine the signal-to-noise ratio of your CD player. While it's true that digital music could technically exist without the Fourier transform, there is no way in hell it would be as pervasive as it is today.
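A tiny example of why the anti-aliasing filter is non-negotiable: a tone above fs/2 produces sample values identical to a tone below fs/2, so once sampled the two can never be told apart. The frequencies here are arbitrary:

```python
import math

# A 900 Hz tone sampled at 1 kHz yields exactly the same samples as a
# phase-inverted 100 Hz tone. Frequencies are arbitrary demo values.

fs = 1000.0  # sample rate, Hz

def sample(freq_hz, n_samples=16):
    return [math.sin(2 * math.pi * freq_hz * n / fs)
            for n in range(n_samples)]

above_nyquist = sample(900.0)
alias = [-s for s in sample(100.0)]  # 100 Hz tone, phase-inverted

# Maximum difference between the two sample sequences: ~0 (identical)
print(max(abs(a - b) for a, b in zip(above_nyquist, alias)))
```

This identity (sin(2π·0.9n) = -sin(2π·0.1n) for integer n) is exactly what the sampling theorem formalizes, and it was the Fourier transform that made the proof tractable.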
The Windows 95 startup sound, created by Brian Eno, was encoded as ADPCM. Seemed to work okay. A billion people heard it. (8-bit MS ADPCM also sounded horrible.)
You surely mean 8 bit PCM (without "AD") sounded horrible? ADPCM encodes differences in just 4 bits but the decoded values are in the range of 16 bits.
"ADPCM stores the value differences between two adjacent PCM samples and makes some assumptions that allow data reduction. Because of these assumptions, low frequencies are properly reproduced, but any high frequencies tend to get distorted. The distortion is easily audible in 11 kHz ADPCM files, but becomes more difficult to discern with higher sampling rates, and is virtually impossible to recognize with 44 kHz ADPCM files."
I've already linked this article and it has even more details, highly recommended.
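The assumption quoted above (adjacent samples differ only slightly) is easy to demonstrate with a toy fixed-step differential coder. This is deliberately NOT any real ADPCM variant (real ones adapt the step size per sample), just an illustration of why low frequencies survive and high frequencies distort:

```python
import math

# Toy 4-bit differential coder -- NOT a real ADPCM variant, just an
# illustration: store a small quantized difference instead of the sample.

STEP = 2048  # fixed step size; real ADPCM adapts this per sample

def encode(samples):
    codes, pred = [], 0
    for s in samples:
        code = max(-8, min(7, round((s - pred) / STEP)))  # 4-bit delta
        codes.append(code)
        pred += code * STEP  # encoder tracks the decoder's reconstruction
    return codes

def decode(codes):
    out, pred = [], 0
    for c in codes:
        pred += c * STEP
        out.append(pred)
    return out

def rms_error(freq, fs=11025, n=256):
    x = [int(20000 * math.sin(2 * math.pi * freq * t / fs))
         for t in range(n)]
    y = decode(encode(x))
    return (sum((a - b) ** 2 for a, b in zip(x, y)) / n) ** 0.5

print(rms_error(200))   # low frequency: small error
print(rms_error(4000))  # near Nyquist: the coder can't keep up
```

At 4000 Hz the signal's sample-to-sample swing exceeds the largest encodable delta (7 × STEP), so the coder slope-overloads; that is the high-frequency distortion the quote describes, and higher sampling rates shrink the per-sample swing, which is why 44 kHz ADPCM sounds fine.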
I wrote some of those original codecs. I'm aware of what they do. :) The original SoundBlaster card was 8-bit. Creative ADPCM is 8 bit. Dialogic ADPCM -- basically every recorded sound you've ever heard over a telephone -- is 12 bit. You are correct with the modern definition, but I'm talking about 20 years ago, so let's not stomp on history for the sake of Hacker News karma points.
The Microsoft article gets a few things wrong. The distorted sound is not due to reducing the sample rate. The distortion comes from taking a perfectly good 11 kHz file and then ADPCM-compressing it. This is obviously due to throwing away information on each sample as part of the encoding process, not anything due to sample rate. (Of course it sounds better at higher sample rates. More data, more better.)
ADPCM for telephony seldom even hit 11k rates. 6000 and 8000Hz ADPCM files are common. (And nope, not 16 bit either.)
I fully agree with you re 8-bit SoundBlasters and phones. I was talking about the music recorded for CDs, at 16 bits. Converting that to ADPCM was certainly not guaranteed to automatically give good results, but it was at least possible to produce reasonably good sound and save some space.
I'd be of course happy to hear something more about the work you did.
Yes. Reformulate the system's transfer function into a state-space representation, and then solve the algebraic Riccati equation to find an optimal gain matrix. This is known as the LQR problem.
In addition, there is no way in hell that the control algorithms it was using could have been developed without the use of computers. State-space control theory was specifically developed to take advantage of discrete-time control systems.
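For a scalar plant the whole recipe (iterate the Riccati equation to a fixed point, then form the gain) fits in a few lines. The plant and the cost weights below are made-up illustrative numbers:

```python
# Scalar discrete-time LQR sketch: iterate the Riccati difference
# equation until it converges, then form the optimal feedback gain.
# The plant (a, b) and the weights (q, r) are made-up demo numbers.

a, b = 1.2, 0.5  # unstable plant: x[k+1] = a*x[k] + b*u[k]
q, r = 1.0, 0.1  # state and input cost weights

p = q  # initialize the Riccati recursion
for _ in range(1000):
    p_next = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    if abs(p_next - p) < 1e-12:
        break
    p = p_next

k = (b * p * a) / (r + b * p * b)  # optimal state feedback u = -k*x
print(f"gain k = {k:.4f}, closed-loop pole = {a - b * k:.4f}")  # pole ~0.26
```

The open-loop pole at 1.2 is unstable; the computed gain pulls the closed-loop pole well inside the unit circle. This iteration is exactly the kind of computation that is trivial on a digital computer and hopeless by hand for realistic state dimensions.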
OSEK is an industry-standard RTOS specification used by almost all automotive players, designed specifically for the automotive environment. Toyota actually claimed to use an OSEK-compliant RTOS, but it later surfaced in this lawsuit that they had written their own implementation that was never certified by an outside organization. OSEK is in the process of being superseded by AUTOSAR, which defines much more than just the OS and includes a large HAL that allows for plug-and-play middleware libraries. Unfortunately it isn't economical for every ECU to adopt AUTOSAR: it has heavy resource requirements (>2 MB of RAM), so many applications don't use it.
Also on the horizon is ISO 26262, which mandates quality assurance for automotive embedded code in the form of paper trails. Unfortunately, given the huge amount of work the standard requires, some automakers are choosing to ignore it and hope it doesn't become mandatory.
You can see a real-world example of signals with reserved error/SNA ranges, as defined by SAE for heavy-duty trucks, here: https://www.scania.com/content/dam/scanianoe/market/au/produ...