ADPCM
Does anyone understand ADPCM? I'm trying to get a handle on it so I can write some Propeller code.
I get how it works, but when I run a test in Sysquake (a great MATLAB-like program), the encoder seems to take a few beats to get going. That would be fine for a long transmission, but it might cause problems for packetized transmissions that need to play even if some packets are lost(?).
Attached is a PDF showing the original, encoded, and decoded signals. The ADPCM code is from the MATLAB website, and the author adapted it from Microchip AN643.
Comments
In general, an ADPCM encoder will require several samples for the predictor to adapt to the incoming signal. However, it should be able to generate a coarsely quantized version of the signal while it is adapting. I'm not sure why your code generates a zero output during startup -- it must be due to the way the encoder state is initialized.
Try modifying the input signal by inserting 16 zero-valued samples at the beginning, and see how it adapts to that signal. I suspect it's just an initialization problem, and the encoder should work fine after the first 10 samples.
There will be a mismatch between the encoder and the decoder if the decoder does not receive all of the packets. The encoder contains a virtual decoder within its prediction loop that tracks the decoder at the receiver. If the receiver misses data, its decoder will be out of sync with the encoder's predictor. However, G.721's predictor is very "leaky", which allows the encoder's and decoder's predictors to re-converge very quickly. I think the time constant is on the order of 16 to 32 samples.
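Here's a minimal sketch of that structure in C, just to make the point concrete. This is a toy first-order codec, not AN643 or G.721 -- the 4-bit quantizer and the 7/8 leak factor are made up for illustration. The key detail is that the encoder updates its predictor from the RECONSTRUCTED sample, exactly as the decoder will (the "virtual decoder"), and the leak makes any state mismatch between the two sides decay geometrically.

    typedef struct { int pred; } ToyState;   /* predictor state, identical on both sides */

    static int toy_encode(ToyState *s, int sample, int step)
    {
        int diff = sample - s->pred;
        int code = diff / step;                      /* crude uniform quantizer */
        if (code >  7) code =  7;                    /* clip to 4 signed bits */
        if (code < -8) code = -8;
        s->pred = (s->pred + code * step) * 7 / 8;   /* reconstruct, then leak */
        return code & 0x0F;                          /* pack as a nibble */
    }

    static int toy_decode(ToyState *s, int code, int step)
    {
        int c = (code & 0x08) ? code - 16 : code;    /* sign-extend the nibble */
        int sample = s->pred + c * step;             /* same reconstruction */
        s->pred = sample * 7 / 8;                    /* same leaky update */
        return sample;
    }

Because both sides apply the same affine update to their state, the difference between the encoder's and decoder's predictors shrinks by the leak factor on every sample, no matter what codes are transmitted.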
Dave
erco
Sorry if my response wasn't clear. I was responding directly to Jay's question, and I didn't provide much background on how ADPCM works.
ADPCM stands for adaptive differential pulse-code modulation. PCM (pulse-code modulation) is just representing an analog signal as digital numbers. For voice, the signal is usually sampled at 8,000 samples per second and represented by 13 or 14 bits. For telephone transmission, a companding algorithm is used to compress each sample to 8 bits using a logarithmic mapping.
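For reference, that logarithmic mapping is the mu-law (or A-law) companding from G.711. A quick sketch of the continuous mu-law curve -- real codecs use a segmented table approximation, so this is illustrative only:

    #include <math.h>

    /* mu-law compression of a sample x in [-1, 1]; mu = 255 in North
       American telephony.  Quantizing the output uniformly to 8 bits
       gives voice quality comparable to 13-14 bit linear PCM. */
    double mulaw_compress(double x)
    {
        const double mu = 255.0;
        double sign = (x < 0.0) ? -1.0 : 1.0;
        return sign * log(1.0 + mu * fabs(x)) / log(1.0 + mu);
    }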
ADPCM was developed to compress the audio signal down to 4 bits per sample so that twice as many voice calls could be sent over the same digital transmission line. An ADPCM encoder predicts the value of the next sample based on the previous audio that was transmitted to the other side. It subtracts the predicted value from the input value, and transmits the difference instead of the original signal.
This works fine as long as the decoder receives all the information sent by the encoder. If some of the transmitted data is lost, then the decoder has to guess at what the predicted value should be. The predictor only uses the past few samples for the prediction, so eventually the encoder and decoder get back in sync.
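You can see the re-sync behavior with a little driver for the toy codec I sketched earlier in the thread (it needs those toy_encode/toy_decode definitions, and it's still just an illustration, not G.726): reset only the decoder's state mid-stream to simulate lost packets, and the output error decays back down to the quantization error within a couple dozen samples, because both predictors leak toward zero at the same rate.

    #include <stdio.h>

    int main(void)
    {
        ToyState enc = {0}, dec = {0};
        const int step = 256;
        for (int n = 0; n < 80; n++) {
            int x = 8000;                        /* steady input level */
            int code = toy_encode(&enc, x, step);
            if (n == 40) dec.pred = 0;           /* simulate lost packets */
            int y = toy_decode(&dec, code, step);
            printf("%3d  in=%6d  out=%6d  err=%6d\n", n, x, y, x - y);
        }
        return 0;
    }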
If you're interested in more details, you could check Wikipedia or do a web search on ADPCM.
Dave
Thanks, I was wondering if it had to do with initialization. I have no way of knowing if the guy who translated the original C code to MATLAB knew what he was doing. I'll try to figure out if there's some error there. The fact that this code takes in a floating-point value between -1 and 1 shows there's already something different from the original ADPCM algorithm.
Any resolution?
I downloaded the ITU C code for G.726 and ran it on your test data. The results for the first 50 samples are shown in the attached graph. The original data is in blue, and the decoded data is in red. It takes a few samples for the encoder to adapt to the signal, but it looks pretty good after about 5 or 6 samples.
Let me know if you want to try the C code. I can zip it up and send it to you.
Dave
I realized that I might not be scaling the test data the same way you were. My original simulation used a 13-bit range. I changed this to a 16-bit range, and I now get the same results you got. It takes longer for the encoder to adapt its gain for the larger-amplitude signal.
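In case anyone repeats the experiment, this is the kind of scaling difference I mean (hypothetical helper names; the 13-bit case matches my original simulation, the 16-bit case matches your data):

    #include <stdint.h>

    /* Map a unit-range float sample into a 13-bit vs. a 16-bit linear
       PCM range.  The step-size adaptation starts small, so the 16-bit
       signal (8x larger) takes correspondingly longer to adapt to. */
    static int16_t scale13(double x) { return (int16_t)(x * 4095.0);  }
    static int16_t scale16(double x) { return (int16_t)(x * 32767.0); }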
Dave
I've a number of projects like that.
DJ