Digital audio anybody?
Rayman
Posts: 14,853
I'm feeling braver in my solder skills and am looking at these two chips for digital audio:
LM4930LQ
LM49450SQ
Both take I2S digital audio input and have internal DAC and headphone amp...
They are also controlled by an I2C bus.
I think it wouldn't be too hard to have the Prop act as an I2S slave...
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Prop Info&Apps: http://www.rayslogic.com/propeller/propeller.htm
Comments
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Coyote-1 does it nicely.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
It's not particularly silly, is it?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Prop Info&Apps: http://www.rayslogic.com/propeller/propeller.htm
The OBEX actually has a digital audio S/PDIF output object, in case you missed it (it operates using HSS).
HSS
http://www.andrewarsenault.com/hss/
SPDIF
http://obex.parallax.com/objects/344/
I made my black box with both of these objects.
http://forums.parallax.com/forums/default.aspx?f=21&m=376422&p=1&ord=a
My sound card does 96 kHz at 24-bit. It's a huge step up from 44.1 kHz @ 16-bit when you are looking to reproduce stringed instruments. I'd assume 192 kHz would buy you some cleaner top-end harmonics, but it'd use a metric crapload of storage!
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
It's not particularly silly, is it?
BradC: Seems there's a lot of music available at 96 kHz, 24-bit. Maybe I should first see if 192 kHz @ 24 bits/sample stereo is even possible... I think I just calculated something like a 2 MB/s data rate...
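For the curious, that back-of-the-envelope figure is easy to check: the raw PCM data rate is just sample rate × bits per sample × channels. A quick sketch (plain Python, nothing Propeller-specific; the tightly-packed figure comes out a bit under 2 MB/s, and padding each 24-bit sample into a 32-bit word pushes it higher):

```python
# Raw PCM data rate: sample_rate * bits_per_sample * channels / 8 bytes/s.
def pcm_rate_bytes_per_sec(sample_rate_hz, bits_per_sample, channels=2):
    return sample_rate_hz * bits_per_sample * channels / 8

print(pcm_rate_bytes_per_sec(192_000, 24))  # 1152000.0 -> ~1.1 MB/s packed
print(pcm_rate_bytes_per_sec(192_000, 32))  # 1536000.0 -> 24-bit padded to 32-bit words
print(pcm_rate_bytes_per_sec(44_100, 16))   # 176400.0  -> CD rate, for comparison
```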
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Prop Info&Apps: http://www.rayslogic.com/propeller/propeller.htm
"I wonder if you can get music in 192-kHz, 24-bit format... You'd think that would be pretty good quality..."
Are you serious? Got a neurophonic interface with a 192 kHz bandwidth? How about 5 MHz?
It is kind of hard to listen to music that goes 10 or 500 octaves beyond natural human hearing range without one.
synthwise.com/2009/02/01/sr0-eternal-monolithic-pandoras-box-automatic-fractal-numeric-synthesizer-and-radio-transmitter
Otherwise how could it be better than actually putting your ears to the sound source,
or connecting the best microphones to the best amp and the best speakers?
Post Edited (VIRAND) : 9/26/2009 3:11:14 AM GMT
This is an argument I've had a few times.
Just because you can't hear them does not mean they are not there, and when you get two frequencies you can't hear beat against each other, suddenly you can hear the byproduct.
I can't hear the difference between 2.5 mm power cable and gold-plated, oxygen-free, $25 USD/ft speaker cable, but I can sure hear the difference between 48 kHz/16-bit and 96 kHz/24-bit when I record an acoustic guitar.
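The beat effect mentioned above is easy to see numerically: sum two sines a fixed distance apart and the result is a carrier amplitude-modulated at the difference frequency. A small sketch (plain Python; the 23 kHz/24 kHz pair is just an illustrative choice, and note that hearing the 1 kHz beat from two purely ultrasonic tones still requires a nonlinearity somewhere in the chain — the linear sum contains no 1 kHz component by itself):

```python
import math

f1, f2 = 24_000.0, 23_000.0   # two hypothetical ultrasonic tones, Hz
beat_hz = abs(f1 - f2)        # envelope repeats at 1 kHz, well within hearing

# Sum-to-product identity: sin(a) + sin(b) = 2*sin((a+b)/2)*cos((a-b)/2),
# i.e. a (f1+f2)/2 carrier modulated by a slow (f1-f2)/2 envelope.
for t in [0.0, 1e-5, 2e-5, 3e-5]:
    direct = math.sin(2*math.pi*f1*t) + math.sin(2*math.pi*f2*t)
    product = 2*math.sin(math.pi*(f1 + f2)*t) * math.cos(math.pi*(f1 - f2)*t)
    assert abs(direct - product) < 1e-9  # identity holds numerically

print(beat_hz)  # 1000.0
```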
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
It's not particularly silly, is it?
I think that's like a 4 MHz bitrate, but sounds doable...
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Prop Info&Apps: http://www.rayslogic.com/propeller/propeller.htm
I wonder whether there are really that many musical harmonics up there, or whether it's just sampling artifacts.
What kind of speakers can make so much ultrasound that you can hear beats of those harmonics?
The transducers on the Ping))) module.
Once I made an ultrasonic AM transmitter with similar transducers, and a receiver for it.
The Doppler shift was very significant: when I walked around with the receiver, the received ultrasonic music would speed up and slow down, like vinyl does if you fool around with the turntable.
Otherwise it was very quiet up there... I didn't hear mice or bats or bugs, only TV sync harmonics.
Chip said you can really hear the difference for things like cymbals...
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Prop Info&Apps: http://www.rayslogic.com/propeller/propeller.htm
Just looking at raw frequency-over-time data shows 44.1 kHz to be adequate. If phase is considered, then it's lame. This is why great vinyl has better imaging, for example. The subtle phase cues above 8 kHz really shine on analog sources, and on higher-sample-rate digital ones. The lower-frequency beats from those are audible and often show up in stringed instruments.
The really great thing about 24 bit is the extra dynamic range. 16 bits gives about 96 dB, and that's a lot --more than the average person needs. But the actual resolution, in terms of dynamics, is kind of poor at lower decibel levels. For ordinary room listening, this actually works out for us, mostly because we lack the sensitivity for it to really matter at lower levels. Turn it up, though, and it's a whole different ball game. There is a reason you will see people spinning vinyl in the clubs. Phase and dynamic-range accuracy are two of those reasons.
24 bit can also capture overall sound-level changes. A 12" single has about 80 dB S/N on a good day. However, the actual signal level can vary by another 40 dB or so, and that's not capturable at 16 bit without some compression, severe resolution loss, or clipping. 24 bit can do this nicely. Other things, like pops, are largely captured without severe clipping, making them a lot easier to retouch with accuracy. (Hint: if you do get a clip, just use the pencil icon to "draw" in what you think should have been there, based on a similar wave elsewhere --you will be so close it's scary.)
I cannot tell the difference between a 96 kHz/24-bit recording and the source vinyl after capturing it. I can tell the difference at 44.1 kHz @ 16-bit. I've got 40-year-old ears, too. Higher-frequency roll-off for me starts at about 16.5 kHz now, and I can no longer hear 20 kHz unless it's at insane levels. Frankly, I kind of like this. I used to roll off some music anyway because they would record it too bright. Now most things are sweet! Funny how that all works...
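Those dynamic-range figures come straight from the quantization formula: each bit adds about 6.02 dB, i.e. 20·log10(2) dB per bit. A quick check:

```python
import math

def dynamic_range_db(bits):
    # Theoretical range between full scale and one quantization step:
    # 20 * log10(2**bits), roughly 6.02 dB per bit.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3  -> 16-bit: ~96 dB
print(round(dynamic_range_db(24), 1))  # 144.5 -> 24-bit: ~144 dB
```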
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Propeller Wiki: Share the coolness!
Chat in real time with other Propellerheads on IRC #propeller @ freenode.net
Safety Tip: Life is as good as YOU think it is!
Post Edited (potatohead) : 9/27/2009 5:18:06 PM GMT
Here is a site that presents sound samples that differ only in harmonic phase:
www.classes.cs.uchicago.edu/archive/2003/spring/29500-1/Student_work/Witherspoon/index.html
I can't tell them apart; but then, I don't possess a "golden ear" either.
-Phil
Found out the hard way about this...
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
My Prop Info&Apps: http://www.rayslogic.com/propeller/propeller.htm
Where I was going with the phase discussion was about the beats that happen between signals, and the harmonic content giving us location information. These things are degraded on lower digital sample rates, and my own experience is fairly consistent about hearing the sound not "fill" the room properly. Interestingly, those differences have always been much less to me when using headphones.
So, I've tinkered a bit and have observed that the shape of our ear acts as a mechanical acoustic filter. When our head is in motion, we can pick up on changes in what we hear, depending on where it's at. Additionally, the higher frequencies seem to play a significant role in how this occurs.
We had an older phone system with a ~2 kHz modulated ring tone that was damn near impossible to locate. The tone had almost no higher-frequency components and was radiated toward the ceiling of the room. All of my co-workers had trouble with this system: we could be in an open office area, hear a phone, then have to go looking for which one was ringing. When I replaced that system, I got one whose ring had higher-frequency harmonics, and that problem completely went away.
For fun, try taping your ears back against your head, then go and locate some sounds you are familiar with. If your experience is similar to mine, you will find your ability to do this significantly diminished.
IMHO, in audio playback on systems that are using loudspeakers in rooms with some sonic reflectivity, phase contributes to our perception of "imaging" where the sounds fill the room.
Also IMHO, the only phase relationships we can hear in a meaningful way, as in some direct sense of pitch or tone, are those where the phase is close enough to cause a beat, and we hear that beat, not the actual phase difference.
Going back to the shape of the ear, another great experiment is to listen to some music that has lots of action in the 4-12 kHz range, where there is additional higher-frequency harmonic content. If you place your finger at the top of your ear, between your ear and your head, then move your ear slightly this way and that, you can focus the filter action of your outer ear and achieve some very significant equalization changes from that action alone. The same effect can occur when you place your finger right below the little flap of skin that partially covers your ear canal. Press inward, then pull your finger down toward your mouth, listening carefully. Similar effects occur, but not at all the same frequencies.
IMHO, we have a library of sound experiences that tells us what something should sound like. When we hear it for real, that filter modifies the sound in ways that are unique to being in different places around us, and that's how we locate things spatially. Echolocation works this way, in that higher-frequency sounds, when reflected, create lots of ambient sound. In a quiet room, with a source of pink noise, turn your head and move your hands and larger objects in the room. That "ambience" will change in consistent ways.
I am saying that we do not directly hear phase differences. From what I can see, our ear is a frequency over time device. However, the mechanical shape of our head and of the ear lobes and their structures do impact sound in a way that is meaningful, and phase plays a part in that.
--->I do not have golden ears either. As I age, they continue to roll off, and I was very worried about this as a kid, but have found the whole affair slow, and largely pleasant as the little things just don't matter, while my appreciation of the sound experience remains unchanged. Wish I could say the same about vision...
Hopefully that clarifies my post above.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Propeller Wiki: Share the coolness!
Chat in real time with other Propellerheads on IRC #propeller @ freenode.net
Safety Tip: Life is as good as YOU think it is!
Post Edited (potatohead) : 9/28/2009 1:26:07 AM GMT
This I know: most people perceive the sound of "now" as the frequencies in the past 40 milliseconds or so.
There is really only volume per sample of time, or volume per eternal frequency.
But all we hear is beats within 40 ms. Oops, this is not going in the right direction.
The FFT is bizarre, and it's relevant to what can be heard from 44.1 K samples per second, because the highest frequency it can measure is 22.05 kHz, and only in phase. 44.1 K samples can represent 22,050 Hz only in phase, when each sample lands exactly at the top and bottom of the 22,050 Hz waves. If 22,000 Hz is sampled at 44.1 K samples per second, it will play back as a beat with 22,050 Hz, but the truth about the 22,000 Hz might be hidden and recoverable with some kind of math, which we don't know, so we don't usually do it.
Now here is some weird stuff. At 11,025 Hz and below, the frequency itself is perfectly reproduced by feeding the samples into a DAC. The reason is that the quadrature (out-of-phase) part of 11,025 Hz is simply represented by all the odd samples, and the in-phase part by the even samples, and it just so happens that these add together to give any phase angle, which can be determined by atan2(odd, even). The proof is that cos²(angle) + sin²(angle) = 1 always, and the graph of (cos(angle), sin(angle)) is a perfect circle. (edit: slightly confused by the first one)
Weird: with 22,050 Hz sampled at 44,100/second, the evens represent 22,050 Hz in phase, BUT the odds represent the out-of-phase part, which would always be zero at that frequency because the samples land on the zero crossings -- except that what really happens is frequency zero, which is DC, and has the value of silence in unsigned samples: $8000, or 32768, expected for 16-bit sound. I don't know why the evens would be 22,050 Hz only and the odds DC only, and not a mixture. I can only guess that imaginary numbers force it to be so. Maybe there is no beat at 22,050 Hz, and 22,049 Hz must fall into the 22,050 bin, and maybe the 22,050 cancels out, leaving the DC... but it seems obvious to me that there is, or should be, a beat there.
I don't understand how (samples per second) / 2 can be the bandwidth when you need (samples per second) / 4 for simple perfect quadrature, unless there's infinite integration. But there is weirdness in calculus that says between 11,025 and 22,050 Hz you can get the curve that belongs on the three points you have there, if you actually do that calculus.
Conclusion: CDs have 44,100 real samples per second that can preserve phase only up to 11,025 Hz, and 48000/4 = 12000, so sure, now I believe that I can hear "real sound" samples only up to 11,025 or 12,000 Hz, and I already know vinyl is better than CD, and anything above 11,025 Hz is $#!+ unless they do the hard math. (And so I DO need 96,000 sps to listen to real music.)
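The atan2 idea above can at least be checked numerically at exactly fs/4. At 11,025 Hz sampled at 44.1 kHz there are four samples per cycle, so consecutive samples of cos(2πft + φ) come out as cos φ, -sin φ, -cos φ, sin φ, and two adjacent samples recover the phase. A sketch under those assumptions (it only works at exactly fs/4; at fs/2, the Nyquist edge, the quadrature samples all land on zero crossings and phase is lost, which is one way to see the ambiguity described above):

```python
import math

fs = 44_100.0
f = fs / 4          # 11,025 Hz: exactly four samples per cycle
phi = 0.7           # arbitrary test phase, radians

x = [math.cos(2*math.pi*f*n/fs + phi) for n in range(4)]
# x[0] = cos(phi), x[1] = -sin(phi): phase falls out of two adjacent samples.
recovered = math.atan2(-x[1], x[0])
print(round(recovered, 6))  # 0.7
```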
Post Edited (VIRAND) : 9/28/2009 8:46:27 AM GMT
First off, the brain does perceive phase. It just can't do so as well for all people as frequency goes down. In fact, those frequencies received DIRECTLY by the cilia inside the cochlea (about 200-2000 Hz) will have the least directionality and phase perception just due to how the auditory hardware functions. (Remember, the ears don't work like microphones.) That isn't to say that it is lacking entirely in phase data for these frequencies (it isn't), but just that most people do not perceive it as strongly as they do higher frequencies (which are interpolated in the brain). Many people cannot localize sounds below 200 Hz at all--which is why it is common for sound systems to have a single non-directional subwoofer (these are frequencies which are also interpolated in the brain and not in the ears, coincidentally enough).
However, just because the audio hardware (ears, brain) can handle phase data doesn't mean that it is consciously available (if that person's auditory cortex processes it at all). Remember, there are an awful lot of people out there who are functionally tone-deaf yet can actually hear frequency and phase just fine (otherwise they'd be unable to distinguish the speech of any one person in a crowded room full of conversations).
So, the quest toward higher audio quality is not useless--it just won't always be appreciated by the target audience of the product. That's the real reason for the 44.1 kHz compromise.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
--RvnPhnx
The ear/brain takes dB levels and phase to create the impression of the left/right position of the sound source. The brain perceives the phase difference due to the time the sound wave takes to reach each ear.
When it is in phase, the sound came from the center. If the left ear's phase is leading, the ear/brain interprets the sound as coming from the left.
At low frequencies, however, the small time delay between the left and right ears represents a very small phase change, due to the slowly changing waveform and resultant sound pressure. That is why the subwoofer can be placed just about anywhere within reason: we cannot discern the phase change at low frequencies.
So, yes, a higher sample rate will result in a better rendition of phase relationships at higher frequencies, and as stated, beats are real-world things that we can hear even if they were generated by two frequencies that were above our range of hearing.
Here is an excellent article on sound localization: www.aip.org/pt/nov99/locsound.html. From the article,
"Like any phase-sensitive system, the binaural phase detector that makes possible the use of ITDs suffers from phase ambiguity when the wavelength is comparable to the distance between the two measurements. ..."
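The ambiguity the article describes sets in roughly where the wavelength shrinks to the spacing between the ears. With the speed of sound at ~343 m/s and an assumed ~0.22 m ear spacing (both round numbers of mine, not figures from the article), the crossover works out to about 1.5 kHz:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
EAR_SPACING = 0.22       # m, rough head width (assumed, not measured)

# Phase ambiguity sets in when one full wavelength fits between the ears:
# f = v / lambda, with lambda comparable to the ear spacing.
ambiguity_hz = SPEED_OF_SOUND / EAR_SPACING
print(round(ambiguity_hz))  # 1559 -> ITD cues get ambiguous above ~1.5 kHz
```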
-Phil