Cog Economics
bradharper
Posts: 64
So, my feature-set (as coded with my level of experience) exceeds my processing budget. Following up on my previous post regarding multitrack audio, which is working for the most part, I'm now paying the cog penalty for how I chose to mix the samples.
I'm three cogs short and welcome any advice pertaining to opportunities to free up a cog, or two - or three.
Here's what I currently have: (arrowed ==> cogs are spawned by their parent)
Cog 0 - Management: user input, feature coordination.
Cog 1 - Multi-PWM LED Driver: maintains 4 independent pulses and responds to management interface.
Cog 2 - Audio Track #1 Buffer: manages an audio track buffer, applies volume and sample-rate adjustments, and passes samples to the track mixer.
==>Cog 3 - Audio Track #1 SPI driver: safe_spi SD access.
Cog 4 - Audio Track #2 Buffer: manages an audio track buffer, applies volume and sample-rate adjustments, and passes samples to the track mixer.
==>Cog 5 - Audio Track #2 SPI driver: safe_spi SD access.
Cog 6 - Audio Track Mixer: monitors and mixes sample data from tracks 1 and 2.
Cog 7 - Auxiliary LED Manager: applies dynamic sequencing for 8 LEDs.
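For reference, each buffer cog hands its prepared samples to the mixer through hub RAM; conceptually, the mixer's side of that handoff looks something like this (heavily simplified, and every name here is invented):

VAR
  long  track1Sample                            ' written by the track #1 buffer cog
  long  track2Sample                            ' written by the track #2 buffer cog
  long  mixedSample                             ' picked up by whatever drives the audio output

PRI MixerLoop
  repeat
    ' sum the two prepared samples and clamp the result to the 16-bit range
    mixedSample := (track1Sample + track2Sample) #> -32768 <# 32767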
Here's the functionality I *have* to find a way to integrate:
Cog 8 - Accelerometer Driver: triggers audio and LED events.
==>Cog 9 - CORDIC Function: drastically simplifies interpretation of the sensor data.
Here's the functionality I'd *like* to find a way to integrate:
Cog 10 - Audio Track #1 Player: allows asynchronous invocation of audio events on track 1. Reads the WAV from SD and sends it to the corresponding buffer. Without launching play calls into their own cog, I spend a lot of time waiting, and audio events on track 1 can never interrupt each other.
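To be concrete, this is the shape of what I mean by launching play calls into their own cog; untested, and every method and variable name below is just a placeholder:

VAR
  long  playerStack[64]                         ' stack space for the player cog
  long  pendingName                             ' pointer to the requested filename
  byte  playRequest                             ' set by the caller, cleared by the player

PUB Start
  cognew(PlayerLoop, @playerStack)              ' player runs in its own cog, so Play returns immediately

PUB Play(fileNamePtr)
  pendingName := fileNamePtr                    ' a new request simply replaces the old one,
  playRequest := true                           '   so events on track 1 can interrupt each other

PRI PlayerLoop
  repeat
    if playRequest
      playRequest := false
      ' open the WAV with fsrw here and stream its samples into the track #1 buffer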
To me, it seems the most likely place I can cut corners is the four cogs used by the two track buffers. At one point I had both tracks managed by one buffer, but opted for the current route because it made the code much cleaner and more intuitive. Going back to that approach would, however, free up two cogs, at the risk of introducing synchronization and timing issues, since both buffers would have to be managed sequentially instead of (essentially) in parallel as they are now. Condensing into a single buffer object would also have an effect on some of the FSRW/SPI/SD contention I'm currently seeing - I'm not sure whether it would alleviate or exacerbate the situation. Worst case, I have to scratch multitrack audio, which would significantly detract from the goal of the project.
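For what it's worth, the condensed single-cog version I keep sketching out would look roughly like this, with one loop walking both tracks each pass instead of two cogs running side by side (all of the helper names are made up):

PRI BufferLoop | track
  ' one cog services both track buffers back to back instead of in parallel
  repeat
    repeat track from 0 to 1
      if BufferLow(track)                       ' hypothetical: does this track's buffer need a refill?
        RefillFromSd(track)                     ' hypothetical: fsrw read into that track's buffer
      ApplyVolumeAndRate(track)                 ' hypothetical: per-track volume / sample-rate step
      HandToMixer(track)                        ' hypothetical: pass the processed samples to the mixer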
Additionally, I might be able to somehow integrate the Auxiliary LED sequencing into the main PWM Management driver, although again, at the expense of cluttering the code and hacking two distinctly different purposes into one loop.
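If I did fold them together, the combined loop would probably end up something like this, with the slow 8-LED sequencing riding along on the PWM loop (sketch only, names invented):

CON
  SEQ_DIVIDE = 50                               ' advance the aux sequence every 50th PWM pass

VAR
  long  seqTick

PRI CombinedLedLoop
  repeat
    UpdatePwmOutputs                            ' hypothetical: the existing 4-channel PWM step
    if ++seqTick => SEQ_DIVIDE                  ' the aux sequencing runs far less often,
      seqTick~                                  '   so it just piggybacks on the same loop
      AdvanceAuxSequence                        ' hypothetical: step the 8-LED pattern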
I know without seeing the code it's probably difficult to make extremely insightful suggestions, but I wanted to see if anything stood out as an obvious candidate for refactoring to developers with more experience on this platform.
I appreciate everyone's time.
Comments
If you are willing to have a go at assembler, then I suspect...
Though I haven't seen your code, I would think that cogs 1 and 7 could be combined into one cog, and cogs 3 and 5 into another. I further suspect that the function of cog 8 could be combined with the cog handling 1 and 7.
In assembler with a scheduler you can have numerous simultaneous threads in one cog, all individually timed to 1 uSec, with (usually) little or no interference between them, provided none have really high performance timing requirements.
The resulting code is not a "hack job".
Cheers,
Peter (pjv)
@Andrey, I'm using 22 kHz, but the mixing and auxiliary sequencing might very well merge peacefully.
@ericball, thanks... I'll keep that in mind.
Incidentally, just this evening I was able to resolve, via locks, the "threading" issues with (the way I'm using) fsrw/safe_spi/SD, and I'm now able to open, read, and cleanly mix multiple audio files simultaneously. This should make merging the two buffer objects a feasible goal, since one of the original reasons I separated them was to alleviate their clobbering of each other's SD access. Thanks, guys.
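In case it helps anyone else, the lock arrangement boils down to something like this (simplified, and the method names are made up, but locknew/lockset/lockclr are the built-ins I'm using):

VAR
  long  sdLock

PUB Init
  sdLock := locknew                             ' one lock shared by every cog that touches the card

PRI LockedSdRead(bufferPtr, byteCount)
  repeat while lockset(sdLock)                  ' spin until this cog owns the SD card
  ' ... the fsrw read into bufferPtr goes here ...
  lockclr(sdLock)                               ' release so the other track's cog can get at the card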