I love to see people analyzing problems, breaking them down into their core components, and discovering that things don't have the limitations they previously thought (i.e. the limitation was really me, not the hardware!).
I'm thinking a gotcha here is the amount of calculation and decision-making involved. And, obviously, things (servo positioning) will change over time.
So you really have three pieces: the calculations themselves, organizing the data so the algorithm can use it efficiently, and a way for something to signal changes.
The good news: things won't be changing that often (especially when viewed from an MCU-MHz perspective), or at least you hope that's the case (see below).
Let's say, for the sake of argument, that you were to create three parallel arrays: one for priority, one for on-time, and one for off-time. Then write an algorithm that steps through the arrays and sets the pins accordingly.
The priority array functions like a database index, so rather than (re-)sorting your arrays every time something changes, you just change the index.
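Something along these lines is what I'm picturing (a minimal, blocking sketch in Arduino-style C++; the pin numbers, array contents, and names are mine, just for illustration):

[code]
// Minimal sketch of the three-parallel-array idea (Arduino-style C++).
// Pin numbers, timing values, and names are illustrative only.

const uint8_t NUM_CH = 3;

uint8_t  pin[NUM_CH]      = {3, 5, 6};              // output pin per channel
uint16_t onTime[NUM_CH]   = {1000, 1500, 2000};     // high time per channel, in microseconds
uint16_t offTime[NUM_CH]  = {9000, 8500, 8000};     // low time per channel, in microseconds
uint8_t  priority[NUM_CH] = {0, 1, 2};              // the "index": order in which to service channels

void setup() {
  for (uint8_t i = 0; i < NUM_CH; i++) pinMode(pin[i], OUTPUT);
}

void loop() {
  // Step through the channels in priority order and set the pins.
  // To re-order later, you only rewrite priority[], never the data arrays.
  for (uint8_t i = 0; i < NUM_CH; i++) {
    uint8_t ch = priority[i];
    digitalWrite(pin[ch], HIGH);
    delayMicroseconds(onTime[ch]);
    digitalWrite(pin[ch], LOW);
    delayMicroseconds(offTime[ch]);   // crude and blocking; the tick-based version below avoids this
  }
}
[/code]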
A problem you'll run into when managing multiple waveforms of different frequencies: you'll have to use something like a least-common-denominator concept to artificially time-slice/time-share. It's hard to explain without a bunch of math, but simply draw a few square waves of different frequencies on a whiteboard, one above the other, then draw vertical lines at the action points. Conceptually you'd need infinite resolution, but engineering is trade-offs, so you'll wind up picking (through calculation or trial-and-error) a minimum time increment to work with.
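One way to picture that trade-off in code: pick a fixed tick and express every channel's on/off time as a whole number of ticks. Here's a rough, non-interrupt sketch (a timer interrupt would be the more usual home for this); the tick size, names, and values are my own assumptions, not from any library:

[code]
// Rough tick-based time-slicer (Arduino-style C++). TICK_US is the
// "minimum time increment" trade-off: every on/off time must be a
// whole number of ticks. Values and names are illustrative only.

const uint32_t TICK_US = 100;            // chosen resolution: 100 us per tick

const uint8_t NUM_CH = 2;
uint8_t  pin[NUM_CH]      = {3, 5};
uint16_t onTicks[NUM_CH]  = {10, 25};    // high time, in ticks (1.0 ms, 2.5 ms)
uint16_t offTicks[NUM_CH] = {40, 25};    // low time, in ticks (two different frequencies)
uint16_t countdown[NUM_CH];              // ticks left in the current phase
bool     isHigh[NUM_CH];

void setup() {
  for (uint8_t i = 0; i < NUM_CH; i++) {
    pinMode(pin[i], OUTPUT);
    digitalWrite(pin[i], HIGH);
    isHigh[i] = true;
    countdown[i] = onTicks[i];
  }
}

void loop() {
  static uint32_t lastTick = micros();
  if (micros() - lastTick < TICK_US) return;   // wait for the next tick boundary
  lastTick += TICK_US;

  // One "vertical line on the whiteboard": visit every channel each tick.
  for (uint8_t i = 0; i < NUM_CH; i++) {
    if (--countdown[i] == 0) {                 // this channel's current phase just ended
      isHigh[i] = !isHigh[i];
      digitalWrite(pin[i], isHigh[i] ? HIGH : LOW);
      countdown[i] = isHigh[i] ? onTicks[i] : offTicks[i];
    }
  }
}
[/code]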
Why "you hope"? Because you have to decide when you'll take the time to check for changes and then re-calculate the array values.
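A common pattern for that is a "dirty" flag: whatever receives the new positions (serial handler, ISR, whatever) just sets the flag, and the output loop only rebuilds the arrays at a point where the timing can absorb it. A sketch of the idea, with made-up placeholder names:

[code]
// Sketch of the "signal changes, recalculate later" idea (Arduino-style C++).
// recalcArrays() and newTarget[] are placeholders for whatever your app needs.

volatile bool settingsChanged = false;   // set by an ISR or serial-handling code

uint16_t newTarget[3];                   // wherever the new on/off values arrive

void recalcArrays() {
  // Rebuild the on-time/off-time arrays (and re-sort the index array)
  // from newTarget[]. Left empty here on purpose.
}

void serviceOutputs() {
  // Step through the index/on-time/off-time arrays, as in the sketches above.
}

void loop() {
  serviceOutputs();                      // keep the waveforms going

  if (settingsChanged) {                 // only pay the recalculation cost when needed
    noInterrupts();                      // clear the flag atomically
    settingsChanged = false;
    interrupts();
    recalcArrays();                      // do it at a point the timing can tolerate
  }
}
[/code]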
------------------
Where's the code!!!!!!!!!
I'll watch this posting to see how things go.
bcf