In making sense of that I just realised I forgot about microseconds. Nanoseconds are 1000 times shorter than I was assuming.
So when a datasheet says "200ns instruction cycle", that's calculated for the highest oscillator you can use with it, in this case 20MHz.
If the serial data comes in at about 86.8us per byte, then I work that out to be one byte every 434 instruction cycles. Now it sounds more realistic.
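Just to check my own arithmetic, here it is spelled out as C defines. This is only a sketch: the 20MHz and 86.8us figures are the ones quoted above, and I'm assuming the usual PIC behaviour of one instruction cycle per four oscillator clocks (Fosc/4).

```c
/* Back-of-the-envelope timing, using the figures from this thread.
 * Assumes a midrange PIC where one instruction cycle = Fosc / 4,
 * so 20 MHz gives 5 million cycles per second, i.e. 200 ns each.
 */
#define FOSC_HZ          20000000UL                     /* 20 MHz oscillator      */
#define CYCLES_PER_SEC   (FOSC_HZ / 4UL)                /* 5,000,000 cycles/s     */
#define CYCLE_TIME_NS    (1000000000UL / CYCLES_PER_SEC)/* 200 ns per cycle       */
#define BYTE_TIME_NS     86800UL                        /* 86.8 us per byte       */
#define CYCLES_PER_BYTE  (BYTE_TIME_NS / CYCLE_TIME_NS) /* = 434 cycles per byte  */
```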
So if I have a main loop that doesn't pause or anything, would it make sense to only have an interrupt for the PWM and repeatedly check PIR1.5 in the main loop? That way the interrupt might still fire when serial data arrives, but it won't have to do anything and will drop straight back out of the handler. The main loop should then pick the byte up on its next iteration and do all the processing within those 434 cycles.
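Something like this rough sketch is what I'm picturing. The register and bit names (PIR1bits.RCIF, PIR1bits.TMR2IF, RCREG) are just my assumption of XC8-style names for the device, so they'd need checking against the real header and my actual PWM code:

```c
#include <xc.h>

/* Interrupt handler: only the PWM/timer work lives here.  If anything
 * else sets an interrupt flag we fall straight back out and let the
 * main loop deal with it. */
void __interrupt() isr(void)
{
    if (PIR1bits.TMR2IF)          /* assumed PWM timebase flag          */
    {
        PIR1bits.TMR2IF = 0;
        /* ...update the PWM here... */
    }
}

void main(void)
{
    /* ...init: oscillator, UART, PWM, enable the PWM timer interrupt... */

    while (1)
    {
        if (PIR1bits.RCIF)        /* PIR1 bit 5: a byte has arrived     */
        {
            unsigned char b = RCREG;   /* reading RCREG clears RCIF     */
            /* ...process the byte; this has to fit in the spare cycles */
        }
    }
}
```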
I guess I don't really have 434 cycles to play with, because the PWM interrupt will use up some of them. Using your example it should run almost twice between each byte of serial data. If we say it uses 100 cycles to be on the safe side, that still leaves at least 234 for the processing.
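So the rough budget per byte would be something like this (the 100 cycles per PWM interrupt is just my deliberately pessimistic guess, including latency):

```c
/* Worst-case cycle budget per received byte, using the figures above. */
#define CYCLES_PER_BYTE    434
#define PWM_ISR_CYCLES     100   /* pessimistic estimate per interrupt   */
#define PWM_ISRS_PER_BYTE  2     /* PWM interrupt fires ~twice per byte  */
#define CYCLES_FOR_MAIN    (CYCLES_PER_BYTE - PWM_ISRS_PER_BYTE * PWM_ISR_CYCLES)
/* = 434 - 200 = 234 cycles left for the main loop to handle the byte */
```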
Does that make sense, or have I got it all mixed up again? It's starting to look just like my original code except for the timer. When I first started this thread I was trying to do the timer's job manually, and I think that's where it all went wrong.