Interesting. I really thought that all those months of code-wrangling on the slow, slow, antiquated Pi had left me with an optimal chunk of code. Thus transitioning back onto iPad development - Cortex A9, 2 of them, 1 GHz - WHOO HOOO!!!! - would be easy. Insanely so.
Well, apparently not. I got the M3000 (see previous post) done: all-new code, big chunks of functionality based on PIANA's sample playback engine, an 'Oscillotron' based on the Pi waveform display, and pre-announced it to about 20 existing customers who had previously expressed dissatisfaction (!) at the lack of Core MIDI support. I anticipated joy and boundless pleasure and smiles at the news: AudioBus, Core MIDI, retina graphics, 100% software rewrite for improved betterness, etc. But AudioBus means iOS 6.x only, is that OK? And a surprising number of folks still have only an iPad 1, hence iOS 5.x, hence were distinctly unhappy.
So I set about making it work on iPad 1, which is way slower than iPad 2: single-core, Cortex A8, lower clock. And I found I had to throw in some more optimizations, as the sample playback was clicking occasionally.
So, now the M3000 works on iPad 1, just as well as it does on iPad 2. A few fewer voices of polyphony, a bit more audio latency, but you really can't tell. Which means that, come late September when I can get back onto the Pi and get this damn project finished and released, I can make it go even faster. I still want to get 8-note polyphony out of a stock Pi, no overclocking, with a USB keyboard to avoid optocoupled MIDI, so anybody can just use it without having to buy interface hardware. So, we shall find out in maybe mid-October how that has gone.
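For the curious, the "bit more latency, clicks if you push too hard" trade-off comes down to simple arithmetic: a bigger hardware buffer gives the render callback more time to finish (so fewer clicks from underruns), at the cost of output latency. A quick sketch, with illustrative buffer sizes and sample rate rather than the app's actual settings:

```python
# Back-of-envelope numbers for the buffer-size / latency trade-off.
# All values here are illustrative assumptions, not measurements
# from the M3000 or the Pi build.

SAMPLE_RATE = 44100  # Hz, a typical audio session rate

def latency_ms(buffer_frames, sample_rate=SAMPLE_RATE):
    """Output latency contributed by one hardware buffer."""
    return 1000.0 * buffer_frames / sample_rate

def render_budget_us(buffer_frames, sample_rate=SAMPLE_RATE):
    """Time the render callback has to fill one buffer before the
    hardware runs dry - an underrun, heard as an occasional click."""
    return 1_000_000.0 * buffer_frames / sample_rate

for frames in (256, 512, 1024):
    print(f"{frames:5d} frames: {latency_ms(frames):6.2f} ms latency, "
          f"{render_budget_us(frames):8.0f} us render budget")
```

So doubling the buffer doubles the time budget for synthesizing all active voices, which is why a slower CPU (iPad 1, stock Pi) can trade a little latency for click-free playback, or trade a voice or two of polyphony instead.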