haha, i'm doing the write up and tbh it is a bit painful re-thinking what you've already thought, but here goes...
yeah that isn't a great amount of fun.
what is going on - ADC or file input. With the ADC you hit record and it analyses as you play; with a file you select a portion of it and then analyse that. The analysis is basically amplitude-based, and goes a bit like [if sample greater than x then define start of segment], then [if sample less than x then define end of segment], except the end point counts a few more values before it is confirmed. There are controls for sensitivity (which alters the x in the amplitude test) and release time (which allows a new segment only every x milliseconds).
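Roughly, the gate logic looks like this (a minimal Python sketch, not the actual Max patch; `threshold`, `release_samples` and `hold_samples` are my stand-ins for the sensitivity and release controls, and the hold count for the "few more values" step is a guess):

```python
import numpy as np

def find_segments(samples, threshold, release_samples, hold_samples=64):
    """Amplitude-gate segmentation: a segment starts when |sample| rises
    above threshold, and ends once it has stayed below for hold_samples
    (the 'count a few more values' step). release_samples is the minimum
    gap before a new segment may begin."""
    segments = []
    start = None          # index where the open segment began, or None
    quiet = 0             # consecutive below-threshold samples while open
    last_end = -release_samples
    for i, s in enumerate(np.abs(samples)):
        if start is None:
            if s > threshold and i - last_end >= release_samples:
                start, quiet = i, 0
        elif s < threshold:
            quiet += 1
            if quiet >= hold_samples:     # quiet long enough: close the gate
                last_end = i - quiet + 1  # end point is where it first dipped
                segments.append((start, last_end))
                start = None
        else:
            quiet = 0                     # signal came back up; keep counting
    if start is not None:                 # close a segment left open at EOF
        segments.append((start, len(samples)))
    return segments
```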
i then separate the segments by length, and then by frequency, so you end up with four bins: long/high-frequency, long/low, short/high, short/low. In practice this is actually pretty redundant and I could really just divide them all by length. The whole probability-states-and-granular-playback side is a bit too painful to cover in full, but it's nothing groundbreaking: just different grain settings for different states, where longer states are calmer and shorter ones more erratic.
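For what it's worth, the four-way split is just two comparisons (Python sketch; the split points and the use of a spectral centroid as the "frequency" measure are my stand-ins, not values from the patch):

```python
def classify(length_ms, centroid_hz, length_split_ms=250.0, freq_split_hz=800.0):
    """Bin a segment into one of four groups: long/high, long/low,
    short/high, short/low. Dropping the frequency test collapses this
    to the plain by-length split mentioned above."""
    size = "long" if length_ms >= length_split_ms else "short"
    band = "high" if centroid_hz >= freq_split_hz else "low"
    return size + "/" + band
```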
I think that a LOT of Maxers go way down the garden path by trying to create something groundbreaking, instead of something useful, usable, and productive (of course, most folks who want something usable and productive have taken a shortcut and just bought Live). I'm not very interested in how yr sussing out yr granular synthesis (familiar with the techniques for years, and there are many usable granular engines out there for MSP), but what is the basis of your probability? most probability-based generators need some sort of preliminary weighting to get rolling...is this randomly assigned post-analysis?
Actually, at the moment the probability is always running in the background, so it's kind of whatever happens to be going on at the time. I have 6 states, 1 being the calmest, 6 the most erratic. To travel from 1 to 6 the current state must pass through 2, 3, 4 and 5 in order. There is weighting so that from 1 you're more likely to go to 2 than stay on 1, and from 5 you are less likely to go on to 6. Really, all of this has come from listening to it, thinking "hmm, this needs changing", and doing it; it's a great way of working during development because you hear the changes almost instantly. One really easy way I found of defining lengths of certain parameters was to divide the length of a state by a set amount. So, for example, state 1 is 50 seconds; this is divided by a given amount, which gives the rate at which a new segment is recalled. There are a few other parameters as well which stop the playheads from getting stuck on one point for too long.
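As a sketch of the state logic (Python; the transition weights here are invented for illustration, the real ones came out of the listen-tweak-listen loop described above):

```python
import random

# Hypothetical weights for the 6-state chain: from each state you can only
# stay put or step one state up/down, so reaching 6 from 1 means walking
# through 2, 3, 4 and 5. Biases like "1 leans towards 2" and "5 is
# reluctant to reach 6" are encoded in the numbers.
TRANSITIONS = {
    1: {1: 0.3, 2: 0.7},
    2: {1: 0.2, 2: 0.4, 3: 0.4},
    3: {2: 0.3, 3: 0.4, 4: 0.3},
    4: {3: 0.3, 4: 0.4, 5: 0.3},
    5: {4: 0.4, 5: 0.45, 6: 0.15},
    6: {5: 0.7, 6: 0.3},
}

def next_state(current):
    states = list(TRANSITIONS[current])
    weights = list(TRANSITIONS[current].values())
    return random.choices(states, weights)[0]

def recall_interval(state_length_s, divisor=20):
    """Divide the state length by a set amount to get how often a new
    segment is recalled: e.g. a 50 s state with divisor 20 recalls a
    new segment every 2.5 s. The divisor value is a guess."""
    return state_length_s / divisor
```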
my goal - to play a few notes in, sit back and enjoy. i justify this laziness by the fact that there's a lot of stuff going on under the hood.
aesthetic angle - don't really think about this too much, i like drones and ambient music, i try to make it easier to come up with ideas or material, no great master plan underpinning the whole thing.
this is pretty much the way...the crux of all my software is that I can give it what I want and let it do its own thing within the parameters of "my music." is the video demo how you use this tool, or really just a demo?
The video is a demo, yes. I'm pretty happy with the output from when I played my guitar in; it can be found on SoundCloud via the link from the YouTube video. There is slightly more gone into it than usual, considering I've played the notes in rather than taken any old sound file. The output, I've found, is mainly dependent on two things: 1. the source material, 2. the number of segments found. More segments = more gradual changes in the playhead's position.
just my opinion here, but I think what would make these tools really valuable to users who are cool w/kicking in ~$15/tool is if there was interactivity b/w the tools. totally off the top of my head there. but yeah. that.
Absolutely. MIDI control of playheads is the next task, and I think this will really allow the user to 'play' the software while the probability runs at the same time; all I'll be doing is unhooking the segment locations and letting the user decide where to play from. In a sense this goes against my original generative idea, but it will be an option; the default will be sit back and relax. Also, it's worth noting I have built a previous patch that is much more geared towards live performance and user interaction, based on feedback loops. You can make some really great loops/textures/drones with it, and it allows two inputs at once (for example, my tutor has used it in conjunction with a clarinet while he was doing some granular stuff on an audio file). But that's really another topic, and one to wait on till I have my site up and running.
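The unhooking itself is conceptually simple (hypothetical Python sketch; `cc_value` assumes a 7-bit MIDI controller, and the segment list is the output of the analysis stage):

```python
# Instead of the probability engine recalling segments, a MIDI controller
# value (0-127) picks which analysed segment the playhead jumps to, while
# grain settings can still follow the current state.
def playhead_from_midi(cc_value, segments):
    """Map a 7-bit MIDI CC value onto the list of (start, end) segments
    and return the chosen segment's start point."""
    if not segments:
        return 0
    index = min(cc_value * len(segments) // 128, len(segments) - 1)
    start, _end = segments[index]
    return start

# e.g. with 10 segments, CC value 64 lands on segment index 5
```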