
Here's the Story

By tatecarson on June 6, 2014 10:34 pm

Worked with some more randomness this week and tried to make it seem a little less random.

Audio works licensed by author under:
CC Attribution Noncommercial (BY-NC)

Nice. Can you explain the process a bit? It sounds like randomly generated cells that repeat/are re-generated and, further, the order of playback of cells is randomised?

Have you ever explored Markov chains for "controlled" randomness? If you were to generate a random string of pitches, say, you'd end up with a melody. Then you could randomly (or pseudo-randomly) create permutations of the material through augmentation and diminution of the pitch material. You could treat it as a tone row and work through a matrix of permutations. You could then create a rather large multi-dimensional array of musical material/parameters, and align different things at random (kind of like a randomly mutating Rubik's Cube).

Or something.

Importantly though, I really enjoyed this work, and particularly the vocal part.

Tate, you loveable bastard. The track sent me into the murky lake that my kid brother always talks about. The end just shot me in the head. The vocal edit really deserves its way in there. Would love to know how you arranged everything in this track.

vinpous wrote:

Nice. Can you explain the process a bit? It sounds like randomly generated cells that repeat/are re-generated and, further, the order of playback of cells is randomised?

Have you ever explored Markov chains for "controlled" randomness? If you were to generate a random string of pitches, say, you'd end up with a melody. Then you could randomly (or pseudo-randomly) create permutations of the material through augmentation and diminution of the pitch material. You could treat it as a tone row and work through a matrix of permutations. You could then create a rather large multi-dimensional array of musical material/parameters, and align different things at random (kind of like a randomly mutating Rubik's Cube).

Or something.

Importantly though, I really enjoyed this work, and particularly the vocal part.


The first part that was generated was the piano; it's from a Max patch, and I actually can't remember which one. To me it doesn't really matter, I like how it sounds and any number of them would do; it's not any specific type of randomness. After that I simply took phrases from the piano, chopped them, and then lengthened them after putting new instruments on them. So the next instrument down would play the first half of the phrase from the piano, but twice as long. This happened over 8 bars or so through the whole piece. I did the process 4 times, I think, so the "slowest" melody is still generated from the top piano one. I got this idea from Miles Okazaki, who in turn got it from Tom Johnson. Check them both out, they're amazing.
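To make the chop-and-stretch concrete, here's a rough sketch in Python. The phrase data and function names are made up for illustration; this isn't the actual patch, just the idea of taking the leading half of a phrase and doubling its durations:

```python
def augment(phrase, factor=2, keep=0.5):
    """Take the leading portion of a phrase and stretch its durations.

    phrase: list of (midi_pitch, duration_in_beats) tuples.
    keep:   fraction of the phrase to carry into the next layer.
    factor: how much slower the new layer plays.
    """
    cut = max(1, int(len(phrase) * keep))
    return [(pitch, dur * factor) for pitch, dur in phrase[:cut]]

# A stand-in for a generated piano phrase.
piano = [(60, 0.5), (64, 0.5), (67, 0.5), (72, 0.5)]

layer1 = augment(piano)    # first half of the phrase, twice as slow
layer2 = augment(layer1)   # apply the same process for the next layer down

print(layer1)  # [(60, 1.0), (64, 1.0)]
print(layer2)  # [(60, 2.0)]
```

Each pass keeps deriving from the layer above it, so even the slowest line still descends from the original piano material.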

I've looked a little at Markov chains; I don't totally understand them, but I think I've used them here and there. I really try to get away from the theory involved, or the generational process, as soon as possible sometimes. I had that first piano part saved from a few weeks ago and the Max patch was gone, so it's like someone else had done it and sent it to me. I work like this a lot. I think it helps lessen the blow of starting a piece and limits judgement; it also brings back a bit of the mystery into music that made it exciting in the first place.
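For anyone curious, a bare-bones first-order Markov chain over pitches might look like this. The transition table here is invented for illustration, not taken from the track; each pitch just lists the pitches it's allowed to move to:

```python
import random

# Hypothetical transition table: from each MIDI pitch, the
# pitches the melody may step to next (weights all equal here).
transitions = {
    60: [62, 64, 67],   # from C4 go to D4, E4, or G4
    62: [60, 64],
    64: [62, 65, 67],
    65: [64],
    67: [60, 64, 65],
}

def markov_melody(start, length, rng=random):
    """Walk the transition table to build a pitch sequence."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(markov_melody(60, 8))
```

The "controlled" part is that the table rules out most moves, so the output wanders but stays inside a chosen vocabulary.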

Mr Mort wrote:

Tate, you loveable bastard. The track sent me into the murky lake that my kid brother always talks about. The end just shot me in the head. The vocal edit really deserves its way in there. Would love to know how you arranged everything in this track.

The vocal samples are from a text-to-speech website I found online. The text was something I liked, but it's not preserved, so it doesn't matter; I don't remember what it was anyway. I sliced to MIDI first. The changing tempos you hear when the vocals first come in are just the natural speeding and slowing that a sample does when it's played at a higher pitch. It's following the MIDI information from another track. The second part with the vocals is the same as the first but with a Max device called Time Stretch. I have no idea what it does, but it sounds amazing and I use it all the time. If some of this didn't make sense to either of you, I'll try to clarify. I wrote all of this without looking at my session. It's also hard to remember what I did because I bounce to audio and resample things often, so I lose the original process. I also throw everything from Ableton into Logic so I don't have to look at any MIDI plugins or worry about anything but how the music actually sounds. Thanks for the kind words, guys!
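That speeding and slowing is just the standard repitching relationship: playing a sample n semitones higher multiplies its playback rate by 2^(n/12), so it also gets shorter by the same factor. A quick sketch (function names are just for illustration):

```python
def playback_rate(semitones):
    """Rate multiplier when a sample is repitched with no time-stretch."""
    return 2 ** (semitones / 12)

def new_duration(original_seconds, semitones):
    """How long the repitched sample lasts."""
    return original_seconds / playback_rate(semitones)

print(playback_rate(12))        # one octave up -> 2.0x speed
print(new_duration(4.0, 12))    # a 4 s sample now lasts 2.0 s
```

A time-stretch device breaks that coupling, which is why the second vocal section can change pitch without the durations following along.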

I actually just remembered the whole thing started from a Max patch running on the monome, which I just bought, called Arpeggiator: http://monome.org/docs/app:arpeggiator . So it really isn't random at all. Scratch all of that. I remember now: you can play the arp directions, but it's fixed to a triplet and a certain tonality. I think I modulated the tempo in Live through the whole piece, which you can probably hear if you're listening for it. I'm not really interested in super obvious tempo shifting, but I think it could also sound like more or fewer notes in a given period of time.
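Just to illustrate what "fixed to a triplet and a certain tonality" means, here's a toy arpeggiator. This is not the monome patch; the chord, names, and values are all invented. You can choose the direction, but the note grid and pitch set never change:

```python
C_MINOR_TRIAD = [60, 63, 67]  # the fixed tonality (MIDI pitches)

def arpeggiate(chord, direction="up", cycles=2, bpm=120):
    """Yield (pitch, onset_seconds) events locked to a triplet grid."""
    triplet = 60.0 / bpm / 3  # one eighth-note-triplet step
    order = chord if direction == "up" else list(reversed(chord))
    events = []
    t = 0.0
    for _ in range(cycles):
        for pitch in order:
            events.append((pitch, round(t, 4)))
            t += triplet
    return events

print(arpeggiate(C_MINOR_TRIAD, direction="down", cycles=1))
```

Modulating `bpm` over time would change how many notes land in a given stretch without ever leaving the triplet feel, which matches the tempo-modulation effect described above.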

Also, the idea for using computer generated voices came from the amazing Oneohtrix Point Never.

And an interesting story it is -- if a bit disturbing.

Thanks so much for the breakdown and the revision of said breakdown! Really insightful. I completely understand about wanting to stay away from the theory too.
