Something a bit different--I'd wanted to do something more abstract than I've been doing lately, with fewer referents. Late in the week I whipped up a few drum patterns with sample-based drum racks that had an unsettled feeling I liked, but they sounded too "drummy" until I processed them. First I tried bit crushing, which sounded OK, but then I thought they might sound better vocoded, and they did--in the end, I used three drum racks and one Impulse channel, all through vocoders, all of which got their carrier audio from separate channels of Analog. Some of those channels had tonal waveforms and low-pass filters; some used noise; some used formant filters. At first I put LFOs on all the filter parameters, but that really started killing the processor, so I backed off: on a couple of Analog channels I kept the LFO on just one filter's cutoff and resonance and left the other filter static.
I played around with various pads in Operator and Analog but wasn't all that happy with the results. Sparser lines in Operator, with much shorter notes, worked better. Then, for bass-drum impact, I added another drum rack with a lot of Analog voices for wholly synthesized drums. (These are better than some of the ones I'd done before, but they aren't quite right yet.)
I didn't want to give up on the pad idea, so as an alternative to the not-so-satisfying Operator pads, I tracked eight clips of long notes and harmonics (held for different lengths) with the Moog Guitar and arranged them into three channels, overlapping the sounds with rests.
At this point I wanted to hear a low bass voice, so I added another channel of Operator, tweaking the Bass Housey preset to soften the attack across the board. After writing some bass lines, I arranged them in with everything else.
Sends: two convolution reverbs with light modulation, a filter delay, and my M4L-based modulation delay, which I fed into the two convolution reverbs. The drum voices got automation sending them to the filter delay at intervals. The guitar lines went mostly to the modulation delay, with a touch going to the convolution reverbs. The vocoded drums also went to the convolution reverbs, in differing amounts, as did the sparse Operator voice.
Everything got auto-pan, and all the drum voices and the lead Operator voice got M4L Humanizer; the bass Operator voice and the Analog carrier signals for the vocoders did not. Everything went through full-chain master.
The title comes from an association with the idea of referents, since I wanted things not to sound like real things. That led me to Wittgenstein's so-called private language argument, since I was thinking about obscured or unknowable referents.
This submission is licensed by the author under Copyright: All rights reserved.