The creative process is extremely personal. With constant advancements in technology, artists can now create in more ways than ever before. For French artist Ouai Stephane, production involves a mixture of virtual synths and homemade controllers, yielding music that is not for the faint of heart. It's safe to say that those who are willing to build their own hardware pay equal attention to the details of their music. Below, Ouai Stephane walks us through the process behind his latest jungle-inspired work, Ché Pas, breaks down the homemade devices he created, and explains how he implements them in his Ableton workflow.
Words by Ouai Stephane
Hi there! First of all, thank you very much for the proposal. Really excited to talk about my creation process!
So this EP was made using Ableton Live: composition, sound design, mixing, and mastering were all done within this one piece of software. I really like working with Ableton for its user-friendliness, but also for its endless possibilities through Max for Live (M4L). This sort of « add-on » lets you deepen your production techniques through its modular interface. The ability to create your own devices brings Ableton to a whole new level of digital creativity.
Beyond that, I own a pair of Yamaha HS80 monitors, a Focusrite sound card, a MacBook, a few microphones (though mainly the one built into the Mac), a mouse (I upgraded from the trackpad two months ago, a real evolution in my workflow), and a Caribbean poster. I'm a true defender of the « less is more » philosophy and find my creativity within the limits and boundaries of my surroundings.
Anyhow, the track « Ché Pas » (which means « I dunno ») is a quite literal snapshot of my state of mind while recording my voice. I first used the Mac's built-in microphone and recorded some « I dunno what I'm doing with my voice and I dunno why I'm doing this ». At that point, the idea was just to experiment with effects on my voice, whatever the content of my speech was. So I stacked many effects: iZotope's Nectar, Antares's Auto-Tune (but set to play only one note), various delays, phasers... It sounded OK, but I thought it would be cool to put a beat behind it and make it a proper track. So I time-stretched the voices to fit the grid and designed some breakbeat/jungle drums. The track's foundation was created!
Since it was just a simple draft, I left it alone for a few months. Listening back to it later, I realized I should keep the lyrics as they were and finish the track; I really wanted to be honest and spontaneous in the creation process. That's when I decided to go a bit further and randomize the effects applied to the voice. For this, I used a controller I built called the « Père Fouras 3000 », which looks like this:
My latest creation! (Look how happy I am.) Explanations: the original idea for this controller was rather « down to earth », since I just wanted to build a replacement for my Launch Control (a simple MIDI controller I had bought). Since I build all my controllers/instruments for my live performances, it felt like a pity to keep one that was manufactured and store-bought. So I started designing a box with a few knobs, sliders, and buttons. Being rather unexcited about this project, I researched weird buttons online and found some nuclear buttons, as well as an old telephone, joysticks, electrical key switches, fidget spinners... all of which I included on the controller. The result is this weird interface that lets me control numerous parameters within Ableton (effects, structure, instruments...).
In any case, the idea here was to activate and deactivate the voice FX using those joysticks (pushing one direction activates FX1, the other direction FX2, and so on). I recorded the automation of this rather random performance in Ableton. That's how I made the main voices.
Another technique I experimented with involves Ableton's built-in Wavetable instrument, which looks like this:
I noticed it was possible to drag and drop an audio source into it, so I tried importing the voices I had recorded. The slider (marked in red here) lets you choose the playback position within the imported wavetable. By automating this parameter linearly, I was able to recreate the original audio stream of the input source (here, my voice), but with Wavetable's timbre and characteristics. You can hear the stream of the voice being recreated in « Ché Pas » at 2:14.
It is then destroyed on the drop at 2:22: the electronic cowbell-like sound is actually my voice being synthesized in Wavetable (even though I must admit it's hard to imagine).
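To make the wavetable trick concrete, here is a minimal sketch of the idea in Python. The frame size, sample rate, and the chirp signal standing in for the recorded vocal are my own assumptions, not details of Ableton's implementation: the recording is sliced into fixed-size frames, each frame is looped as a single-cycle oscillator, and sweeping the position linearly re-traces the source audio with the oscillator's timbre.

```python
import numpy as np

FRAME = 1024  # samples per wavetable frame (an assumed frame size)
SR = 44100

def make_wavetable(audio):
    """Slice a recording into equal-length frames, one per wavetable position."""
    n_frames = len(audio) // FRAME
    return audio[:n_frames * FRAME].reshape(n_frames, FRAME)

def linear_sweep(table, n_samples, freq):
    """Scan the wavetable position linearly 0 -> 1 while looping each frame
    as a single-cycle oscillator at `freq`, re-tracing the source audio."""
    t = np.arange(n_samples)
    frame_idx = (t * len(table)) // n_samples             # linear position automation
    phase = (t * freq * FRAME / SR % FRAME).astype(int)   # oscillator phase per sample
    return table[frame_idx, phase]

# Toy "voice": a chirp whose timbre changes over time, standing in for the vocal.
t = np.arange(SR * 2) / SR
voice = np.sin(2 * np.pi * (110 + 60 * t) * t)

table = make_wavetable(voice)
resynth = linear_sweep(table, len(table) * FRAME, freq=110)
print(table.shape)  # number of wavetable positions x samples per frame
```

Holding the position still instead of sweeping it is what produces the « frozen », cowbell-like single timbre heard on the drop.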
Apart from the vocal work, many of the textures I added on top of the drums are actually just a very short delay with a lot of feedback, a technique often used to create a metallic texture. My aim was to « tune » the added texture's pitch by choosing the right delay time. The maths are quite easy: my voices were pitched in G. I looked up the frequency of the first G octave (G1) on a website (49 Hz), then computed 1/G1 = 1/49 Hz ≈ 0.0204 seconds (20.4 ms). Using 20.4 ms as a delay time with high feedback adds a resonant texture pitched in G to your original source. For musical purposes and a more « unstable » pitch and texture, I actually used 20.6 ms. I applied this effect to many elements (especially unpitched content like percussion/drums) to create a more unique-sounding beat and groove.
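The delay-time arithmetic above generalizes to any note. A short delay with high feedback behaves like a comb filter that resonates at 1/delay seconds, so picking delay = 1/f0 tunes it to the note f0. A minimal calculator (the equal-temperament formula and MIDI note numbering are standard conventions, not anything specific to this track):

```python
A4 = 440.0  # standard tuning reference

def note_freq(midi_note):
    """Equal-temperament frequency for a MIDI note number (A4 = 69)."""
    return A4 * 2 ** ((midi_note - 69) / 12)

G1 = 31  # MIDI note number for G1
f = note_freq(G1)
delay_ms = 1000.0 / f  # comb-filter resonance period in milliseconds
print(f"G1 = {f:.1f} Hz -> delay time = {delay_ms:.1f} ms")
# -> G1 = 49.0 Hz -> delay time = 20.4 ms
```

Nudging the result slightly (here 20.4 ms to 20.6 ms) detunes the resonance for the « unstable » character described above.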
I don't really know what else to say about this track, apart from tiny details that no one really cares about, so I'll move on to « I Have Feelings También ».
As with « Ché Pas », I started this track by experimenting. I found an M4L MIDI device called « Chord Generator » by Nordmann, which is quite cool. Its concept is straightforward: it generates chords in MIDI. But its functionality is very interesting: chord inversions, as well as the disposition of notes within the chord, are modifiable. I tweaked these parameters and automated them with LFOs (low-frequency oscillators, slow pulses used to modulate parameters). The result was a constantly moving set of complex chords, arranged differently each time they were played, meaning I heard different chord structures throughout the creation process (which is both exciting and disturbing, I must admit).
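To illustrate the inversion-plus-LFO idea, here is a small sketch. The chord, LFO shape, and parameter ranges are my own examples, not the internals of Nordmann's device: an LFO sampled at each step picks how many chord inversions to apply, so the same chord comes back voiced differently every time.

```python
import math

def invert(chord, inversion):
    """Apply the n-th inversion: move the lowest `inversion` notes up an octave."""
    chord = sorted(chord)
    for _ in range(inversion):
        chord.append(chord.pop(0) + 12)
    return chord

def lfo(step, period, depth):
    """Integer-valued sine LFO (0..depth) used to automate the inversion amount."""
    return round(depth * (math.sin(2 * math.pi * step / period) + 1) / 2)

Gmin7 = [55, 58, 62, 65]  # G3, Bb3, D4, F4 in MIDI - an arbitrary example chord
for step in range(8):
    print(step, invert(Gmin7, lfo(step, period=8, depth=3)))
```

Each printed line is the same G minor 7 chord in a different voicing; sent to a synth, this gives the constantly shifting chord structures described above.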
I then played these chords through different VSTs (external plugins), such as Arturia's DX7 V and Buchla Easel V.
Another technique worth mentioning is how I created the pitched percussion you can hear in the intro. I applied an EQ to a warped percussion loop, with resonances at 110 Hz, 220 Hz, 440 Hz, 880 Hz, 1760 Hz, etc., as you can see in this screenshot:
The idea was to « pitch » the loop by recreating the resonances of the note A (A2: 110 Hz, A3: 220 Hz, A4: 440 Hz, ...). I then exported this audio into a sampler and activated the Warp function, which locked the loop's tempo characteristics. I could then route the « Chord Generator » into the sampler, recreating the chord-progression process I explained earlier. The result: an unpitched loop of wooden percussion was now pitched (thanks to the EQ) and playing back chords (thanks to the sampler). This technique helped me add richness to the cloud of synths playing the chords, by connecting the pitched elements (synths) to the unpitched elements (drums).
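The « pitching EQ » boils down to boosting narrow bands at the octave series of the target note. A rough sketch, where the two-pole resonator is my stand-in for one narrow EQ boost and the noise burst stands in for the unpitched percussion loop:

```python
import numpy as np

SR = 44100

def octave_series(f0, n):
    """First n octave resonances of fundamental f0: 110, 220, 440, ... for A2."""
    return [f0 * 2 ** k for k in range(n)]

def resonator(x, freq, r=0.995):
    """Two-pole resonator at `freq` (a stand-in for one narrow EQ peak).
    Pole radius r controls how narrow/ringing the resonance is."""
    w = 2 * np.pi * freq / SR
    a1, a2 = -2 * r * np.cos(w), r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] - a1 * y[n - 1] - a2 * y[n - 2]  # y[-1], y[-2] are still 0 early on
    return y

rng = np.random.default_rng(0)
noise = rng.standard_normal(SR // 4)  # "unpitched percussion" stand-in
pitched = sum(resonator(noise, f) for f in octave_series(110, 5))
print(octave_series(110, 5))  # -> [110, 220, 440, 880, 1760]
```

Running the boosted audio into a sampler then transposes this whole resonance ladder with each MIDI note, which is why the loop can play back chords.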
I don't really know what to add to this. I thought these techniques were interesting to share. I could explain more if you want, just let me know!
Grab Ouai Stephane's Ché Pas here.