Ableton Live + Reaktor + Razor
Since I figured out how to use Traktor through OSC, I thought: why not try some other music software that can actually make music or build customized instruments? So I decided to learn the popular, professional music software Ableton Live and the powerful instrument-building software Reaktor. After a lot of experimenting with the complicated, advanced instruments/ensembles that different musicians have built for Reaktor, I had to face the fact that I'm a beginner at this and should start with the basics. Among all the crazy things I've tried, like the Twisted Tools and Native Instruments ensembles, I've found RAZOR my favorite virtual synth so far. It's so much fun to modulate a sound on the computer. It's a small, inconspicuous synthesizer, but you can easily create your own instrument from the gorgeous, experimental, brilliant sounds in its library.
In additive synthesis, the sound is constructed from partials – single sine waves firing in parallel, changing amplitude and even frequency over time. The resulting sound remains clear and precise at all times, even when heavily modulated. The level of control in additive synthesis provides a host of new possibilities for shaping sound. RAZOR’s interface draws on familiar concepts from more ‘traditional’ synthesizers. You’ll recognize the oscillators, filter section, and many other parameters, but the options are new – the formant or the waterbed settings, for example. This combination of new and familiar makes creating sounds surprisingly simple.
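To make the idea of partials concrete, here is a minimal sketch of additive synthesis in Python with NumPy. It is not how RAZOR works internally, just an illustration of the principle: sum many sine waves at multiples of a fundamental, each with its own amplitude envelope (the 1/n spectrum and decay rates here are arbitrary choices for demonstration).

```python
import numpy as np

SR = 44100          # sample rate in Hz
DUR = 1.0           # duration in seconds
F0 = 220.0          # fundamental frequency (A3)

t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

# Build the sound from partials: each one is a sine wave at an integer
# multiple of the fundamental, with its own time-varying amplitude.
signal = np.zeros_like(t)
for n in range(1, 17):                      # 16 partials
    amp = 1.0 / n                           # simple 1/n spectrum (sawtooth-like)
    decay = np.exp(-3.0 * n * t / DUR)      # higher partials die out faster
    signal += amp * decay * np.sin(2 * np.pi * F0 * n * t)

signal /= np.max(np.abs(signal))            # normalize to [-1, 1]
```

Because every partial is an independent sine wave, you can modulate each one's amplitude or frequency over time without the sound turning into mush, which is exactly the "clear and precise even when heavily modulated" quality described above.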
As for Ableton Live, I spent a weekend actually making a track, which I think is the most effective way to learn software (make something out of it that works and means something). I successfully routed RAZOR into Ableton Live, where its parameters can be remapped or adjusted for live performance, like an external DJ controller. My first funky music track is here: https://soundcloud.com/qiu-yi-wu (unexpectedly, a senior student passing by saw me making music and then offered me a job lol). Along the way I learned how to build separate tracks (beats, bass, piano, melody, different instruments / sound FX) and adjust effect chains and transitions (which work like the timeline in AE).
The most exciting part is that I figured out how to attach the interface to OSC MIDI output/input. By assigning each button or function on the screen individually, you can control whatever you want from any other device that can deliver MIDI messages; in my case, the phone / the visualization interface.
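As a sketch of what the phone is actually sending, here is a minimal OSC message built by hand with only the Python standard library (tools like TouchOSC or python-osc do this for you). The address path `/razor/filter/cutoff`, the IP, and the port are hypothetical placeholders for whatever you mapped in your own layout; an OSC-to-MIDI bridge on the receiving end would translate the value into a MIDI control change.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad bytes with NULs to a multiple of 4, as the OSC spec requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Build a minimal OSC message carrying one float32 argument."""
    return (osc_pad(address.encode()) +   # address pattern
            osc_pad(b",f") +              # type tag: one float
            struct.pack(">f", value))     # big-endian float32 payload

# Hypothetical mapping: one on-screen knob assigned to one OSC address.
packet = osc_message("/razor/filter/cutoff", 0.75)

# Send over UDP to the machine running the OSC-to-MIDI bridge
# (the IP and port here are placeholders for your own setup).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 8000))
```

The point is that each control on the screen is just an address plus a value, which is why any device that can fire these packets, a phone or a visualization interface, can drive the music software.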
The process of studying Ableton and making an actual piece of music gave me a much clearer way to connect sound to visuals, and then to movement, in a real live setting, where the music can be either performed or generated.