Wednesday 2 March 2011

Envelopes and modulation synthesis

SuperCollider Blog 3


Envelopes


“Envelopes describe a single event (as opposed to periodically repeating events) that changes over time. Typically they are used to control amplitude (a VCA), but they can be applied to any aspect of sound.” Computer Music with examples, David Michael Cottle.


The amplitude of a sound decreases over time, but at the transient of a sound there is a certain degree of attack. Different instruments have different attacks, and small variations in attack time can make a big difference. All preset sounds on synthesisers use envelopes to create a change in volume over time.


There are fixed-duration envelopes and sustain envelopes. A sustain envelope can represent the length of time a key is held down on a piano, and can be represented using a gate in SuperCollider. Fixed-duration envelopes can represent percussive instruments such as cymbals: once they have been struck the envelope simply runs its course, with no key held down to control how long the sound rings out for.
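
As a rough sketch of the difference in SuperCollider (the frequencies and times here are just made-up illustrative values, and Env.adsr, Env.perc, EnvGen and doneAction are all introduced further down the post):

// evaluate these lines one at a time
// Sustain envelope: the synth holds at the sustain level until the gate is set to 0
x = {arg gate=1; SinOsc.ar(440, 0, EnvGen.kr(Env.adsr(0.01, 0.3, 0.5, 1.0), gate, doneAction: 2) * 0.2)}.play;
x.set(\gate, 0); // "letting go of the key" moves the envelope into its release stage

// Fixed-duration envelope: once triggered it runs to the end by itself
{SinOsc.ar(440, 0, EnvGen.kr(Env.perc(0.01, 1.0), doneAction: 2) * 0.2)}.play;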


ADSR stands for attack, decay, sustain and release. These terms come up constantly when working with envelopes, and they also appear as arguments in SuperCollider.



Env([1,0,1],[1.0,0.5]).plot //This makes an Envelope with three control points, at y positions given by the first array, and separated in x by the values in the second (see the Env help file)





This is the plot of the envelope created by the code above, taken from the course help files.


The arguments for Env are:

Env(levels, times). There is one fewer value in times than in levels.


There are different preset envelope types that can be used, such as Env.linen, Env.adsr and Env.perc. They have different arguments that are suited to certain sounds.
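
For example (the argument values below are arbitrary, just to show the shapes):

Env.linen(0.1, 1.0, 0.5).plot; // attackTime, sustainTime, releaseTime: a fixed-length trapezoid
Env.perc(0.01, 0.5).plot; // attackTime, releaseTime: a percussive shape
Env.adsr(0.01, 0.3, 0.5, 1.0).plot; // attackTime, decayTime, sustainLevel, releaseTime: sustains until released by a gate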


They are called envelopes because some of the classic shapes, when viewed with .plot, look like the outline of an envelope.


To use envelopes for synthesis we need to use EnvGen.


“SC uses EnvGen and Env to create envelopes. Env describes the shape of the envelope and EnvGen generates that shape when supplied a trigger or gate.” Computer Music with examples, David Michael Cottle.


If we wrap an Env in EnvGen.ar we can make it run at audio rate. The envelope below starts at 1 and falls to 0 over 1 second. If we run it on its own we won’t hear anything, but we can see it on the scope: it changes far too slowly to be heard, since our ears only pick up frequencies of roughly 16 Hz to 20 kHz. If we multiply it by a SinOsc we are able to hear it.


We therefore plug Env into EnvGen:


{EnvGen.ar(Env([1,0],[1.0]))}.scope


and then multiply with a SinOsc:


{EnvGen.ar(Env([1,0],[1.0])) * SinOsc.ar}.scope


The doneAction argument is very useful when working with envelopes: doneAction: 2 frees the synth once the envelope has finished, so a voice we have finished with does not keep running and using up CPU.


{Saw.ar(EnvGen.kr(Env([1000, 100], [0.5]), doneAction: 2), 0.1)}.scope // the Saw sweeps from 1000 Hz down to 100 Hz over half a second, then the synth is freed



Modulation synthesis


“Modulation in signal processing refers to the control of an aspect of one signal by another.” Introduction to Computer Music, Nick Collins.


The signal that controls is the modulator and the signal that is controlled is the carrier. Modulation is nonlinear, so the output spectrum can contain frequencies that were not present in either input. These new frequencies, which did not appear in the inputs, are called sidebands.

It is a good idea to explore modulation with two sinusoids rather than with complex sounds, though, via Fourier analysis, the two-sinusoid case can be used to predict the effect of modulation on more complex sounds.


Ring modulation


Ring modulation is the result of simply multiplying the two signals:


carrier * modulator


though it could equally well be written modulator * carrier; it makes no difference.


Both of the signals are bipolar: their amplitudes can take on both positive and negative values, so they oscillate above and below 0.
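
A minimal ring modulation example (440 Hz and 110 Hz are just arbitrary choices):

// Multiplying two bipolar sinusoids: with a 440 Hz carrier and a 110 Hz modulator
// the output contains the sum and difference frequencies, 550 Hz and 330 Hz,
// rather than the original two frequencies.
{SinOsc.ar(440) * SinOsc.ar(110) * 0.2}.scope;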


For more complicated waves we get many more components and a much fuller spectrum, because every partial of one signal is effectively multiplied with every partial of the other.


“If the carrier were a sum of three sinusoids, and the modulator a sum of five, ring modulation would create 15 multiplications and thus 30 output frequency components. This is a cheap way of getting more complicated spectrum out of simpler parts.” Introduction to Computer Music, Nick Collins.
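
A sketch of that idea (the partial frequencies below are arbitrary):

(
// Carrier: a sum of three sinusoids; modulator: a sum of five.
// Multiplying them ring modulates every pair of partials, so each of the
// 15 pairings contributes a sum and a difference frequency (30 components).
{
	var carrier = Mix(SinOsc.ar([400, 650, 920]));
	var modulator = Mix(SinOsc.ar([55, 130, 210, 340, 470]));
	carrier * modulator * 0.05
}.scope;
)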


Amplitude modulation


Amplitude envelopes and tremolo are both examples of amplitude modulation.


Amplitude modulation is like ring modulation; the difference is that the modulator is unipolar, meaning it is always positive.


The carrier is usually bipolar, and is therefore different from the unipolar modulator.
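
A minimal amplitude modulation sketch (again with arbitrary frequencies); the modulator is scaled and offset so it stays between 0 and 1:

// Unipolar modulator: SinOsc.ar(110, 0, 0.5, 0.5) swings between 0 and 1,
// so the carrier frequency stays in the output alongside the sidebands.
{SinOsc.ar(440) * SinOsc.ar(110, 0, 0.5, 0.5) * 0.2}.scope;

// The same idea written with .range:
{SinOsc.ar(440) * SinOsc.ar(110).range(0, 1) * 0.2}.scope;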


Stockhausen used ring modulation in many of his pieces, including Telemusik, Gesang der Jünglinge and Mixtur.

Ring modulation was also famously used by Brian Hodgson to create the voices of the Daleks in the TV series Doctor Who.


I found it quite difficult to understand what the sidebands were; I understood it a lot more after reading this:

“In amplitude modulation there are two sidebands; the sum and difference of the carrier frequency (the audio frequency that is being modulated) and the modulator frequency (the frequency that is controlling the audio frequency). A carrier frequency of 500 and a modulating frequency of 112 could result in two sidebands: 612 and 388. If there are overtones in one of the waves (e.g. a saw wave being controlled by a sine wave), then there will be sidebands for each overtone.” Computer Music with examples, David Michael Cottle.
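
Those numbers can be checked by ear and eye; a rough sketch along those lines:

// Amplitude modulation of a 500 Hz carrier by a unipolar 112 Hz modulator:
// the spectrum should show the carrier at 500 Hz plus sidebands at 612 Hz and 388 Hz.
{SinOsc.ar(500) * SinOsc.ar(112, 0, 0.5, 0.5) * 0.2}.scope;
FreqScope.new; // opens a spectrum view of the output, to see the three peaks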


FM Synthesis


FM synthesis is similar to ring and amplitude modulation, but it can produce many more sidebands. The number of significant sidebands depends on the modulation index, which is the ratio between the frequency deviation and the modulation frequency. This gives an index that is independent of the modulation frequency: the higher the value of I (the modulation index), the richer the timbre.


In our class at uni we were given code for a GUI with three sliders to change the carrier frequency, modulation frequency and modulation depth:


(
var w, carrfreqslider, modfreqslider, moddepthslider, synth;

w = Window("frequency modulation", Rect(100, 400, 400, 300));
w.view.decorator = FlowLayout(w.view.bounds);

// carrier sine whose frequency is pushed around by a modulator sine
synth = {arg carrfreq=440, modfreq=1, moddepth=0.01;
	SinOsc.ar(carrfreq + (moddepth*SinOsc.ar(modfreq)), 0, 0.25)
}.scope;

carrfreqslider = EZSlider(w, 300@50, "carrfreq", ControlSpec(20, 5000, 'exponential', 10, 440), {|ez| synth.set(\carrfreq, ez.value)});
w.view.decorator.nextLine;

modfreqslider = EZSlider(w, 300@50, "modfreq", ControlSpec(1, 5000, 'exponential', 1, 1), {|ez| synth.set(\modfreq, ez.value)});
w.view.decorator.nextLine;

moddepthslider = EZSlider(w, 300@50, "moddepth", ControlSpec(0.01, 5000, 'exponential', 0.01, 0.01), {|ez| synth.set(\moddepth, ez.value)});

w.front;
)



There are an infinite number of sidebands in the spectrum, with varying strengths. With C, M and D we can make spectra that are either very thick or very light.


C is the carrier frequency

M is the modulation frequency (how quickly the carrier is wobbling)

D is how far the frequency moves either side (the modulation depth, or frequency deviation)


Energy (the sidebands) turns up at the carrier frequency + M and the carrier frequency − M, and at further multiples of M:


C, C+M, C−M, C+2M, C−2M, and so on (occurring symmetrically around the carrier).
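
As a quick check with made-up values, say C = 400 Hz and M = 100 Hz, the first few sideband positions can be listed:

(
var c = 400, m = 100;
// pairs [C + kM, C - kM] for k = 0..3
(0..3).collect {|k| [c + (k*m), c - (k*m)] }.postln;
// -> [ [ 400, 400 ], [ 500, 300 ], [ 600, 200 ], [ 700, 100 ] ]
)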


For musical purposes:


I = D/M


is a good way to control frequency modulation, via the modulation index. If I is small there is little audible FM effect; the higher I is, the stronger the energy in the sidebands.
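
A rough sketch of using the index in practice; the carrfreq, modfreq and index argument names are my own, and the frequency deviation is recovered as D = index * modfreq:

(
x = {
	arg carrfreq = 440, modfreq = 110, index = 0;
	// frequency deviation D = index * modfreq, so index sets sideband strength directly
	SinOsc.ar(carrfreq + (index * modfreq * SinOsc.ar(modfreq)), 0, 0.2)
}.scope;
)
x.set(\index, 1); // a mild FM effect
x.set(\index, 10); // a much richer spectrum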











