Wednesday 2 March 2011

Arrays, Functions, SynthDefs


SuperCollider Blog 4 - Programming SuperCollider

This week we looked more closely at the syntax of SuperCollider as a programming language. I have already talked a bit about variable assignments and arrays in previous posts.

I looked at encapsulating code within brackets and at the use of functions.
When writing a function:
arg = argument, i.e. the function's inputs (the first thing stated inside the function brackets). The output of the function is the last line of code within the curly function brackets.

Basically the recipe for writing a function in SuperCollider:
1. Say what inputs are
2. Do calculation
3. Get output

Functions are written inside function brackets which look like this:
{}

Below is a code example that we were shown in class:
(
f = {arg input1, input2, input3;
{SinOsc.ar*0.1}.play;
input1*input2*input3;
}
)
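
To call this function with its inputs we use .value; a quick check, assuming the block above has just been run:

f.value(2, 3, 4) // plays the sine tone and returns 2*3*4 = 24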

Shortened to:

f = {SinOsc.ar*0.1}

and called with:

f.play

Programmatic ways of writing lists of data:
Here are a few examples of how lists can be created and filled with numbers in different ways:

Array.fill(10,{1})
In the above example there are 10 things in the array and every one of them is the number 1.

Array.fill(100,{1})
Gives us 100 1's in the array.

Instead of a fixed value, we can give the array a function:

Array.fill(100,{10.rand})
(The random numbers in this array will go from 0 to 9, NOT 1 to 10, as 10.rand counts from 0 up to but not including 10.)

Array.fill(100,10.rand)
Without the function brackets, 10.rand is evaluated once before the array is filled, so it just gives us 100 copies of the same number; the function needs to be called over and over to give a different number each time.

Array.fill(100,{arg count; count})
This creates an array of the numbers 0 to 99: the function is passed the index (count) each time it is called.

Array.fill(100,{arg count; count*count*82})
This fills the array with 82 times the square of each index, an example of how SuperCollider can be used to fill arrays with the results of more complicated equations.

We then looked at how arrays can be used to create scales using MIDI notes in SuperCollider.

This is an example of an array containing a MIDI note scale starting at middle C (a C major scale):
[60,62,64,65,67,69,71]


(
var scale, current;

current=60;
scale= Array.fill(8, {var old; old=current; current=current+rrand(1,3); old});
scale.postln;

current=60;
scale= Array.fill(8, {var old; old=current; current=current+rrand(1,3); old});
scale.postln;

current=60;
scale= Array.fill(8, {var old; old=current; current=current+rrand(1,3); old});
scale.postln;
)

The code above was written by my tutor Nick Collins as an example of code that can be used for generating a random scale. The scale starts on middle C because the current variable is set to 60. An array is then created and filled with 8 notes, calling the function once for each note. The block has been copied twice more, which gives us 3 different scales, each posted with postln. Writing it out repeatedly like this takes a long time, so we were taught to put one of these blocks of code into a function:
(
var makescale;

makescale= {var current;
current=60;
Array.fill(8, {var old; old=current; current=current+rrand(1,3); old;});
};

3.do({makescale.value.postln;});
)

To call this function we use:
makescale.value

If we want more of them we can write:
10.do(makescale)

The number 10 can be replaced with however many times we want to call the function (to see the scales posted, we can use 10.do({makescale.value.postln;}) as above).

When using rrand we can change the numbers in brackets after to give us random numbers within a specific range, for example:

rrand(45,80)

Will output random numbers between 45 and 80.

From browsing around the internet I could see that arrays are a very valuable part of the SuperCollider language and are used often. I found that they were used to create drum machines and load samples, amongst many other things.

I found http://sc3howto.blogspot.com/2010/05/arrays.html, a blog written by Charles Céleste Hutchins, which gave a useful introduction to different ways to write arrays and their uses.

SynthDefs
SynthDef is short for Synthesiser Definition. SynthDefs are used to define networks of UGens. Many synths can be created from a single SynthDef.

Before using SynthDefs we have just been using .play:

{SinOsc.ar*0.1}.play

This is the basic construct of a SynthDef:

(
SynthDef("mysound", {

}).add
)
If we look at the SynthDef count of the localhost server we can see that it has gone up.

This is the basic construct of declaring a SynthDef:

(
SynthDef("mysound",{Out.ar(0, SinOsc.ar*0.1)

}).add
)

Synths made from this SynthDef can now be created and assigned to variables:
a = Synth("mysound")
b = Synth("mysound")
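
Each of these variables now refers to its own synth running on the server. Since this SynthDef has no envelope to stop it, each synth plays until we free it:

a.free; // stop the first synth
b.free; // stop the second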

Here is an example (again written by my tutor) of creating a SynthDef using a sawtooth wave with frequency 440 Hz, shaped with an envelope which is then cut off with the use of a doneAction:
(
SynthDef("mysound2", {arg freq = 440; var env, saw;

saw = Saw.ar(freq);
env = EnvGen.ar(Env([0,1,0],
[0.01,0.4]), doneAction:2);

Out.ar(0,saw*env*0.1)
}).add
)

Synth("mysound2")

Synth("mysound2", [\freq, 1000])

When text is written with “” around it, it is a String; the syntax colour is grey:

"name"

When text is written with a \ in front of it then it is a Symbol and the syntax colour is green:
\name

"name" as a String lets you access its characters individually. A Symbol is not an array of characters but a single globally defined value, cheaper in storage than a String.

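A quick sketch of the difference:

"name".at(0).postln; // -> n, a String is an array of characters we can index
\name === \name // -> true, two Symbols with the same name are the one identical object
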
I attempted to change:
{Line.kr(1,0,1.0)*Blip.ar(550,10)*0.1}.play
into a SynthDef. This was my attempt, I'm not sure if it is right:

(
SynthDef("RosaSound", {arg freq = 550;

Out.ar(0,Line.kr(1,0,1.0)*Blip.ar(freq,10)*0.1)
}).add
)

Synth("RosaSound")

We then worked in pairs to answer some exercises that we were set:

1.Imagine you have to generate a rhythm for one 4/4 bar (i.e. 4 beats). Write a short program which selects random successive numbers from [1.0, 0.5, 0.25] to fill up one bar's worth of beats. How do you deal with going past the end of the bar? (hint: what does .choose do on an array?)

answer:
(
var beats=[1.0,0.5,0.25];
var count=0;
while({count<4},{
var beat=beats.choose; // pick a random duration from the array
min(4-count,beat).postln; // truncate the final beat so we never go past the end of the bar
count=count+beat;
})
)

2.Rewrite the following code as a series of nested ifs

i.e. if(condition1, {}, {if (condition2, etc.)})

The code to rewrite:
(
var z;
z = 4.rand;
switch (z,
0, { \outcome1 },
1, { \outcome2 },
2, { \outcome3 },
3, { \outcome4 }
).postln;
)

Answer:
(
var z;
z = 4.rand;
if(z==0,{\outcome1},{
if(z==1,{\outcome2},{
if(z==2,{\outcome3},{
if(z==3,{\outcome4},{})
})
})
}).postln;
)


3.Now also rewrite it as a choice amongst elements of an array (hint: [\one, \two].choose).
Answer (one way, using .choose):

[\outcome1, \outcome2, \outcome3, \outcome4].choose.postln

4. Compare each of these lines by running them one at a time:

2.rand

2.0.rand

2.rand2

2.0.rand2

rrand(2,4)

rrand(2.0,4.0)

exprand(1.0,10.0)


Write a program which posts ten outputs from any one of these lines in a row. Advanced: actually allow user selection (via a variable for instance) of which line gets used to generate the ten random numbers.
Answer:
(
f={arg line;
switch (line,
1, { for(1,10,{2.rand.postln}) },
2, { for(1,10,{2.0.rand.postln}) },
3, { for(1,10,{2.rand2.postln}) },
4, { for(1,10,{2.0.rand2.postln}) },
5, { for(1,10,{rrand(2,4).postln}) },
6, { for(1,10,{rrand(2.0,4.0).postln}) },
7, { for(1,10,{exprand(1.0,10.0).postln}) }
)
};
f.value(3) // 3 selects which line is used; change it to try the others
)

To be honest I was lucky to be working with someone who turned out to be really good at programming; I would have struggled to work through the problems alone. I found the exercise really useful, as it showed me how to tackle a problem logically: first setting out the structure of the answer, then breaking the problem into steps and finding the most efficient way of coding each one in SuperCollider, by watching how someone else worked through it.

Envelopes and modulation synthesis

SuperCollider Blog 3


Envelopes


“Envelopes describe a single event (as opposed to periodically repeating events) that changes over time. Typically they are used to control amplitude (a VCA), but they can be applied to any aspect of sound” Computer Music with examples, David Michael Cottle.


The amplitude of a sound decreases over time but at the transient of a sound there is a certain degree of attack. Different instruments have different attacks and small variations in attack time can make a big difference. All preset sounds on synthesisers use envelopes to create a change in volume over time.


There are fixed duration envelopes and sustain envelopes. Sustain envelopes could represent the length of time a key is held down on a piano and can be represented using a gate in SuperCollider. Fixed envelopes can represent percussive instruments such as cymbals, as once they have been played and the sound is ringing out it is not always possible to know how long it will continue to ring out for.


ADSR stands for attack, decay, sustain and release. These are terms often used when using envelopes. They are also arguments in SuperCollider.



Env([1,0,1],[1.0,0.5]).plot //This makes an Envelope with three control points, at y positions given by the first array, and separated in x by the values in the second (see the Env help file)





This is the plot of the envelope created by the code above, taken from the course help files.


The arguments for Env are:

Env(levels, times). There is one less number in times than levels.


There are different types of envelopes that can be used, such as Env.linen, Env.adsr and Env.perc. They have different arguments that are suited to certain sounds.
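
For example (the parameter values here are arbitrary choices, just for plotting):

Env.perc(0.01, 0.5).plot // fixed-duration percussive shape: attack time, release time
Env.adsr(0.01, 0.3, 0.5, 1.0).plot // sustain envelope: attack, decay, sustain level, release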


They are called envelopes as some of the classical shapes that can be viewed using .scope look like the shape of an envelope.


To use envelopes for synthesis we need to use EnvGen.


“SC uses EnvGen and Env to create envelopes. Env describes the shape of the envelope and EnvGen generates that shape when supplied a trigger or gate.” Computer Music with examples, David Michael Cottle.


If we use an envelope and wrap it in EnvGen.ar we can make it run at audio rate. This one starts at 1 and goes to 0 over 1 second. If we run this we won't hear anything, but we can see it on the scope: it is too slow for human ears to hear, as our ears only pick up frequencies above roughly 16-20 Hz. If we multiply it by a SinOsc we are able to hear it.


We therefore plug Env into EnvGen:


{EnvGen.ar(Env([1,0],[1.0]))}.scope


and then multiply with a SinOsc:


{EnvGen.ar(Env([1,0],[1.0]))*SinOsc.ar}.scope


A useful thing to note when working with envelopes is the doneAction argument. Setting doneAction: 2 frees the synth once the envelope has finished, so a voice we are done with stops running and uses up less CPU.


{Saw.ar(EnvGen.kr(Env([1000,100],[0.5]),doneAction:2),0.1)}.plot



Modulation synthesis


“Modulation in signal processing refers to the control of an aspect of one signal by another” Introduction to Computer Music, Nick Collins.


The signal that controls is the modulator and the signal that is controlled is the carrier. Modulation is nonlinear, so the output may contain frequencies that were not in the spectrum of either input. These new frequencies that did not appear in the inputs are called sidebands.

It is a good idea to explore modulation with 2 sinusoids rather than using complex sounds, though it is possible to predict the effect of modulating more complex sounds from the sinusoid case using Fourier analysis.


Ring modulation


Ring modulation is the result of simply multiplying the two signals:


carrier * modulator


though it could also be written modulator * carrier, it makes no difference.


Both of the signals are bipolar. This means the amplitudes of the signals can take on both positive and negative values, and can therefore oscillate above and below 0.
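
A minimal sketch in SuperCollider (the frequencies are arbitrary choices): multiplying two bipolar SinOscs gives ring modulation, so with a 440 Hz carrier and a 112 Hz modulator we expect sidebands at 552 Hz and 328 Hz:

{SinOsc.ar(440) * SinOsc.ar(112) * 0.2}.play // carrier * modulator, scaled down to a safe volume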


For complicated waves we get many more components: there is much more of a spectrum in the output, as more sinusoidal components have been multiplied together.


“If the carrier were a sum of three sinusoids, and the modulator a sum of five, ring modulation would create 15 multiplications and thus 30 output frequency components. This is a cheap way of getting more complicated spectrum out of simpler parts”. Introduction to Computer Music, Nick Collins.


Amplitude modulation


Using amplitude envelopes and tremolo are both examples of amplitude modulation.


Amplitude modulation is like ring modulation. The difference is that the modulator is unipolar. This means that it is always positive.


The carrier is usually bipolar and is therefore different from the unipolar modulator.
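
A minimal sketch (the rates are arbitrary choices): scaling the modulator with .range keeps it between 0 and 1, i.e. unipolar, which gives a tremolo effect on the carrier:

{SinOsc.ar(440) * SinOsc.ar(2).range(0, 1) * 0.2}.play // unipolar 2 Hz modulator on a 440 Hz carrier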


Stockhausen used ring modulation in many of his pieces, including Telemusik, Gesang der Jünglinge and Mixtur.

Ring modulation was also famously used by Brian Hodgson to create the sound of the voices of the Daleks in the TV series Doctor Who.


I found it quite difficult to understand what the sidebands were; I understood it a lot more after reading this:

“In amplitude modulation there are two sidebands; the sum and difference of the carrier frequency (the audio frequency that is being modulated) and the modulator frequency (the frequency that is controlling the audio frequency). A carrier frequency of 500 and a modulating frequency of 112 could result in two sidebands: 612 and 388. If there are overtones in one of the waves (e.g. a saw wave being controlled by a sine wave), then there will be sidebands for each overtone.” Computer Music with examples, David Michael Cottle.


FM Synthesis


FM synthesis is similar to ring and amplitude modulation, but in FM synthesis there can be many more sidebands. The number of sidebands depends on the modulation index. We get the modulation index from how far the frequency deviates from the carrier, relative to the modulation frequency (the ratio between deviation and modulation frequency). This gives us an index that is independent of the modulation frequency. The higher the value of I (the modulation index), the richer the timbre.


In our class at uni we were given code for a GUI that had 3 sliders to change the carrier frequency, modulation frequency and modulation depth:


(

var w, carrfreqslider, modfreqslider, moddepthslider, synth;


w=Window("frequency modulation", Rect(100, 400, 400, 300));

w.view.decorator = FlowLayout(w.view.bounds);


synth= {arg carrfreq=440, modfreq=1, moddepth=0.01;

SinOsc.ar(carrfreq + (moddepth*SinOsc.ar(modfreq)),0,0.25)

}.scope;


carrfreqslider= EZSlider(w, 300@50, "carrfreq", ControlSpec(20, 5000, 'exponential', 10, 440), {|ez| synth.set(\carrfreq, ez.value)});

w.view.decorator.nextLine;


modfreqslider= EZSlider(w, 300@50, "modfreq", ControlSpec(1, 5000, 'exponential', 1, 1), {|ez| synth.set(\modfreq, ez.value)});

w.view.decorator.nextLine;

moddepthslider= EZSlider(w, 300@50, "moddepth", ControlSpec(0.01, 5000, 'exponential', 0.01, 0.01), {|ez| synth.set(\moddepth, ez.value)});


w.front;

)



There are an infinite number of sidebands in the spectrum, with varying strength. With C, M and D we can make either very thick or very light spectrums.


C is carrier frequency

M is modulation frequency (how quickly it's wobbling)

D is how far the frequency deviates either side (the modulation depth or frequency deviation)


Energy (the sidebands) turns up at the carrier frequency +M and the carrier frequency -M:


C, C+M, C-M, C+2M, C-2M, ... (occurring symmetrically).


For musical purpose:


I = D/M


is a good way to control frequency modulation using the modulation index. If I is small then there is little audible FM effect; the higher I is, the stronger the energy in the sidebands.
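
A minimal non-GUI sketch of this (the values of c, m and i are arbitrary choices): set the index i directly and derive the deviation as D = I*M, so sidebands appear at c ± k*m:

(
{
var c = 400, m = 100, i = 5; // carrier freq, modulation freq, modulation index
SinOsc.ar(c + (i * m * SinOsc.ar(m))) * 0.1
}.play
)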












Sound synthesis and Fourier analysis

Computer music blog 2:


This week I started to look at how SuperCollider can be used for sound synthesis. I began by booting the internal server. The internal server was used in order to create oscilloscope views of the synthesized sounds. I used online course tutorial material to get started.

I used:


FreqScope.new


This uses Lance Putnam’s frequency scope which is useful for visually plotting the spectrum of sounds explored.

I quickly recapped UGens and the fact that SuperCollider uses them as building blocks, connecting them together to create synthesizers and sound processors. UGens have inputs and outputs, though most UGens have just one output. After some practice I expect to get to know typical parameter values and inputs/outputs for different UGens.


I started to learn about subtractive synthesis. This is where you start with a complex sound and subtract parts from it in order to sculpt a different sound.


The course material gives pure white noise as a sound source to subtract from:



{WhiteNoise.ar(0.1)}.scope


and then plugged it into a filter to give a ‘less raw’ sound:


{LPF.ar(WhiteNoise.ar(0.1),1000)}.scope


The LPF cuts out energy above its cutoff frequency, which is currently set to 1000 Hz.


To plug the WhiteNoise UGen into the LPF I need to nest one into the other. The UGen's inputs can be thought of as being the list inside the parentheses.


LPF.ar(input signal, cutoff frequency, ...)

If you are unsure about what the inputs are, double click on the name of the UGen and press cmd+d, which will bring up a help file showing you.


In our previous example we plugged the white noise generator into the low pass filter. This is therefore the input signal and must be the first thing contained inside the brackets. 1000 is the next argument, the cutoff frequency.


I then (still using course material) looked at how to vary the cutoff frequency over time. This can be done by using a UGen called a line generator.


Line.kr(10000,1000,10) // take ten seconds to go from 10000 to 1000


Instead of using the previous fixed value of 1000 the Line UGen can be plugged into the place of the second argument in the parentheses:


{LPF.ar(WhiteNoise.ar(0.1),Line.kr(10000,1000,10))}.scope



I tried adjusting the code slightly using a few of the example sources and filters.


I used the Resonz filter rather than LPF:


{Resonz.ar(WhiteNoise.ar(0.1),Line.kr(10000,1000,10))}.scope


The result sounded to me to be less noisy and cut out some of the high and low frequencies.


I tried using a different noise source in place of WhiteNoise. I used PinkNoise to see what difference that would make:


{LPF.ar(PinkNoise.ar(0.1),Line.kr(10000,1000,10))}.scope


This gave a much less harsh sound than WhiteNoise and also sounded quieter. I looked at the help file to try and find out more about it and found that it:


“Generates noise whose spectrum falls off in power by 3 dB per octave.

This gives equal power over the span of each octave.

This version gives 8 octaves of pink noise.”


It also had 2 arguments, mul and add.


I then joined them together and used Resonz as the filter and PinkNoise as the sound source. Together they created a much more tamed sound than the original sound. It was slightly flat sounding at first but as the frequencies changed over the 10 seconds it gave a sound that reminded me of the wash of the sea over a shore heard from a distance.


I was then taught about variables. Values are assigned to variables using the = operator.


For example:

a = 1


then 1 is stored in a.


Variables can be useful in many ways. One way they are useful is as a syntactical shortcut when storing things like the size of an array.


(1..10) would give us an array from 1 to 10. We could use any number, and this can save us a lot of typing, for example if we wanted a larger array such as (1..100), as we would not need to type it all out.
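
For example:

(1..10) // -> [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
(1, 3..11) // -> [ 1, 3, 5, 7, 9, 11 ], the same shortcut with a step of 2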



Letters a-z can be used as variable names, but it is best to avoid using the letter s. By default, s is set to contain the object that refers to the internal synthesiser (the server), so overwriting it causes problems.

Another danger with global variables is that one may have been previously set somewhere in another file, so your code won't work as expected. If you instead define a variable using


var n


this makes it a local variable instead of a global variable. The name of a local variable can be anything: I could change it to my name, rosa, and it would still work.
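
A quick sketch of that (the value 660 is an arbitrary choice):

(
var rosa; // a local variable; any name works
rosa = 660;
{SinOsc.ar(rosa) * 0.1}.play;
)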



Sawtooth waves:


Sawtooth waves are much richer than SinOsc. They have a bright sound compared to the dullness of SinOsc.


{Saw.ar(440)*0.1}.play


To make a sum of SinOscs sound like a sawtooth, each harmonic needs to be divided by its harmonic number: 1, 1/2, 1/3, 1/4, etc.
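
A sketch of that additive recipe (assuming a 440 Hz fundamental and 10 harmonics, both arbitrary choices):

(
{
var n = 10; // number of harmonics to sum
Mix.fill(n, {arg i; SinOsc.ar(440 * (i+1)) / (i+1)}) * 0.1 // harmonic i+1 at amplitude 1/(i+1)
}.play
)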


One of the differences will be CPU cost. On the internal server you can see that a sum of SinOscs costs more than a single sawtooth UGen, which is quicker as no sine waves need to be added up.


If you wanted to create a sawtooth and needed to know which SinOscs to add up you would need to use Fourier analysis.


Freq scope shows us Fourier analysis.


If we took the sawtooth wave and looked at it using the freq analyser (a little window with green lines that shows the frequencies) we would see that the harmonics are evenly spaced, showing the harmonic series. Where each peak falls off along the straight line relates to its harmonic number.

The reason Fourier analysis works is that you can align it with the period of the waveform: if the frequency is 440 Hz, take a snapshot of one period and do Fourier analysis on that period. Think of it as finding the sine waves that fit that period.


Say the root fundamental is 100 Hz. The wave repeats 100 times a second, so one period is a 100th of a second wide.


This fits exactly in the snapshot. If the snapshot is compared with a 100 Hz sine wave, one cycle fits exactly; with a 200 Hz sine wave, two fit; with 300 Hz, three should fit exactly, and so on. The signal then correlates like a sawtooth, the strengths falling along a diagonal-ish line. When complex waveforms are broken up in terms of sines like this, that is Fourier analysis.


Some of the oscillators you can get hold of in SC are packed complex recipes. There is already a sawtooth UGen, so we don't need to worry about making one. If you wanted to make your own you would make a wavetable.


A wavetable is one period of a waveform drawn out. It is like sampling, but with a single period rather than long sound files. Rather than computing, say, 5 cycles of a SinOsc, SC can store the shape of a single sine cycle and keep repeating it.
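
A sketch of the idea (assuming the server s is booted; the harmonic amplitudes are arbitrary):

(
b = Buffer.alloc(s, 512, 1); // space for one period, in wavetable format
b.sine1([1, 0.5, 0.33, 0.25]); // fill it with a sawtooth-ish harmonic recipe
{Osc.ar(b, 440) * 0.1}.play; // loop the stored period at 440 Hz
)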


Fourier analysis

I decided to do some more reading up on Fourier analysis in order to get a better understanding of it.


“The Fourier transform decomposes a signal onto a basis of sinusoids. Given the frequency for a sinusoid, the analysis compares the input signal with both a pure sine and a pure cosine wave of this frequency. This determines an amplitude and phase that indicates how well the input matches the basis element”. Nick Collins, Introduction to Computer Music.


The first line of that quote explains the basic point of Fourier analysis. I read some more to see how this happens.


DFT stands for Discrete Fourier Transform. Sounds vary over time; they can constantly change frequency rather than staying stationary. In order to analyse this changing signal and break it into sinusoids we must take a series of snapshots of the signal. These snapshots are better known as 'windows'. Windows are a number of samples long and each window is treated with a new DFT. Within a snapshot, the signal is considered to be stationary. Windows can overlap one another, but they are usually spaced evenly in time and are the same size.


FFT stands for Fast Fourier Transform. It is an algorithm that speeds up the computation of the DFT. Each FFT gives one FFT frame of spectral data.
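
SuperCollider exposes this directly; a minimal sketch of an FFT analysis followed by resynthesis (the 1024-sample window size is an arbitrary choice):

(
{
var in, chain;
in = WhiteNoise.ar(0.1);
chain = FFT(LocalBuf(1024), in); // analyse successive 1024-sample windows into spectral frames
IFFT(chain) // resynthesise audio from the spectral frames
}.play
)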


A sequence of DFTs is carried out so that every sample appears in a window. This is called the STFT (Short-Time Fourier Transform). To do this a basic analysis frequency must be set, using the fundamental frequency corresponding to the period of the waveform. A problem occurs when we do not already know what this fundamental frequency is. Other problems arise if the signal contains a mixture of different periodic sounds, or if the sound is inharmonic.


It is possible to try to analyse non-periodic sounds using Fourier analysis. A large period, which corresponds to a small fundamental frequency, can be used. You have to hope the large period is longer than the periods of the component frequencies of the sound you are measuring. The Fourier analysis measures the energy at multiples of the fundamental frequency, so if we have a low enough fundamental frequency then we are able to get practical use out of the harmonic multiples.


There can also be problems with parts of the frequency content 'falling between the gaps' when analysing. The signal's energy gets distributed across the analysis harmonics, giving us an indirect view of the spectrum of the sound.


“Consider a sampling rate R of 44 100 Hz, and a segment of duration one second, which as a period corresponds to a frequency of 1 Hz. If this were the basis for Fourier analysis, we would measure at frequency multiples of 1 Hz, so at 1 Hz, 2 Hz, 3 Hz, … all the way up to … the Nyquist frequency of 22 050. Each harmonic of 1 Hz is called a frequency bin or band of the transform” Nick Collins, Introduction to Computer Music.


This helped me to understand the frequency analysis we did in class when looking at curves on the frequency analyser, and how Fourier analysis is actually used to analyse periods of a wave.


I also found that there are different types of windows that can be used with Fourier analysis, often named after their creators, such as Hann, Hamming or Kaiser-Bessel. They are used for cutting the signal into segments. Using different windows can affect the focus on the peak location in the spectrum, as well as the amount of spillage between spectral bins. The most popular windows are the Hann and Kaiser-Bessel windows.