Friday, 29 November 2013

Scheduling

Time is a very important thing in music. SuperCollider uses clocks to schedule when things happen, so clocks can get things to start and stop when needed. There are three types of clock in SuperCollider: SystemClock, TempoClock and AppClock.
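
Most of the examples below play a SynthDef called \bleep from the course notes, which takes a \note argument as a MIDI note number. The course version isn't reproduced here, so this is a minimal stand-in of my own (its arguments are my guess) that lets the snippets make a sound:

(
SynthDef(\bleep, { arg note = 60, amp = 0.1, pan = 0.0;
	var env, sig;
	env = EnvGen.ar(Env.perc(0.01, 0.3), doneAction: 2); // free the synth when the envelope ends
	sig = Blip.ar(note.midicps, 4) * env * amp;
	Out.ar(0, Pan2.ar(sig, pan));
}).add;
)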

The SystemClock schedules in seconds. If you are scheduling something then it means you are scheduling for it to happen at some point in the future, for example in this code that I got from course notes:
(
SystemClock.sched(0.0,//start at 0.0 sec from now, i.e. immediately
{//a function which states what you wish to schedule
Synth(\bleep);
1 //repeat every second
}
)
)

SystemClock is scheduled to start at 0.0 seconds from when the code is run. It then creates a Synth from the \bleep SynthDef and, because the function returns 1, repeats once a second. This is called relative scheduling.
As well as relative scheduling there is also absolute scheduling. To get the current SystemClock time you run:

Main.elapsedTime; //gives the time in seconds since the application started

schedAbs is used for scheduling in absolute time.
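
For example, something like this (my own example rather than one from the notes) should post a message five seconds from now, at an absolute time worked out from Main.elapsedTime:

SystemClock.schedAbs(Main.elapsedTime + 5.0, { "five seconds later".postln; nil }); // returning nil means don't repeat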

TempoClock is more commonly used than SystemClock as it allows you to schedule in beats and measures. There can be many different TempoClocks running at the same time at different tempos, but there is only one SystemClock.

In SuperCollider tempo is measured in beats per second (bps) as opposed to beats per minute (bpm).

1 bps = 60 bpm
1.6666667 bps = 100 bpm
2 bps = 120 bpm
2.4 bps = 144 bpm
3 bps = 180 bpm
etc

In order to go from bps to bpm you need to multiply by 60. To go in the other direction then divide by 60.
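
For example:

144 / 60;                           // 144 bpm is 2.4 bps
2.4 * 60;                           // and back to 144 bpm
TempoClock.default.tempo = 144/60;  // set the default clock to 144 bpm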

This code demonstrates the SynthDef \bleep being played with pauses (waits) of different lengths in between:
(
{
Synth(\bleep);
1.0.wait;
Synth(\bleep);
0.5.wait;
Synth(\bleep);
}.fork(TempoClock(2))
)

A regular TempoClock schedules at a clock tempo of 1 bps. TempoClock(2) is a clock running at a tempo of 2 bps.

The wait times are in beats. 1.0 is equivalent to a crotchet, 0.5 is equivalent to a quaver and 0.25 is equivalent to a semiquaver.



There is a default TempoClock, TempoClock.default; it is best to assign it to a variable to save typing:
t = TempoClock.default;

The TempoClock may have been running for a while. It is possible to check where in time it has got to:

t.elapsedBeats; //what exact logical beat time are we at

t.bar; //which bar are we in (default assumption is 4/4)

t.elapsedBeats.ceil; //find next beat

t.elapsedBeats.floor; //find last beat
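
Rounding up to the next beat is handy for starting something exactly on the beat. A small sketch of my own (assuming the \bleep SynthDef again):

t.schedAbs(t.beats.ceil, { Synth(\bleep); 1 }); // start on the next whole beat, then repeat every beat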

If you are using a different clock the ways to check are:
SystemClock.beats;
AppClock.beats

In class we looked at scheduling something more musical. We were shown how an array of MIDI notes could be played at a tempo of 120 bpm with a wait of one beat between each note:
(
{

[64,64,64,64,64,64,64,67,61,63,64].do{arg valuefromarray;
Synth(\bleep,[\note,valuefromarray]);
1.0.wait;
};
}.fork(TempoClock(2))
)

We then looked at simultaneously using quarter notes and eighth notes:

(
{

[48,48,43,48].do{arg whichnote;

Synth(\bleep, [\note,whichnote]);
1.0.wait;
}

}.fork
)
(
{

[60,64,62,60].do{arg whichnote;

Synth(\bleep, [\note,whichnote]);
1.0.wait;
}

}.fork(TempoClock(2.0));


{

[68,63,61,66].do{arg whichnote;

Synth(\bleep, [\note,whichnote]);
1.0.wait;
}

}.fork(TempoClock(3.0));

)

TempoClock(2.0) plays the notes in its array as quavers and TempoClock(3.0) plays them as semiquavers. Because each forked routine runs autonomously, the quarter notes and the eighth notes can run at the same time.

We also looked at nesting loops inside a fork so that a short pattern repeats with the pitch stepping up each time round. Only the start of the example survived in my notes, so the waits and the closing of the braces here are my own completion:

(
{
4.do{arg whattimeround;
	4.do{
		Synth(\bleep, [\note, 60+whattimeround]);
		0.25.wait;
	};
	1.0.wait;
};
}.fork(TempoClock(2.0))
)

Here is an example of scheduling from code used in an algorithmic metal composition I made for uni coursework (this is just an excerpt of the opening):

(
//Sets up the tempo clock in order to schedule the piece
// TempoClock(2) is a clock running at 2 beats per second

var tempoclock = TempoClock(2);
var effect,effect2;

//add effects
effect = Synth.tail(Group.basicNew(s,1),\PlayBuffEffecta);
effect2 = Synth.tail(Group.basicNew(s,1),\PlayBufEffect);

{
//opening section

//part 1
5.do{
// vox- solo to begin with
{
Synth("PlayBuf",[\bufnum, a[20],\pos,1.0.rand,\dur,8.0,\loop,1,\amp,0.1]);
}.fork(tempoclock);


//16*spb = 8.0 IF spb = 0.5
// layer 2; backdrop accompaniment, guitar leit motif
{

8.wait;
Synth("PlayBuf",[\bufnum, a[8],\pos,1.0.rand,\dur,8.0,\loop,1,\amp,0.1]);

}.fork(tempoclock);

// layer 3 ; granular part, guitar
{
16.wait;
32.do{
Synth("PlayBuf",[\bufnum, a[18],\pos,rrand(0.2,0.4),\dur,0.1,\amp,0.05]);
0.5.wait;
}

}.fork(tempoclock);

Tuesday, 5 April 2011

Human interaction and GUI


This entry focuses on human interaction and the use of GUIs in SuperCollider. SuperCollider is an 'interactive' programming language, meaning that new parts of a program can be written whilst the program is active; writing the program as you use it is therefore part of the program itself.
Here I am focusing more on user interaction with SuperCollider using things such as the mouse, the computer keyboard and midi controllers to create and change sound.

In previous posts I have demonstrated using the mouse as a controller using MouseX.kr and MouseY.kr. This was a quick and simple way for us to see how quickly we could start physically interacting with the program and changing the sounds. MouseX and MouseY are both UGens with the arguments (minval, maxval, warp, lag). Minval and maxval are the screen ranges. Warp is a mapping curve which can either be linear (0) or exponential (1). Lag tells us how much lag is applied to the cursor movement. I guess MouseX and MouseY act kind of like a knob would, as a lot of values can be accessed quickly; it is just a quicker way of testing something rather than creating a knob GUI. MouseX and MouseY can be used to control different things at the same time, such as amplitude and frequency. I will not focus too heavily on that now as I explained it a bit in my first blog entry. We were shown a really interesting code example written by James McCartney of a strummable guitar, where you move the mouse to recreate the sound of strumming a guitar.

We were given example code in which you could use the computer keyboard to play notes. It uses doc = Document.current;
This means it refers to the window that you are typing into, which allows you to play sounds whilst you type into that window. Document is a class which “represents a text document within the context of your text editing environment” (help file).

I read about Laurie Spiegel's 'Music Mouse' (http://retiary.org/ls/programs.html) in Computer Music by Nick Collins, where "qwerty and mouse control sets in motion music sequences from single- to multi-note patterns". This is a program that uses the computer keyboard and mouse as controllers, just as I have described in terms of SuperCollider. I was interested by the fact it was created in 1986, as it shows this idea has been developed for a long time.

I found an interesting website online that shows how you can control SuperCollider using an android phone by sending OSC messages over the network:
https://github.com/glastonbridge/SuperCollider-Android/wiki/How-to-control-SC-Android-remotely

MIDI:

MIDI (Musical Instrument Digital Interface) messages are not actual sounds. MIDI is information about a sound that can only be heard once it has been synthesised. This is useful, as the sound can be synthesised to seem as though it is being played by a variety of instruments, and it also makes it fairly simple to edit the information.

I tried connecting a midi keyboard using course material starting with:

MIDIClient.init, which starts you off by posting a list of available MIDI devices.

MIDI notes and velocities range from 0-127 as they are 7-bit values. The velocity is converted into a value between 0.0 and 1.0, giving an appropriate amplitude control through the use of \vel.

We also looked at how a SynthDef can be created and then assigned to MIDIIn.noteOn. This means that you can define and shape the sound you want to create and then assign that to the MIDI information inputted.
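
I haven't copied the course code here, but the idea was roughly along these lines (a sketch of my own, again assuming a \bleep-style SynthDef with \note and \amp arguments):

(
MIDIClient.init;
MIDIIn.connectAll;
MIDIIn.noteOn = { arg src, chan, num, vel;
	Synth(\bleep, [\note, num, \amp, vel/127]); // scale 0-127 velocity to 0.0-1.0
};
)
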
I looked online for some inspiration as to some of the interesting outcomes of using MIDI with SuperCollider.

I found this video demonstrating a program written in SC called 'The Grid'. The MIDI is being controlled with a Launchpad:



I also found this one:



His description of it: “Physically, it's a sheet of aluminium, suspended on rubber buffers in a wooden frame, which uses piezo sensors to detect the position and velocity of strikes on the pad surface. The electronics is based on an arduino board, and there is some python code which runs on the host computer to map the raw incoming sensor data from the arduino into a stream of midi events, including the x and y coordinates of each strike. These events are then picked up by supercollider and used to play sounds.”

GUI:
GUIs are user interfaces that allow users to interact with computers and other electronic devices through images (such as icons, scroll bars, windows) which are usually used in conjunction with the mouse click.

In SuperCollider the GUI class (which can be explored using [GUI] cmd+D) 'provides a means of writing cross platform GUI code'. We are given the power to code our own GUIs to create an interface for our SC projects. These can include things like sliders, knobs, buttons, drop down boxes, colour schemes and labels.
If you press shift+cmd+N you can see a GUI which shows you the different widgets. You can click on the 'new window' button and then click on the different GUI options, for example SCNumberBox, and drag it into the window to see what that number box graphic looks like.
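
As a tiny example of my own (not from the course notes), this makes a window with a single slider that just posts its value when moved:

(
w = Window("test", Rect(100, 100, 220, 60)).front;
Slider(w, Rect(10, 10, 200, 20)).action_({ arg slider; slider.value.postln; });
)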

There are two different GUI implementations: Cocoa ('cocoa') for the Mac, and SwingOSC ('swing'), a set of Java-based cross-platform classes that can be used on Windows. The fact that the widgets are cross platform helps to give SuperCollider a distinctive look that is easy to recognise.

I spent a lot of time looking at examples of different GUI code to get an idea of the possibilities for creating an interface, especially when trying to create my own synthesiser. The widgets offered a lot of ways for someone to interact with something such as a synth. There were a lot of colours that could be used for customising dials, backgrounds and sliders; you could choose whereabouts in the window to place the different widgets; and you could use SynthDefs to assign the sounds you wanted manipulated to the different GUI parts. I found it exciting seeing the code turn into something that could actually be manipulated by people using an interface, but I also found it challenging, especially placing things in the right place in my main window.
This is a print screen of my GUI:



The background changes colour. I liked the look of it but it still needs a lot of improvement. I didn't know how to add labels underneath the knobs and there is too much space at the bottom. I also wanted it to have a title, and in general it needs to be more obvious what it does. The last two knobs on it don't work properly either. I am going to work on it and then repost it in a later entry.

I found the help files useful and think that I probably just need some more practice. I decided to look into why there is a fairly basic uniform GUI library.

I read the GUI chapter of David Cottle's 'Computer Music with Examples in SuperCollider 3' and found that:

“When first building on an idea it is far more efficient for me to work with a non graphic (code based) synthesis program. For that reason (and also because I tend toward automation that excludes external control) GUIs are the last consideration for me. That's one of the reasons I prefer working with SC: GUIs are optional.”

It seems that he is quite a code purist and is fairly negative when writing about GUIs. He does go on to give quite a few detailed GUI examples, which are useful to see. He ends with "But I think often too much programming time is spent on user interface. Wouldn't you rather be working on the music?", which goes to show that he doesn't want GUIs to be too much of a focus.

I looked online to see if many people supported his view. I found a website www.creativeapplications.net and read “So far SuperCollider has a reputation for being primarily a text based language without a substantial GUI, and there have been many interesting works done that treat code as an art-form in itself and revel in the use of pure code as an interface.”

It made me think of a TOPLAP live coding concert that I went to, in which they projected the code they were writing live onto a screen for everyone to see, and it was really fascinating. Projecting it showed that they wanted that process to be appreciated as well as the final outcome.

They had a link to a SuperCollider Flickr group: http://www.flickr.com/groups/supercollidergui/pool/

There they were calling for people to send in designs for standard elements, which the online SC community would then discuss the pros and cons of. I clicked on the discussion link and couldn't see anything there, so I am not sure if any of the designs were used, but this may just be because I don't have a Flickr account.

It seems a lot of SuperCollider users do see their code as an art form, as well as the sounds that it outputs. I found a nice article on the internet in which the author writes:

“The thought process needed to imagine a desired result, being able to see beyond an existing solution and come up with something new, even the basic initial thought of "this is what I want the computer to do" is an entirely creative one. Inspiration is needed to want to create something. This has roots in creativity, something all the best programmers have, even require. What makes someone want to write a program? They want the computer to do something. That is pure creativity, often as a creative solution to a practical problem”

http://r.je/is-programming-art.html

I personally think it is important to have the option to use a GUI, as it increases usability and makes interactivity more enjoyable for people who maybe have less interest in the code behind the sound. The person that wrote the code can still gain satisfaction from having created something that someone else is able to use and enjoy.

Wednesday, 2 March 2011

Arrays, Functions, SynthDefs


SuperCollider Blog 4 - Programming SuperCollider

This week we looked more closely at the syntax of SuperCollider as a programming language. I have already talked a bit about variable assignments and arrays in previous posts.

I looked at the use of encapsulation of code within brackets and the use of functions.
When writing a function, arg means argument: the arguments are the inputs, and are the first thing stated inside the function brackets. The output of the function is the value of the last line of code within the curly function brackets.

Basically the recipe for writing a function in SuperCollider:
1. Say what inputs are
2. Do calculation
3. Get output

Functions are written inside function brackets which look like this:
{}

Below is a code example that we were shown in class:
(
f = {arg input1, input2, input3;
{SinOsc.ar*0.1}.play;
input1*input2*input3;
}
)
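
Calling it with three inputs evaluates the body and returns the last line, for example:

f.value(2, 3, 4); // plays the quiet sine and returns 2*3*4 = 24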

Shortened to:

f = {SinOsc.ar*0.1}

and called with:

f.play

Programmatic ways of writing lists of data:
Here are a few examples of how lists can be created and filled with numbers in different ways:

Array.fill(10,{1})
In the above example there are 10 things in the array and everything in the array is the number 1.

Array.fill(100,{1})
Gives us 100 1s in the array.

Instead of a fixed thing in the array we could give a function:

Array.fill(100,{10.rand})
(The random numbers in this array will go from 0-9, NOT 1-10, as we start counting from 0 in SuperCollider.)

Array.fill(100,10.rand)
Without the function brackets it just gives us 100 copies of the same number; the function needs to be evaluated over and over to give a different number each time.

Array.fill(100,{arg count; count})
This creates an array counting up from 0 to 99, because the function is passed the index (count) of each slot.

Array.fill(100,{arg count; count*count*82})
This is an example of how SuperCollider can be used to fill arrays with the results of calculations; here each entry is the square of its index multiplied by 82.

We then looked at how arrays can be used to create scales using midi notes in SuperCollider.

This is an example of an array containing a midi note scale starting at middle C:
[60,62,64,65,67,69,71]
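
As an aside of my own, the whole array can be converted to frequencies in Hz with .midicps:

[60,62,64,65,67,69,71].midicps; // the same scale as frequencies, starting at about 261.6 Hz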


(
var scale, current;

current=60;
scale= Array.fill(8, {var old; old=current; current=current+rrand(1,3); old});
scale.postln;

current=60;
scale= Array.fill(8, {var old; old=current; current=current+rrand(1,3); old});
scale.postln;

current=60;
scale= Array.fill(8, {var old; old=current; current=current+rrand(1,3); old});
scale.postln;
)

The code above was written by my tutor Nick Collins as an example of code that can be used for generating a random scale. The scale starts on middle C because the current variable is set to 60. An array is then created and filled with 8 notes; for each note in the array the function gets called, adding a random step of 1 to 3 semitones. The block has been copied twice more, which gives us 3 different scales and posts each one. Copying code like this takes a long time, so we were taught to put one of these blocks of code into a function:
(
var makescale;

makescale= {var current;
current=60;
Array.fill(8, {var old; old=current; current=current+rrand(1,3); old;});
};

3.do({makescale.value.postln;});
)

To call this function we use:
makescale.value

If we want more of them we can write:
10.do(makescale)

The number 10 can be replaced with however many times we want to call the function.

When using rrand we can change the numbers in brackets after to give us random numbers within a specific range, for example:

rrand(45,80)

Will output random numbers between 45 and 80.

From browsing around the internet I could see that arrays are a very valuable part of the SuperCollider language and are used often. I found that they were used to create drum machines and load samples, amongst many other things.

I found http://sc3howto.blogspot.com/2010/05/arrays.html, a blog written by Charles Céleste Hutchins, which gave a useful introduction to different ways to write arrays and their uses.

SynthDefs
SynthDef is short for Synthesiser Definition. SynthDefs are used to define networks of UGens. Many synths can be created from a single SynthDef.

Before using SynthDefs we have just been using .play:

{SinOsc.ar*0.1}.play

This is the basic construct of a SynthDef:

(
SynthDef("mysound", {

}).add
)
If we look at the SynthDef count of the localhost server we can see that it has gone up.

This is the basic construct of declaring a SynthDef that outputs a sound:

(
SynthDef("mysound",{Out.ar(0, SinOsc.ar*0.1)

}).add
)

The Synth can now be assigned to variables:
a = Synth("mysound")
b = Synth("mysound")
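
These synths keep sounding until they are freed:

a.free; // stop the first one
b.free; // stop the second one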

Here is an example (again written by my tutor) of creating a SynthDef using a sawtooth wave with frequency 440 Hz, shaped with an envelope whose doneAction then frees the synth when it has finished:
(
SynthDef("mysound2", {arg freq = 440; var env, saw;

saw = Saw.ar(freq);
env = EnvGen.ar(Env([0,1,0],
[0.01,0.4]), doneAction:2);

Out.ar(0,saw*env*0.1)
}).add
)

Synth("mysound2")

Synth("mysound2", [\freq, 1000])

When text is written with “” around it, it is a String; the syntax colour is grey:

"name"

When text is written with a \ in front of it, it is a Symbol; the syntax colour is green:
\name

As a String, "name" lets you access its characters individually. A Symbol is not an array of characters but a single globally defined value, which is cheaper to store than a String.

I attempted to change:
{Line.kr(1,0,1.0)*Blip.ar(550,10)*0.1}.play
into a SynthDef. This was my attempt, I'm not sure if it is right:

(
SynthDef("RosaSound", {arg freq = 550;

Out.ar(0,Line.kr(1,0,1.0)*Blip.ar(freq,10)*0.1)
}).add
)

Synth("RosaSound")

We then worked in pairs to answer some exercises that we were set:

1.Imagine you have to generate a rhythm for one 4/4 bar (i.e. 4 beats). Write a short program which selects random successive numbers from [1.0, 0.5, 0.25] to fill up one bar's worth of beats. How do you deal with going past the end of the bar? (hint: what does .choose do on an array?)

answer:
(
var beats=[1.0,0.5,0.25];
var count=0;
while({count<4},{
var beat=beats.choose;
min(4-count,beat).postln;
count=count+beat;

})

)

2.Rewrite the following code as a series of nested ifs

i.e. if(condition1, {}, {if (condition2, etc.)})

The code to rewrite:
(
var z;
z = 4.rand;
switch (z,
0, { \outcome1 },
1, { \outcome2 },
2, { \outcome3 },
3, { \outcome4 }
).postln;
)

Answer:
(
var z;
z = 4.rand;
if(z==0,{\outcome1},{
if(z==1,{\outcome2},{
if(z==2,{\outcome3},{
if(z==3,{\outcome4},{})
})
})
}).postln;
)


3.Now also rewrite it as a choice amongst elements of an array.
Answer (choosing directly from an array of the outcomes):

[\outcome1, \outcome2, \outcome3, \outcome4].choose

4. Compare each of these lines by running them one at a time:

2.rand

2.0.rand

2.rand2

2.0.rand2

rrand(2,4)

rrand(2.0,4.0)

exprand(1.0,10.0)


Write a program which plots ten outputs from any one of these lines in a row. Advanced: actually allow user selection (via a variable for instance) of which line gets used to generate the ten random numbers.
Answer:
(
f = {arg line;
switch (line,
1, { for(1,10,{2.rand.postln}) },
2, { for(1,10,{2.0.rand.postln}) },
3, { for(1,10,{2.rand2.postln}) },
4, { for(1,10,{2.0.rand2.postln}) },
5, { for(1,10,{rrand(2,4).postln}) },
6, { for(1,10,{rrand(2.0,4.0).postln}) },
7, { for(1,10,{exprand(1.0,10.0).postln}) }
)
};
f.value(3)
)

To be honest, I was lucky to be working with someone who turned out to be really good at programming, and I would have struggled to get through the problems alone. I found the exercise really useful as it allowed me to see the best way to logically tackle a problem: first setting out the structure of the code for the answer, then breaking the problem down into steps and finding the most logical and efficient way of coding it in SuperCollider, by seeing how someone else would work through it.

Envelopes and modulation synthesis

SuperCollider Blog 3


Envelopes


“Envelopes describe a single event (as opposed to periodically repeating events) that changes over time. Typically they are used to control amplitude (a VCA), but they can be applied to any aspect of sound” Computer Music with examples, David Michael Cottle.


The amplitude of a sound decreases over time but at the transient of a sound there is a certain degree of attack. Different instruments have different attacks and small variations in attack time can make a big difference. All preset sounds on synthesisers use envelopes to create a change in volume over time.


There are fixed duration envelopes and sustain envelopes. Sustain envelopes could represent the length of time a key is held down on a piano and can be represented using a gate in SuperCollider. Fixed envelopes can represent percussive instruments such as cymbals, where once they have been struck the sound simply rings out for a set time rather than depending on how long a key is held.


ADSR stands for attack, decay, sustain and release. These are terms often used when using envelopes. They are also arguments in SuperCollider.



Env([1,0,1],[1.0,0.5]).plot //This makes an Envelope with three control points, at y positions given by the first array, and separated in x by the values in the second (see the Env help file)





This is the scope of the envelope created from the code above from the course help files.


The arguments for Env are:

Env(levels, times). There is one less number in times than levels.


There are different types of envelopes that can be used, such as Env.linen, Env.adsr and Env.perc. They have different arguments that are suited to certain sounds.
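
For example (the argument values here are just ones I picked to look at):

Env.perc(0.01, 0.5).plot;           // fixed-duration percussive envelope: attack, release
Env.adsr(0.01, 0.3, 0.5, 1.0).plot; // sustaining envelope: attack, decay, sustain level, release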


They are called envelopes as some of the classical shapes that can be viewed using .scope look like the shape of an envelope.


To use envelopes for synthesis we need to use EnvGen.


“SC uses EnvGen and Env to create envelopes. Env describes the shape of the envelope and EnvGen generates that shape when supplied a trigger or gate.” Computer Music with examples, David Michael Cottle.


If we use an envelope and wrap it in EnvGen.ar we can make it run at audio rate. This one starts at 1 and goes to 0 over 1 second. If we run it we won't hear anything, but we can see it on the scope: it is too slow for human ears to hear, as our ears only pick up frequencies from roughly 20 Hz up to around 20 kHz. If we multiply it by a SinOsc we are able to hear it.


We therefore plug Env into EnvGen:


{EnvGen.ar(Env([1,0],[1.0]))}.scope


and then multiply with a SinOsc:


{EnvGen.ar(Env([1,0],[1.0]))*SinOsc.ar}.scope


A useful thing to note when working with envelopes is the doneAction argument. It stops the voice that we have finished with from continuing to run, and therefore uses up less CPU.


{Saw.ar(EnvGen.kr(Env([1000,100],[0.5]), doneAction:2), 0.1)}.plot



Modulation synthesis


“Modulation in signal processing refers to the control of an aspect of one signal by another” Introduction to Computer Music, Nick Collins.


The signal that controls is the modulator and the signal that is controlled is the carrier. Modulation is nonlinear, so the output can contain frequencies that were not present in the spectrum of either input. These new frequencies that did not appear in the inputs are called sidebands.

It is a good idea to explore modulation with two sinusoids rather than with complex sounds, though Fourier analysis can then be used to predict the effect of modulation on more complex sounds.


Ring modulation


Ring modulation is the result of simply multiplying the two signals:


carrier * modulator


though it could also be written modulator * carrier, it makes no difference.
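
In SuperCollider that is literally one oscillator multiplied by another; a quick sketch of my own:

{ SinOsc.ar(440) * SinOsc.ar(300) * 0.2 }.play // sidebands appear at 140 Hz and 740 Hz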


Both of the signals are bipolar: their amplitudes can take on both positive and negative values, and therefore oscillate above and below 0.


For complicated waves we get many more components; the output has a much fuller spectrum, because more sinusoids have effectively been multiplied together.


“If the carrier were a sum of three sinusoids, and the modulator a sum of five, ring modulation would create 15 multiplications and thus 30 output frequency components. This is a cheap way of getting more complicated spectrum out of simpler parts”. Introduction to Computer Music, Nick Collins.


Amplitude modulation


Using amplitude envelopes and tremolo are both examples of amplitude modulation.


Amplitude modulation is like ring modulation. The difference is that the modulator is unipolar, which means that it is always positive.


The carrier is usually bipolar and is therefore different to the unipolar modulator.
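
A minimal example of my own, where the mul and add arguments shift the modulator up so it is unipolar:

{ SinOsc.ar(440) * SinOsc.ar(4, 0, 0.5, 0.5) * 0.2 }.play // the 4 Hz modulator now swings between 0 and 1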


Stockhausen used ring modulation in many of his pieces, including Telemusik, Gesang der Jünglinge and Mixtur.

Ring modulation was also famously used by Brian Hodgson to create the voice of the Daleks in the TV series Doctor Who.


I found it quite difficult to understand what the sidebands were; I understood it a lot more after reading this:

“In amplitude modulation there are two sidebands; the sum and difference of the carrier frequency (the audio frequency that is being modulated) and the modulator frequency (the frequency that is controlling the audio frequency). A carrier frequency of 500 and a modulating frequency of 112 could result in two sidebands: 612 and 388. If there are overtones in one of the waves (e.g. a saw wave being controlled by a sine wave), then there will be sidebands for each overtone.” Computer Music with examples, David Michael Cottle.


FM Synthesis


FM synthesis is similar to ring and amplitude modulation, but in FM synthesis there can be many more sidebands. The number of sidebands depends on the modulation index, which is the ratio between the frequency deviation and the modulation frequency; this gives us an index that is independent of the modulation frequency. The higher the value of I (the modulation index), the richer the timbre.


In our class at uni we were given code for a GUI that had 3 sliders to change the carrier frequency, modulation frequency and modulation depth:


(
var w, carrfreqslider, modfreqslider, moddepthslider, synth;

w = Window("frequency modulation", Rect(100, 400, 400, 300));
w.view.decorator = FlowLayout(w.view.bounds);

synth = {arg carrfreq=440, modfreq=1, moddepth=0.01;
	SinOsc.ar(carrfreq + (moddepth*SinOsc.ar(modfreq)), 0, 0.25)
}.scope;

carrfreqslider = EZSlider(w, 300@50, "carrfreq", ControlSpec(20, 5000, 'exponential', 10, 440), {|ez| synth.set(\carrfreq, ez.value)});
w.view.decorator.nextLine;

modfreqslider = EZSlider(w, 300@50, "modfreq", ControlSpec(1, 5000, 'exponential', 1, 1), {|ez| synth.set(\modfreq, ez.value)});
w.view.decorator.nextLine;

moddepthslider = EZSlider(w, 300@50, "moddepth", ControlSpec(0.01, 5000, 'exponential', 0.01, 0.01), {|ez| synth.set(\moddepth, ez.value)});

w.front;
)



There are, in principle, an infinite number of sidebands in the spectrum, with varying strengths. With C, M and D we can make either very thick or very light spectra.


C is the carrier frequency

M is the modulation frequency (how quickly it is wobbling)

D is how far the frequency moves either side (the modulation depth, or frequency deviation)


Energy (the sidebands) turns up at the carrier frequency plus and minus whole-number multiples of M:


C, C+M, C-M, C+2M, C-2M (occurring symmetrically).


For musical purpose:


I = D/M


is a good way to control frequency modulation, using the modulation index. If I is small then there is little audible FM effect; the higher I is, the stronger the energy in the sidebands.
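
A bare-bones FM example of my own, with those three values made explicit:

{ SinOsc.ar(440 + (220 * SinOsc.ar(110)), 0, 0.2) }.play // C = 440, M = 110, D = 220, so I = D/M = 2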












Sound synthesis and Fourier analysis

Computer music blog 2:


This week I started to look at how SuperCollider can be used for sound synthesis. I began by booting the internal server. The internal server was used in order to create oscilloscope views of the synthesized sounds. I used online course tutorial material to get started.

I used:


FreqScope.new


This uses Lance Putnam’s frequency scope which is useful for visually plotting the spectrum of sounds explored.

I quickly recapped UGens and the fact that SuperCollider uses them as building blocks, connecting them together to create synthesizers and sound processors. UGens have inputs and outputs, though most UGens have just one output. After some practice I expect to get to know typical parameter values and inputs/outputs for different UGens.


I started to learn about subtractive synthesis. This is where you start with a complex sound and subtract parts from it in order to sculpt a different sound.


The course material gives pure white noise as a sound source to subtract from:



{WhiteNoise.ar(0.1)}.scope


and then plugged it into a filter to give a ‘less raw’ sound:


{LPF.ar(WhiteNoise.ar(0.1),1000)}.scope


The LPF cuts out energy above its cutoff frequency, which is currently set to 1000 Hz.


To plug the WhiteNoise UGen into the LPF I need to nest one into the other. A UGen's inputs can be thought of as the list inside its parentheses.


LPF.ar(in, freq, ...) // first the input signal, then the cutoff frequency

If you are unsure about what the inputs are, double click on the name of the UGen and press cmd+D, which will bring up a help file showing you.


In our previous example we plugged the white noise generator into the low pass filter. This is therefore the input signal and must be the first thing contained inside the brackets. 1000 is the next argument, the cutoff frequency.


I then (still using course material) looked at how to vary the cutoff filter over time. This can be done by using a UGen called a line generator.


Line.kr(10000,1000,10) // take ten seconds to go from 10000 to 1000


Instead of using the previous fixed value of 1000 the Line UGen can be plugged into the place of the second argument in the parentheses:


{LPF.ar(WhiteNoise.ar(0.1),Line.kr(10000,1000,10))}.scope



I tried adjusting the code slightly using a few of the example sources and filters.


I used the Resonz filter rather than LPF:


{Resonz.ar(WhiteNoise.ar(0.1),Line.kr(10000,1000,10))}.scope


The result sounded to me to be less noisy and cut out some of the high and low frequencies.


I tried using a different noise source in place of WhiteNoise. I used

PinkNoise to see what difference that would make:


{LPF.ar(PinkNoise.ar(0.1),Line.kr(10000,1000,10))}.scope


This gave a much less harsh sound than WhiteNoise and also sounded quieter. I looked at the help file to try and find out more about it and found that it:


“Generates noise whose spectrum falls off in power by 3 dB per octave.

This gives equal power over the span of each octave.

This version gives 8 octaves of pink noise.”


It also had 2 arguments, mul and add.


I then joined them together and used Resonz as the filter and PinkNoise as the sound source. Together they created a much more tamed sound than the original sound. It was slightly flat sounding at first but as the frequencies changed over the 10 seconds it gave a sound that reminded me of the wash of the sea over a shore heard from a distance.


I was then taught about variables. Values are assigned to variables using the = operator.


For example:

a = 1


then 1 is stored in a.


Variables can be useful in many ways. One way that they are useful is as a syntactical short cut when using them to create the size of an array.


(1..10) gives us an array running from 1 to 10. We could use any numbers, and this can save a lot of time: for example, if we wanted a larger array such as (1..100), we would not need to type it all out.



Letters a-z can be used as variable names, but it is best to avoid the letter s. By default s is set to contain the object representing the server (the synthesiser), so overwriting it can cause problems.
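
You can see this for yourself:

s.postln; // by default s already holds the server
s.boot;   // so this boots it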

Another danger with these global single-letter variables is that one may have already been set somewhere in another file, so your code won't work as expected. If instead you declare a variable using


var n


this makes it a local variable instead of a global one. The name can be anything; I could change it to my name, rosa, and it would still work.



Sawtooth waves:


Sawtooth waves are much richer than sine waves; they have a bright sound compared to the dullness of SinOsc.


{Saw.ar(440)*0.1}.play


To make a sum of SinOscs sound like a sawtooth, each harmonic needs to be scaled by the reciprocal of its harmonic number: 1, 1/2, 1/3, 1/4 and so on (see the sketch below).
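
A rough sketch of my own of that recipe, summing the first ten harmonics of 440 Hz, each scaled by 1/n:

{ Mix.fill(10, { arg i; var n = i + 1; SinOsc.ar(440 * n, 0, 1/n) }) * 0.1 }.play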


One of the differences will be CPU cost. On the server you can see that a bank of SinOscs costs more than a single Saw; the sawtooth UGen is quicker because no separate sine waves have to be added up.


If you wanted to create a sawtooth and needed to know which SinOscs to add up you would need to use Fourier analysis.


The frequency scope shows us a Fourier analysis.


If we took the sawtooth wave and looked at it using the frequency analyser (a little window with green lines that shows the frequencies) we would see that the harmonics are evenly spaced, and that their strength falls away along a roughly straight line; where a peak sits on that line relates to its harmonic number.

The reason Fourier analysis works is that you can align it with the period of the waveform: if the frequency is 440 Hz, take a snapshot of one period and do Fourier analysis on that period. Think of it like finding a SinOsc that fits that period.


Say the root fundamental is 100 Hz. One period is then a 100th of a second, since 100 Hz fits 100 times into a second.


This fits exactly into the analysis window. If the snapshot is compared with a 100 Hz sine wave, then a 200 Hz sine wave, then 300 Hz and so on, each fits an exact number of times into the window. For a sawtooth the signal correlates with each of these in turn, with the strengths falling off in a diagonal-ish line. Breaking complex waveforms up into sines in this way is Fourier analysis.


Some of the oscillators you can get hold of in SC are pre-packed complex recipes. There is already a sawtooth UGen, so you don't need to worry about making one. If you wanted to make your own you would make a wavetable.


A wavetable is one period of a waveform drawn out. It is like sampling, but with only a single period rather than long sound files. If a SinOsc plays 5 cycles, SC can store the shape of a single sine cycle and keep repeating it.


Fourier analysis

I decided to do some more reading up on Fourier analysis in order to get a better understanding of it.


“The Fourier transform decomposes a signal onto a basis of sinusoids. Given the frequency for a sinusoid, the analysis compares the input signal with both a pure sine and a pure cosine wave of this frequency. This determines an amplitude and phase that indicates how well the input matches the basis element”. Nick Collins, Introduction to Computer Music.


The first line of that quote explains the basic point of Fourier analysis. I read some more to see how this happens.


DFT stands for Discrete Fourier Transform. Sounds vary in their state over time, they can constantly change frequency rather than staying stationary. In order to analyse this changing signal and break it into sinusoids we must take a series of snapshots of the signal. Snapshots are better known as 'windows'. Windows are a number of samples long and each window is treated with a new DFT. Once in a snapshot, the signal is considered to be stationary. Windows can overlap one another but they are usually spaced evenly in time and are the same size.


FFT stands for Fast Fourier Transform. It is an algorithm that speeds up the DFT. Each FFT gives an FFT frame of spectral data.
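
SuperCollider makes these frames available to UGens through FFT and IFFT. This pass-through sketch of my own does no processing in between, but shows the shape of it:

(
{
	var in, chain;
	in = WhiteNoise.ar(0.1);
	chain = FFT(LocalBuf(1024), in); // analyse the signal in 1024-sample windows
	IFFT(chain);                     // resynthesise it unchanged
}.play
)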


A sequence of DFTs is carried out, meaning that every sample appears in a window. This is called the STFT (Short Term Fourier Transform). To do this a basic analysis frequency must be set, using the fundamental frequency corresponding to the period of the waveform. A problem occurs when we do not already know what this fundamental frequency is. Other problems arise if the signal contains a mixture of different periodic sounds or if the sound is inharmonic.


It is possible to try to analyse non-periodic sounds using Fourier analysis. A large period, which corresponds to a small fundamental frequency, can be used; you have to hope the large period is longer than the periods of the component frequencies of the sound you are measuring. The Fourier analysis measures the energy at multiples of the fundamental frequency, so if we have a low enough fundamental frequency then we are able to get practical use out of the harmonic multiples.


There can also be problems with parts of the frequency 'falling between the gaps' when analysing. The signal could be distributed through the analysis harmonics to give us an indirect perception of the spectrum of the sound.


“Consider a sampling rate R of 44 100 Hz, and a segment of duration one second, which as a period corresponds to a frequency of 1 Hz. If this were the basis for Fourier analysis, we would measure at frequency multiples of 1 Hz, so at 1 Hz, 2 Hz, 3 Hz, … all the way up to … the Nyquist frequency of 22 050. Each harmonic of 1 Hz is called a frequency bin or band of the transform” Nick Collins, Introduction to Computer Music.


This helped me to understand the frequency analysis we did in class when looking at curves on the frequency analyser, and how Fourier analysis is actually used to analyse periods of a wave.


I also found that there are different types of window that can be used with Fourier analysis, often named after their creators, such as Hann, Hamming or Kaiser Bessel. They are used for cutting the signal into segments. Using different windows affects how well the peak locations in the spectrum are picked out, as well as the amount of spillage between spectral bins. The most popular windows are the Hann and Kaiser Bessel windows.











Friday, 21 January 2011

Week 1

Super Collider blog- week 1

This week we were introduced to SuperCollider. According to the course notes, SuperCollider is an 'interpreted programming language for audio synthesis’. On opening SuperCollider I was presented with a post window that already had some code in it. There were also two GUI windows, one representing the localhost server and one representing the internal server. On starting a new edit window I was presented with a blank slate. This could be considered slightly intimidating, as you are not able to move sliders or interact with graphics on the screen straight away to make sound; instead the user needs to use a programming language that tells the machine to create and shape sounds. We were told that although it would take practice and perseverance, SuperCollider is a very powerful tool for creating sound and it is nice to have a lot of control over the sounds you create.

In our first session our aim was simply to make the computer go "beep" by creating a sine tone and running it. We learnt that to run a line we need the cursor to be on the line of code we want to run. Shift and enter held down together runs the line of code on a Mac (which I was using) and cmd + period stops it. Before we could run any sounds we needed to turn on the localhost server by clicking 'boot’. We then made the computer talk by running: "I am SuperCollider 3".speak and then changing the words inside the speech marks. We then ran the example code:

{Pan2.ar(SinOsc.ar(440,0,0.1),0.0)}.play

which plays a concert A tone. We had a go at changing the 440 (hz) to different frequencies. Each person in the group played their tone at the same time and we discussed how this was an example of additive synthesis which we will be learning more about in coming weeks.

We also played around with this example code:

{Pan2.ar(SinOsc.ar(MouseX.kr(440,880),0,0.1),0.0)}.play

This causes the tone to change when the mouse is moved which gives an initial demonstration of human interaction with Super Collider and how this can be used to change the sounds.

In our second session we started by running the code:

{SinOsc.ar(440)*0.5}.play

This runs a concert A tone. We noticed that the sound only plays through the left channel and is therefore monophonic. We then wrapped it in a Pan UGen (I’ll write more about UGens later) using:

{Pan2.ar(SinOsc.ar(440)*0.5)}.play

This caused the concert A tone to be in stereo and in the centre.

We then used:

{SinOsc.ar(MouseX.kr(440, 880))*0.1}.play

This meant that the mouse could be used to change the frequency of the note from between 440 hz and 880 hz.

I then changed the capital X in MouseX to a Y. This changed so that the mouse then used the Y axis to control the frequency of the tone. This meant that instead of moving the mouse left to right to change the tone (X axis) the tone was changed by moving the mouse up and down.

The *0.1 part of the code represents the volume of the tone. We then worked on adapting the code so that when using the mouse as the controller the Y axis changes the frequency of the tone and the X axis changes the volume:

{SinOsc.ar(MouseY.kr(10,10000))*MouseX.kr(0.0,1.0)}.play

We then talked about what .ar and .kr actually mean. Our tutor Nick got us to see how many times we could tap the table with our hand in a second. This was to give us an indication of the haptic rate, of how fast we as humans can do actions. If a machine were tracking human actions it would be wasteful to use too high a sample rate.

Kr stands for Kontrol rate (spelt with a K because of Max Mathews' music systems, which used Kr). Ar stands for Audio rate.

The standard sample rate is 44100 Hz. Kr is 1/64 of 44100, which is about 689 Hz. The sample rate you use needs to be at least twice the highest frequency you want to represent, so 689 Hz can represent frequencies up to roughly 344 Hz. When tapping the table we found it was only possible to do a certain amount of tapping (about 50 Hz at most), so it is better to use the lower sample rate (Kr) for control signals. As this is more efficient for the machine, many different Kontrol rate signals can be run on the machine at the same time.
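
You can check these numbers in SC itself (my own aside):

s.sampleRate; // usually 44100.0 once the server has booted
44100 / 64;   // -> 689.0625, the control rate in Hz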

We had a look at multi line programs; this is an example we were given:

(
{var n;
n = 34;
Resonz.ar(
	Mix.arFill(n, {
		var freq, numcps;
		freq = rrand(50, 560.3);
		numcps = rrand(2, 20);
		Pan2.ar(Gendy1.ar(6.rand, 6.rand, 1.0.rand, 1.0.rand, freq, freq, 1.0.rand, 1.0.rand, numcps, SinOsc.kr(exprand(0.02, 0.2), 0, numcps/2, numcps/2), 0.5/(n.sqrt)), 1.0.rand2)
	}),
	MouseX.kr(100, 2000), MouseY.kr(0.01, 1.0));
}.play
)

We found that if you try running this code with the cursor on just one line then SC will post an error message. This is because the computer thinks you are running only that one line of code, which without the other lines would be incomplete. Everything inside the main brackets needs to be highlighted in order for it to run; this can be done by clicking just inside the first bracket, which highlights the block of code for you. This also makes it easier to see how the code is nested together.

We then started learning more about UGens. On the localhost server GUI it shows you the number of UGens that are being used in a synth. UGens are sound synthesis building blocks that are ‘plugged’ or ‘patched’ together. SinOsc, Pan2, MouseX and MouseY are all examples of UGens. They all start with capital letters. There are hundreds of UGens in the SuperCollider system.

The paradigm of patching lots of building blocks together comes up in lots of software such as Audiomulch, MaxMSP and PD extended to name a few. In SuperCollider they are nested into one another.

In our example earlier:

{SinOsc.ar(MouseY.kr(10,10000))*MouseX.kr(0.0,1.0)}.play

UGens act as arguments to SinOsc.ar. To view the available arguments of a UGen, double click on the name of the UGen and press cmd+D.

We discussed why SC uses two servers. When you write code such as instructions for a synthesizer, the localhost server is on the same machine as the one you are running SC from and can be seen in the activity monitor as scsynth. If you used the internal server you would be running the synth inside the text editor, so it wouldn’t be a separate application. We don’t often use the internal server because if we crashed the synth we would also crash the text editor. This is quite hard to do, but it is generally safer to use the localhost server. If you are using the localhost server then it is also possible to run code on servers on other machines in the room, or even a machine on the other side of the world; one machine could be used to control all the other synths in the room.

In order to get more familiar with SC I then looked at some example SuperCollider code, tried adjusting it, and looked at the UGens used and how the code was nested together. I started by looking at an example called ‘babbling brook’ by James McCartney:

(

{

({RHPF.ar(OnePole.ar(BrownNoise.ar, 0.99), LPF.ar(BrownNoise.ar, 14)

* 400 + 500, 0.03, 0.003)}!2)

+ ({RHPF.ar(OnePole.ar(BrownNoise.ar, 0.99), LPF.ar(BrownNoise.ar, 20)

* 800 + 1000, 0.03, 0.005)}!2)

* 4

}.play

)

I used cmd+D to see what arguments the RHPF UGen has. I saw that these are (in, freq, rq, mul, add). I asked my tutor about the nesting of the expressions and he said that it was maybe not the best example for nested expressions at this stage, because there are a lot of things nested into each other, so it is not obvious straight away what is going on; though part of what makes the code interesting is trying to work out how he plugged the different UGens together and structured the code.

I played around with tweaking some of the numbers, such as the coefficient of the OnePole filter applied to the BrownNoise (OnePole.ar(BrownNoise.ar, 0.99)), changing 0.99 to 0.10. This caused the babbling brook sound to get louder and much more distorted.

I then tried to get familiar with which parts of the code controlled different aspects of the sound. For example, I found that when I changed the 400 and 500 values in the

* 400 + 500, 0.03, 0.003)}!2) part of the code to 800 + 900, the sound got higher, and when I changed it to 100 + 200 the sound got much lower. These numbers must therefore affect the frequency of the sound.