Tuesday, August 24, 2010

Five things...

... that I have learnt from my children.

1. ‘What’s the worst thing that can happen…?’
Well, now that I know what the worst thing that can happen is, nothing else seems to matter. What other people think.  Public humiliation.  Career Interruptus.  Professional Rejection. Poverty. Middle Age. Once I had children, a whole new world of terrible possibility opened up – that one thing that no mother ever wants to even contemplate.  So really, nothing else comes close.  As long as my babies are safe and healthy, all the rest is incidental. 

2. Love

3. There is no place for cynicism.
How can one continue to be cynical in the presence of a child experiencing something for the first time? Life is beautiful and amazing and constantly surprising.

4. Shed some skin.
Having a child is like having a gaping wound or losing one’s epidermis. As well as a terrifying new vulnerability (see 1) and an increased inclination to cry, there is the openness that shedding a layer of armour brings. I can now talk quite happily to total strangers – people in supermarket queues, at railway stations, just crossing at the lights – something my previous self was not inclined to do.

5. Life is full of new beginnings.
Rather than see the ends of things, I am now more able to see the beginnings.  A different approach to the next piece of work, a new phase in my life, a fresh way of looking at something, a radical change in direction.  All possible. 

Sunday, August 1, 2010

Why I listen to music

fractal image by Peter Raedschelder

“Is music written by a computer still music? Can it make us feel?” That’s what caught my eye this morning as I had a quick flick through the Sunday papers. For a start, it’s not often that a whole page of the weekend news is given over to a discussion of contemporary music aesthetics. The article in question, by Tim Adams, originally appeared in The Observer and discusses the work of American composer David Cope, who for the last 30 years has been exploring the possibilities of ‘computer-aided composition’. The original publication opens with a quote from David Cope – ‘You pushed the button and out came hundreds and thousands of sonatas’ – an opening almost guaranteed to raise the hackles of classical music lovers and musicians.

Tim Adams’ article is not about how David Cope programs his computer to write all his music so he can have more time to do whatever composers wish they were doing when not composing. David Cope uses a computer as a tool to generate musical material that he, the composer, can then use, or not. Plenty of composers use technology for various aspects of the compositional process. Some use computers to generate material that they then translate into sounds and textures. Some toss coins to decide which notes to use. Some leave many of the ‘compositional’ decisions to the performers who play their music. I myself have been known to use games with numbers to work out the order of notes. Each to their own. It is all part of the bag of tricks at our disposal, and as with any ‘tool’, it is what you do with it that counts. David Cope’s work seems to sit at the blurry edge between computer programming and composition, but I don’t have a problem with that either. He is very open about what he does and how his music is created – it is not as though his computer is whipping up “hundreds and thousands of sonatas” that he is then passing off as his own.
It is an interesting field of study and no one is seriously suggesting that composers are going to be out of a job because of it (assuming there are jobs for composers in the first place…).
This article and the ideas it raises are interesting to me on a number of levels. I can remember discussing the role of Artificial Intelligence in music with some AI postgraduates, trying to understand why getting a computer to write music is anything more than a high-tech party trick – this coming from the postgraduate composer with her own ideas about aesthetic value. The answer I got back was quite enlightening (and I paraphrase): in finding out how to make a computer compose music, what you are in fact finding out is how people compose music. The project of AI in music is not about creating computers to write music as a substitute for people-written music; it is about understanding human compositional techniques – how humans think and create. So, that was me told, and I will never make disparaging comments about AI music nerds again.
So I asked myself, in response to the questions posed and implicit in the article about Cope’s work – what is it that interests me about someone else’s music? Is it an analytical appreciation of the way they have generated and constructed a piece of music, the systems they have used and the rigour with which these systems are applied? No. Is it an appreciation of the agility with which they have mastered various musical styles or compositional techniques? No.
What interests me in listening to someone else’s music is what the composer had to say. I listen to the way they chose to arrange sounds through time to evoke a particular idea or concept. I listen to music because I am interested in how some other person, who has lived and experienced and felt happiness and sorrow and many things in between, has attempted to express something of themselves in sound. I don’t listen critically, trying to find holes in their technique or inconsistencies in their musical language. What I do actively listen for are the bits of grit or glitches or quirks that throw things off centre, the imperfections, the unpredictable moments of something strange and unimagined. I am listening for the traces of the person who wrote the music, because the fact that a person composed a piece of music is what interests me.