Funny how deflated I feel after sending a newsletter – like, OK back to work now. Another week in the bag. Writing this is quite useful to me as I get to pause and reflect, review what I’m doing and progress I’ve made, rather than just being in it. But sometimes the work ahead is daunting. Looking at my song list wondering where the hell to start. My songs get done bit by bit, over time. Having a lot of songs means I can usually find one that suits my current mood.
At the moment my focus is getting all the songs tracked – that is, with a full, 3-minute-ish musical track. I usually find a lot of the work is in the first verse and chorus; once I have those, the track is mostly written. In modern song arrangements there are three choruses, so a typical song structure for me is:
intro (optional), verse, chorus, verse, chorus, bridge, chorus.
The chorus is largely repeated so once I’ve got one, I’ve basically got three. Another song part is the pre-chorus (lift or rise) which leads from the verse to the chorus, preparing the listener for what is coming.
The bridge is musically different to the other pieces, providing a break and contrast for the listener before the final chorus. Not every song has all these components – the song usually decides what it needs but that’s the way I approach the track: verse, pre-chorus, chorus, and bridge.
Once the structure is laid out I try to add variation to keep interest. So I’ll add something to verse 2 (new percussion, a different beat or drumming pattern), or take something away – just to keep it fresh and new while the vocal melody stays basically the same. What I’m trying to do is add enough to stay interesting, but not so much that the song becomes hard to follow. A blend of newness and familiarity.
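Since I think of the arrangement as a template that gets expanded, here’s a toy sketch in Python (purely my illustration – not a tool I actually use): one written chorus gets reused three times, and verse 2 picks up a variation layer.

```python
# A toy model of the arrangement approach described above: expand a
# section list into concrete parts, numbering repeats and tagging any
# per-section variations (e.g. new percussion on verse 2).

def build_arrangement(structure, variations=None):
    """Expand a section list into (part name, extras) pairs."""
    variations = variations or {}
    parts = []
    counts = {}
    for section in structure:
        counts[section] = counts.get(section, 0) + 1
        name = f"{section} {counts[section]}"
        parts.append((name, variations.get(name, [])))
    return parts

structure = ["intro", "verse", "chorus", "verse", "chorus", "bridge", "chorus"]
# Add something new to verse 2, as in the text: fresh percussion.
song = build_arrangement(structure, {"verse 2": ["new percussion"]})
```

One written chorus becomes “chorus 1”, “chorus 2”, and “chorus 3” – got one, got three.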
Sounds so simple in theory, doesn’t it? Of course, the trick is applying it. As with anything – the more I do the more natural it becomes. But don’t listen to me – I might be doing everything wrong!
I’ve been thinking about this for a while so I thought I’d commit my thoughts to writing. I’m not much of a musician. I can play chords on a guitar, very rough lead (slow and fat fingers), and find chords on a piano. And that’s about it. I can’t play drums or bass, and my music theory is all self-taught, through trial and error.
But I love creating music. To me it’s like colouring in: coming up with a rough idea in black and white and then adding colour, texture, and detail, hearing something come to life. Hearing an idea realized.
Before digital instruments I was restricted to what guitar I could play, and my songwriting reflected that. I’ve got plenty of songs I could not translate to music – I could hear them in my head but had no way of realizing that sound and feeling, because I wasn’t a good enough musician, and didn’t have access to a band or musicians who could interpret and bring it to life. Even if I could play what I heard, as an indie artist I don’t have the equipment to record it. My studio is a bedroom and a Mac. I don’t have amps, leads, mixing desks, high-quality microphones, etc.
Technology makes music available to the musical, not just the musician. With virtual instruments we can create a realistic simulation of a real player, and create music that would otherwise be literally impossible. Take this song as an example: Never Been Away.
I got this one on guitar: Never Been Away (original).
The timing is all out. There’s a nice riff in the intro which I wanted to keep, but on the equipment I have (a cheap guitar and a condenser mic) getting a good take would’ve been near impossible. Even if I’d got the guitar down, then what? Drums, bass, lead? How to bring the song to life?
I hired Andrew Timothy, a guitarist on AirGigs (a freelance service for musicians), to record the acoustic for me, straightening out the timing and transitions. Andrew did a fantastic job but there was a lot more needed to fill out the song.
I used virtual instruments – software that emulates an instrument, driven by digital signals, often input from a keyboard. So, playing a drum part from a keyboard, for example. It’s also possible to “program” the instrument using MIDI (digital) commands laid out like a piano roll. I added drums using UJam’s Virtual Drummer, bass using UJam’s Virtual Bassist, an extra acoustic guitar using Native Instruments’ Session Guitarist, and lead guitar from Impact Soundworks’ Shreddage – all virtual instruments.
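To make “programming” a bit more concrete, here’s a tiny pure-Python sketch (my illustration – no real MIDI library, though the idea is the same) of a drum part laid out as a piano roll of timed note events, which is roughly what drives instruments like Virtual Drummer:

```python
# A toy piano roll: each beat maps to the drum notes that fire on it,
# and we flatten that into (start_tick, note) events, the way a MIDI
# sequence drives a virtual instrument.
KICK, SNARE, HAT = 36, 38, 42   # General MIDI drum note numbers

def bar(pattern, ticks_per_beat=480):
    """Turn a beat -> notes mapping into (start_tick, note) events."""
    events = []
    for beat, notes in sorted(pattern.items()):
        for note in notes:
            events.append((beat * ticks_per_beat, note))
    return events

# One bar of a basic rock beat: kick on 1 and 3, snare on 2 and 4,
# eighth-note hi-hats throughout.
groove = {0: [KICK, HAT], 0.5: [HAT], 1: [SNARE, HAT], 1.5: [HAT],
          2: [KICK, HAT], 2.5: [HAT], 3: [SNARE, HAT], 3.5: [HAT]}
events = bar(groove)
```

In a DAW the same data is drawn with a mouse or played in from a keyboard; the instrument just renders each note event with a sampled or modelled sound.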
Here it is (in draft): Never Been Away
BTW, the person I have in mind for this song is a mate of mine, Tim, who has been a great supporter of my music and a good friend for many years. The stories in the song are based on some of our adventures.
It’s no exaggeration to say I could never have produced this song without the digital tools. I don’t have the skill, the instruments, or the equipment. I’m a writer not an accomplished musician. But I know that I love the sound, feeling the song come alive.
Which leads to my existential question: what is “real” music? Does it have to be played by a real, live, sweaty human to be “real”? I still struggle with this, even though I have no choice. Is music created with technology artificial, or does it enable and empower people like me to express themselves? I suppose we could say the same of the guitar – a piece of technology that allowed us to move from our voices to more complex music. I didn’t invent the guitar, just as I didn’t invent digital music technology. It’s a tool to use, not a replacement for humanity. A painter still needs paints, canvases, easels, etc. that they probably don’t make themselves.
So, perhaps self-servingly, I argue that digitally created music is just as valid as human-played music. I still had to imagine the lead, bass, and drums and realize them with the tools I have available. If I could afford to just hand over my rough draft to accomplished musicians then I probably would. It would save me time, money, and work, and humans add unexpected depth and flourishes to music that a dumb machine will probably never get. I hired Andrew, a human, for the main acoustic track primarily because I don’t know how to replicate it with a virtual instrument. Then I built the rest of the song around it with virtual instruments.
Taking it to an extreme: what about a song written by a computer – the whole thing, chords, track, etc.? Not unfeasible. As this clever video sardonically points out, most music is built around 4 chords: 4 Chords | Music Videos | The Axis Of Awesome. Those chords (in Nashville notation) are 1, 4, 5, and 6 – in a major key, the three major chords plus the relative minor. It’s a great video, but a little disingenuous: after all, what other chords is a musician to use? And it can’t be said that after production all these songs sound the same – they’ve been reduced here to a single beat and pattern, but the arrangement and instrumentation of the actual songs is such that without analysing them it would be impossible to know they are built on the Big 4 chords. It’s a clever video, but misses the point of music. Who cares what the chords are?
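For the curious, the Big 4 can even be derived mechanically – a small Python sketch (my own illustration of standard theory, not anything from the video) that builds a major scale and stacks triads on degrees 1, 4, 5, and 6:

```python
# Build a major scale from its whole/half-step pattern, then take the
# triad (root, third, fifth within the scale) on degrees 1, 4, 5, 6.
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]   # whole/half steps of a major scale
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_scale(root):
    idx = NOTES.index(root)
    scale = []
    for step in [0] + MAJOR_STEPS[:-1]:
        idx = (idx + step) % 12
        scale.append(NOTES[idx])
    return scale

def triad(scale, degree):
    """Chord on a scale degree (1-based): every other scale note."""
    return [scale[(degree - 1 + i) % 7] for i in (0, 2, 4)]

scale = major_scale("C")
big4 = {d: triad(scale, d) for d in (1, 4, 5, 6)}
# big4[1] -> ["C", "E", "G"]; big4[6] -> ["A", "C", "E"], the relative minor
```

In C that gives C, F, G, and Am – the progression the video milks for all it’s worth.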
So I think a computer could easily produce a listenable track: generate the chord pattern, structure, and program the instruments. And this could be enough. Music, after all, is primarily about projection – how the listener reacts to the song. What the writer says, particularly in the verses, is entirely secondary to how the listener absorbs it, identifies with it, personalizes it. The chorus is the key part: if it strikes a chord (pun intended) with the listener they will like the song, even if they know or care nothing about the back story of the song – the song becomes theirs, not the writer’s. I love that. For example, “my favourite song is…”. The song belongs to the listener, not the writer. The writer is a midwife, delivering something others will care for.
But that’s just the music. I’ve yet to see a computer capable of delivering vocal melody, lyrics, and texture. Perhaps they never will, because that seems to me a sense peculiar to humans – literally our own language. The music and the lyrics are a complex dance, feeding off each other, or set in opposition to heighten dramatic effect (a happy-sad song). Imagination is a human super power.
So, in my view, technology is a wonderful enabler for musical people. It is very good, but not as good as a real human musician. It has a steep learning curve and many limitations but can free people from physical limitations. My Dad, Keith, was taught piano at school. A “tutor” sat next to him and whenever he hit a bum note would rap him over the knuckles with a ruler. He learnt to play to a very high level (despite having outrageously small hands) but once he left school never touched a piano again. Perhaps a gifted musical spirit lost to a brutal and painstaking system of learning a (single) physical instrument. Technology has, to some extent, freed us from that. But I doubt it will ever replace human-ness. Virtual instruments get better and better at sounding “real” and implementing the technical elements of physical playing, but they will never have a heart. An equivalent, not a substitute.
I saw the documovie Merchants of Doubt about the conspiracy by Big Oil to spread climate change denial. It reminded me of the song ‘Didn’t Know Better’ and I listened to it and decided to remix it. I like the album Ecocide but wish I knew then about mixing and production what I know now – what I learned from doing the album! Maybe one day I’ll remix and re-release all the songs, and maybe the album.
Finally, here’s the song I’ve been working on actively. The first pass of the lyrics is almost finished and in the description on SoundCloud:
Not sure about the title but quite like the concept. It’s an escape song – get away from the lights and endless day of the city and live a more independent, self-sufficient life. It’s always been a dream of mine and with climate change coming I figure it’s a good time to start planning to avoid the inevitable disruption and panic.