― Oliver Sacks, Musicophilia
Recently I’ve had several dreams involving largely automatic musical composition. Typically I’ll come across an unknown character playing new music – there may be familiar fragments of melody, but the overall compositions seem unique. This phenomenon isn’t uncommon among musicians and non-musicians alike, but it does raise some very interesting questions about how our dreaming minds model frequency interactions – the emergent properties that give all chords their colour and tension.
I can guide the direction of the music to some degree, although the orchestration of all lines apart from the main melody seems to just happen, without any meaningful conscious intention. The complexity and quality of the harmony often far outstrips what I can manage while awake – I’ve dreamed a quartet of jazz horns playing melodies with suspended chord counterpoint, and ghostly piano-like instruments modulating through extensions far beyond what my (pretty average) waking harmonic ear is familiar with.
One striking example involved walking into an unfamiliar room decorated with ornamental Arabesque writing, and seeing a veiled woman facing the far wall with her back to me, playing some imagined harp-like instrument. In sound and appearance it was somewhere between an Indian santoor and a Malian kora, but mounted on the wall with long strings facing vertically.
The restlessly floating music sounded a little like this, but with a slower ebb and flow. It was distinctly microtonal – I have some familiarity with microtones through studying Hindustani sruti and listening to North African music, but am certainly no expert and would struggle to compose microtonal music while awake. It was a surprise to realise I had intuitively improvised with it in polyphony while asleep. I think we all already understand a lot more about music’s fundamentals than we might think.
One of the most distinctive elements of all these musical dreams (microtonal or otherwise) is the rich layer of harmonic frequency interaction present in the sound. This might sound like a technical term, but is essentially just describing the ‘character’ of a chord – i.e. the various harmonic tensions that create its distinct colours and moods. Any combination of notes will give rise to some form of these interactions, and so they are present in virtually all music you ever hear.
In the waking world, whenever two or more frequencies are played together, vibrating through the same air space, they will interact. This gives rise to elements which would not be present if you were to hear the same pitches in isolation – these elements are therefore emergent properties. Imagine wearing stereo headphones, and comparing two recordings of the exact same two-note interval – one where the notes have been recorded separately and hard-panned left and right, and another where they were recorded in the same acoustic space and panned together. Only the latter would properly reveal the interval’s distinct mood.
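This interaction can be shown numerically as well (a sketch of my own, not a model of any particular recording): summing two close sine tones produces a slow amplitude ‘beating’ that is present in neither tone alone – a simple emergent property of the combination.

```python
import numpy as np

def envelope_depth(freqs, duration=1.0, rate=44100):
    """Peak-to-trough variation of the short-time amplitude of summed sines."""
    t = np.arange(int(duration * rate)) / rate
    signal = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    # Track the amplitude envelope via RMS over 10 ms windows
    win = int(0.01 * rate)
    n = len(signal) // win
    rms = np.sqrt(np.mean(signal[:n * win].reshape(n, win) ** 2, axis=1))
    return rms.max() - rms.min()

# A single 200 Hz tone has an essentially steady amplitude...
solo = envelope_depth([200.0])
# ...while adding a tone at 205 Hz produces a pronounced 5 Hz 'beating'
# envelope that exists in neither tone on its own.
pair = envelope_depth([200.0, 205.0])
print(solo, pair)
```

The envelope of the pair swings between near-silence and double amplitude, while the solo tone barely varies at all – the restlessness really does belong to the combination, not to either part.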
These layers of frequency interaction are found in all polyphonic live sound (i.e. almost everything we hear), but are demonstrated particularly clearly by choral music. Listen to the restless and textured complexity of the sounds in Whitacre’s Sleep, particularly at the end of these two lines (chords held on ‘soon’ and ‘bed’). And see the playlist at the bottom for more particularly vivid examples.
This restless harmonic frequency interaction is central to how we experience pretty much all music, but dreaming frames it in a very different light. When we dream of music, there is no ‘real’ sound happening, i.e. no actual air vibrations to interact and give rise to these emergent properties. So – if we perceive this restless frequency interaction in dreams, then our minds must be constructing it.
So my central question is this:
- When we dream of music, and our dreams include this restless harmonic interaction, then to what extent is this phenomenon being simulated accurately – and why do we do it in the first place?
In other words, if we perceive that emergent frequency interaction is present in a dream, then are we somehow modelling the physical interactions of air vibration in a way which is close to waking life? Or is our mind recognising that there should be some form of frequency interaction, and adding a restless quality to the sound which doesn’t reflect the interactions that would arise if the ‘same’ harmonies were played in real air? Is this action part of a larger phenomenon? And perhaps more importantly – why do our minds bother to do this at all? I’ll be fascinated to hear your ideas.
Initial Ideas and Sub-Questions:
- When can we be confident in distinguishing between ‘accurate’ and ‘inaccurate’ frequency interaction in dreams?
- What other situations involve complex mental modelling of the emergent properties of physical phenomena (dreamed or otherwise)?
- How can we help to create situations where examining frequency interaction is easier? For example could we use lucid dreams to conjure up extremely familiar musical actions, such as tuning a guitar using natural harmonics, or playing a C major triad on a piano?
- As a simpler case, would the dreamed vibration of a slack guitar string change as it is tuned upwards?
- How could we build up from these simpler cases to learn about more complex phenomena? For example by playing a single interval on a piano, and building up to more complex and restless chords one note at a time (e.g. 1st, 3rd, 5th, b7th, 9th, 13th, or even the harmonic series itself, etc).
- How consistent is the waking perception of frequency interaction from person to person anyway (i.e. what is the mind adding and constructing even when we are awake)?
- What existing research might there be that is relevant to this question (musicology, psychoacoustics, neuroscience, philosophy of perception) – and am I even using the right terminology to approach it?
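The chord-building sub-question above can at least be made concrete. Here is a small sketch (my own illustrative note choices, equal temperament, A4 = 440 Hz) of the frequencies involved in stacking a chord one note at a time over C, alongside the harmonic series on a low C:

```python
# Frequencies for building up a chord one note at a time
# (1st, 3rd, 5th, b7th, 9th, 13th over C). Equal temperament, A4 = 440 Hz.

A4 = 440.0

def note_freq(semitones_from_a4):
    """Equal-tempered frequency a given number of semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Semitone offsets from A4 for C4, E4, G4, Bb4, D5, A5
chord_tones = {'C4 (1)': -9, 'E4 (3)': -5, 'G4 (5)': -2,
               'Bb4 (b7)': 1, 'D5 (9)': 5, 'A5 (13)': 12}

for name, offset in chord_tones.items():
    print(f'{name}: {note_freq(offset):.2f} Hz')

# Compare with the harmonic series on C2 (~65.41 Hz): the natural
# overtones already sketch out similar chord qualities for free.
C2 = note_freq(-33)
harmonics = [C2 * n for n in (1, 2, 3, 4, 5, 6, 7, 9)]
```

Each added note multiplies the number of pairwise interactions in play, which is part of why the later chords feel so much more restless than a bare interval.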
Further Explorations (Aug 2016):
I posted the above in January. Some very interesting ideas have been coming in since then – I’ve spoken to musicians, scientists, and friends, and have further explored things myself. Here is a somewhat structured wander through the most interesting and most promising ideas (for a more structured wander, see the Article Map).
Oliver Sacks: Psychopharmacology, Dream Composition, & Hypnopompic States
Musicophilia, Oliver Sacks’ extraordinary book on the neurology of musical perception, devotes a chapter to dreaming. It includes personal recollection of unusually vivid music induced by dreaming while under the influence of the sedative chloral hydrate, which sometimes continued as quasi-hallucinatory states after waking (‘on one such occasion, I dreamed of the Mozart horn quintet, and this continued, delightfully, when I got up’). The difficulty of comparing dreaming and waking states is a clear stumbling block for our investigation – perhaps the unusually strong perceptual bridge provided by hypnotics such as chloral hydrate could help.
Worlds of further speculation exist around the effect of drugs on creativity and dreaming, and we should not fall too far down this rabbit hole here. One particular account does however deserve mention. A Bristol musician, while on LSD, described ‘experiencing’ the colour purple while looking at a patterned orange sari, even though he could not pinpoint any actual instances of purple in his vision. Sober eyes confirmed that the sari was in no way purple, but the fact that some non-literal property of colour can cast its light into conscious experience hints at some powerful associative machinery at work in the mind. The necessity of describing this phenomenon in vague terms also highlights something we already know – our terminology is not fit to describe what lies behind some of the less typical doors of perception.
Elsewhere in Musicophilia, Sacks describes a spell of several weeks he spent recovering from surgery in a tiny windowless room with no radio signal, with the only available music being a single cassette of the Mendelssohn Violin Concerto. Repeated listening led to the music being heard in the ‘hypnopompic’ states following waking – without the cassette player being on, but vividly enough for him to reach over to try to turn it off. This lends weight to the idea that ‘supersaturating’ the mind with particular musical phrases could embed them to the point where emergent property differences could be more easily compared.
Several of Sacks’ correspondents describe dream composition. Stan Gould experiences this while under the influence of gabapentin (‘loud, highly dramatic symphonic music…the music is in me’), and the librettist Melanie Challenger sometimes hears ‘very loud, very vivid orchestral music’ just after waking from her afternoon siesta. The intensity of these experiences is of particular interest – Melanie describes ‘individual instruments and their combinations…a richness and a realness that she does not have with her normal musical imagery’. This suggests that the dreaming mind can enhance harmonic colour rather than just recreating it.
In fact, this alteration and construction is happening all the time, whether we are asleep or not. We experience sound rather than just air vibration – our minds are already constructing the phenomenological essence of the experience itself. We do not each hear things just the same way. As Sacks notes, every act of perception is to some degree an act of creation.
Also mentioned in Musicophilia are cases of waking hallucinatory composition in languages not spoken by the perceiver (‘he heard someone singing songs in Spanish for a couple of weeks, no one else heard this. He did not speak Spanish, [but] we live in a heavily Hispanic neighbourhood…’). Though dreamed examples are not mentioned, accounts such as this hint at our mind’s deep intuitive knowledge of sonic syntax and structure – even when the languages of these structures are not consciously understood, and even if the constructive processes cannot be unravelled.
Pioneering composers such as Ravel, Stravinsky, and Aphex Twin have claimed inspiration from dreams – in fact Brahms described a recurring symphonic dream vivid enough to involve elements of matching physical simulation (‘I seemed to see it written…’). 18th century violin virtuoso Giuseppe Tartini reminds us that the old blues masters were not the first musicians to claim the sale of their soul to Satan, saying his Devil’s Trill Sonata originated from meeting Lucifer in his dreams:
“I dreamed I had made a pact with the devil for my soul. Everything went as I wished: my new servant anticipated my every desire. Among other things, I gave him my violin to see if he could play. How great was my astonishment on hearing a sonata so wonderful and so beautiful, played with such great art and intelligence, as I had never even conceived in my boldest flights of fantasy. I felt enraptured, transported, enchanted: my breath failed me, and I awoke. I immediately grasped my violin in order to retain, in part at least, the impression of my dream. In vain! The music which I at this time composed is indeed the best that I ever wrote, and I still call it the ‘Devil’s Trill’, but the difference between it and that which so moved me is so great that I would have destroyed my instrument and have said farewell to music forever if it had been possible for me to live without the enjoyment it affords me.”
Open data consultant, programmer, and golden age hip-hop expert Rory Scott points out that deep sub-conscious awareness of complex systems and their outcomes has been widely noted in other fields. Inventor Nikola Tesla spoke of ‘feeling’ the solutions to intricate problems of motor engineering well in advance of being able to clearly articulate them:
“I know I have really solved the problem…The actual solution is in my mind subconsciously, though it may be a long time before I am aware of it consciously…[then] without ever having drawn a sketch, I can give the measurements of all parts to workmen, and when completed these parts will fit, just as certainly as though I had made accurate drawings. It is immaterial to me whether I run my machine in my mind or test it in my shop. The inventions I have conceived in this way have always worked. In thirty years there has not been a single exception.”
Although Tesla’s intuitive engineering is of a different type, it again illustrates that we should not underestimate the capacity of the unconscious to model highly complex systems of physical interaction – musical or otherwise.
More on the Fundamentals of Dreaming:
My friends have provided many other fascinating musings. Composer, sound artist, and professor Joseph Hyde wonders whether music in dreams in fact resembles some sort of ‘inner stage’ – akin to removing the final layers of usual perceptual construction, and getting closer to the fundamentals of how the brain reacts to sound. This seems intuitively plausible, although working out how necessarily constructive phenomena such as dreamed frequency interaction fit into this picture is complicated, to say the least.
Audio engineer, software developer, and musical man of mystery Mark Claydon also posits possible links to fundamental brain processes:
‘A neuron may be modelled as an oscillator…If a note in a dream has some correspondence to a physical oscillation in your brain, then physical/acoustic-seeming effects from their interaction might be expected’.
Mark also has a related and intriguing idea around why we might dream of microtonal music, flipping the question of how unfamiliar tuning systems arise on its head: ‘If you posit harmony as an emergent phenomenon from interacting oscillations, then maybe microtonal harmony could emerge in your dreaming mind, just as it has in everyday waking life? Maybe this relieves the pressure of trying to trace its provenance to the waking world when it turns up in a dream’.
This is enticing, but if taken as an assumption then the next logical step is to again invert the question – if the tonality of dreams is strongly influenced by neuron oscillation rates, then why do the vast majority of our dreams not have this microtonality? Also – the point may rest on the assumption that neuronal oscillations could enter our perception despite bypassing our usual hearing channels. Then again – I simply do not know enough neuroscience here.
Professor Irving J. Massey (also mentioned in Musicophilia) notes that dreamed music differs less from its waking equivalent than is the case for many other phenomena, citing the examples of conversation, character, and visual consistency. In fact, he goes so far as to say that ‘music is the only faculty not altered by the dream environment’. Although I think this overstates things, the thrust of it seems accurate – our dreaming minds are particularly good at recreating music.
The fact that rapid shifts and fragmentations are far more common in our visual than in our aural lives may help to explain why this is the case (along with music’s fundamental reliance on patterned repetition). Hearing is relatively continuous compared to other senses – it is difficult to truly block out a sustained sound, whereas completely removing an object from view is as easy as closing your eyes or looking the other way. Turning your head towards a clock face twice in quick succession while dreaming often reveals inconsistent times (this has been useful for me as a ‘soft’ way to induce lucid dream awareness), but reports of similar jumps in dreamed music appear to be much rarer – dreamed music seems to be a more consistent phenomenon.
Analysing which waking phenomena are under- and over-represented in dreams offers valuable insight into their fundamental nature. Academic research here notes that activities such as reading, writing, and numerical calculation rarely feature in dreams, which is put down to their status as more effortfully learned skills. This also appears to hold true for playing musical instruments, but crucially not for spontaneously perceiving novel music – hinting that this process primarily arises from deep intuitive understanding of sonic structure rather than from recall of consciously directed processes. Speaking of music as a language makes sense here – we dream of talking more than writing, and composing more than performing.
Another key difference between waking and dreaming needs mentioning: the influence of perceiving physical frequency rather than just audible sound. It is hard to overstate the importance of this – the physical dimension of music has allowed Evelyn Glennie to become one of the classical world’s finest solo percussionists despite being functionally deaf (in the ordinary sense of the word at least). She points out that for all of us, ‘hearing is a form of touch. You feel it through your body, and sometimes it almost hits your face’. I’m sure many of us will never forget our first experience of standing right beside a big club sound system with real bass. Then again, I’ve never dreamed of a heavy jungle drop (which isn’t surprising – they’re not really designed to sleep to).
Systematised Evidence – Florence Sleep Lab Study:
A major stumbling block for our investigation is a lack of organised evidence – it is harder to notice regularities in the absence of consistently presented data. One of the few resources here is a 2005 study by the University of Florence’s Sleep Lab, examining the structured dream logs of 35 people, with analysis split between musicians and non-musician groups.
There is no explicit mention of the emergent frequency interaction we have identified, but the paper brings plenty of other insights. As well as noting that music in dreams has rarely been studied in scientific literature, they add weight to the over-representation point raised above, noting that even non-musicians do dream of music far more than of reading or writing. They also confirm beyond all doubt that automatic dream composition is a regular phenomenon for many people:
“In the musicians group, 135 out of the total 244 musical dreams (55%) were factual reproductions of known musical pieces, 41 (17%) were unusual versions of known pieces and 68 (28%) were reported to contain unknown musical pieces”
Intriguingly, they found no significant relationship between the rate of dream music recall and the average amount of waking time spent listening to music. And no significant link was found between dream music recall and hours of daily instrumental practice, or years of musical training. These disconnections are surprising, and have implications for how we may go about investigating the links between waking and dreamed music. The lack of a strong bond between daily listening habits and dreams suggests that some of our quasi-experimental ideas (such as ‘supersaturating’ the waking mind with music to influence our dreams) may not reveal so much.
The paper did however find one significant relationship. The earlier the age at which a subject started musical training, the more common it was to dream of music – even when controlling for other factors such as total years of training. At first glance it seems curious that this variable has more of an influence than daily activities like playing or listening to music. But perhaps it isn’t so surprising. We have already started to form a picture of musical dreams as a phenomenon based on deep intuitive processes, which due to their fundamental nature are learned early in life.
Their findings are in line with other psychoacoustic research in suggesting that our ability to intuitively break down complex music is not reliant on musical training, but instead develops spontaneously. They speak of ‘independent cognitive processes committed to analyse each dimension of music’, but also note the challenges inherent in unpicking such a complex part of our mind’s operation. Overall, then, their work brings plenty of insights – but again, more questions are raised than answered (for starters – how can we ever find out how the process of keeping a dream log affects things?).
La Monte Young – Drones, Colours, & Emergent Frequency Architecture
Minimalism pioneer La Monte Young has spent a lifetime striving to understand the fundamentals of sound, and to inspire reimagination of what we mean by musical performance. It’s no surprise that his work is relevant to the questions posed here.
Play his experimental Drift Study 1969 through stereo speakers (headphones won’t work properly), and see how even the slightest movement of your head can radically alter the shapes of the cascading frequency interactions. Close your eyes, and Drift Study’s tight synaesthetic interweaving of vibration and spatial awareness may even tempt intuitive insights into Nagel’s question of what it is like to be a bat.
A similar focus on the textures of a single interval is also found in the tanpura drones of Indian and Pakistani music – a profound influence on Young’s ambient and microtonal leanings. Precise refinement regarding the tunings and overtones of sustained notes is central to the tradition of the Kirana gharana in particular, a culture Young studied with Pandit Pran Nath. Listen to the rising tides of sub-bass in this drone (a Young/Nath collaboration). There are no instruments other than the acoustic tanpuras – the low notes we hear are an emergent phenomenon.
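The emergent sub-bass here can be illustrated with simple arithmetic. When two loud tones interact (in the air, in the ear, or both), a combination tone near the *difference* of their frequencies can be perceived. The pitches below are my own illustrative choices, not taken from any particular tanpura recording:

```python
# Difference tones: an emergent pitch below either sounding string.
# Sa at 220 Hz is my own example value; Pa is a just-intonation fifth above.

sa = 220.0           # tonic (Sa), an octave below A4
pa = sa * 3 / 2      # perfect fifth above (Pa), frequency ratio 3:2
difference_tone = pa - sa
print(difference_tone)  # 110.0 Hz -- an octave *below* either string
```

For a just fifth the difference tone lands exactly an octave below the tonic, which is one plausible route to the ‘rising tides of sub-bass’ despite no instrument playing those low notes.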
In fact, dreaming itself is a recurring theme in Young’s works, with its ultimate manifestation being the immersive sound and light experience of his Dream House installation in NYC:
“In each corner is a tall white speaker…as intimidating in the bare space as Stanley Kubrick’s monolith. These monoliths are vibrating with the 32 frequencies of Young’s composition, and though the music itself stays constant no matter how long is spent inside the House, the sound’s relationship to its listeners can change drastically with the slightest movements.”
Young’s Dream House is frequency architecture in a literal sense, and the 32 tones in play must give rise to some of the most complex interactive properties anywhere in music (who can imagine what harmonies would be conjured up by sleeping there?).
La Monte Young’s wife and long-time collaborator, Marian Zazeela, uses similar ideas of unexpected emergent interaction in visual art. Microtonality guru Kyle Gann describes her Ruine Window piece:
“Since she’s working with colored shadows instead of colored surfaces, and light behaves differently from pigment, the colors combine opposite to the way we expect (you only learn light-color theory in art school, Zazeela says, if you go into television.) Stand in front of Ruine Window 1992 for a while, and let your eyes move up and down the verticals: not only will the colors take on a deep intensity, creating an illusion of two-dimensionality, but the edges will flicker in your peripheral vision…Both the sound and light sculptures are static entities that move wildly within your eyes and ears.”
(There are plenty of other ways to experience the hidden shapes present in sound and light too. Again we could get very distracted here, but you may want to have a look at how Chladni plates visually reveal the ordered physical beauty of harmonic vibration.)
Hindustani drones, La Monte Young, and Marian Zazeela have taught us much about emergent properties – most importantly that remarkably detailed interactive layers can arise from just two tones. This is the case for Young’s Drift Studies (two sine waves) and the Indian tanpura (two scale notes).
The fact that such high levels of complexity can emerge from such basic elements hints at the sheer scale of challenge faced by the dreaming mind, especially when it comes to modelling more polyphonic music. However, considering the interaction of just two tones points to a particularly promising line of enquiry for our central question – why is the mind trying to do this at all? The existence of binaural beats has fascinated physicists, acousticians, and hippies alike, and may be the clearest alternate case of emergent frequency interaction being created by the brain rather than air vibrations in the outside world.
Binaural beats are a curious phenomenon, created by playing a subtly different pure tone into each ear through headphones. The interference between the slightly differing frequencies creates the perception of a ‘beating’ at a frequency equal to the difference between the two input tones. Have a listen (with headphones):
Quantum researcher, bassist, and local Tyskie fan Jeremy Adcock explains in more detail: ‘If tones at 205Hz and 200Hz are played simultaneously, then you can perceive a ‘beat’ at a rate equal to the difference between them (i.e. 5Hz). This is way below the 20Hz low threshold of what we can hear as pitched sound’ (for reference the lowest key on a grand piano is 27.5Hz).
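If you’d like to generate your own test signal, here is a minimal sketch (tone frequencies and level are my own choices) that writes a stereo WAV file with one pure tone per channel – crucially, no beat exists in either channel alone:

```python
import numpy as np
import wave

def binaural_beat(f_left=200.0, f_right=205.0, duration=5.0, rate=44100):
    """Stereo signal with a pure tone in each ear, differing slightly in
    frequency. Over headphones the brain constructs a 'beat' at the
    difference frequency (here 5 Hz); neither channel contains it."""
    t = np.arange(int(duration * rate)) / rate
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return np.stack([left, right], axis=1)

def write_wav(path, stereo, rate=44100):
    """Write the float signal as 16-bit stereo PCM at half amplitude."""
    pcm = (stereo * 32767 * 0.5).astype(np.int16)
    with wave.open(path, 'wb') as f:
        f.setnchannels(2)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(pcm.tobytes())

# write_wav('binaural.wav', binaural_beat())  # listen with headphones
```

Played over speakers instead, the same file produces an ordinary acoustic beat in the air – a neat way to compare the mentally constructed and physically real versions of the ‘same’ interaction.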
In my original post, I noted the following: ‘Imagine wearing stereo headphones, and comparing two recordings of the exact same two-note interval – one where the notes have been recorded separately and hard-panned left and right, and one where they were recorded in the same acoustic space and panned together’.
While still true that the two cases would sound different, binaural beats show that there is another dimension to how they can differ. Sometimes it is not interacting air vibrations that provide the emergent layer, but the mind itself. Binaural beats do not interact by vibrating through the same air space – the two tones are isolated from each other by the fact we are wearing headphones. So the interaction must occur somewhere in our own aural processing. In some sense binaural beats are the inverse of La Monte Young’s drone studies – their interactive properties only appear properly when played through headphones, whereas experiencing the cascading architecture of Young’s drones requires external speakers.
The basic concept behind binaural beats is relatively simple – i.e. that playing slightly different tones into each ear can cause the emergence of strange properties. They have certainly captured many imaginations – although admittedly the jury may still be out on whether they can open your chakras, cause hands-free orgasms, recreate the feeling of smoking Orange Kush, or even make you rich.
Anyway, these aren’t the interesting questions – the fact they exist at all is curious to say the least. It’s unclear what adaptive purpose they could serve (what aspect of navigating the external world could they possibly be useful for?), and we didn’t even notice they existed until a couple of centuries ago. We have many other facets that are not adaptively useful – they may not serve a direct purpose, but are instead byproducts of various other mechanisms.
To work out what is at play here, we should examine the physical fundamentals. The word binaural refers to the fact we process sound through two ears – binaural hearing is the equivalent of binocular vision. Having two ears may be in part caused by embryological constraints (it seems easier for broadly symmetric bodies to grow), but it’s adaptively useful too. Firstly, it’s less of a disaster to lose an ear when you still have one left, but having two distinct inputs brings subtler advantages as well. We register the tiny differences in arrival times and intensities for sound at each ear – these interaural time differences usually amount to only a fraction of a millisecond, but provide us with a huge amount of extra information regarding where things are. Neuroscientists Schnupp and Carr explain:
“Binaural hearing greatly improves our ability to determine the direction of a sound source. Without binaural cues, we must rely solely on monaural ‘spectral cues’ provided by the directional filtering of sounds by our outer ears to judge the direction of a sound source. Relying only on spectral cues results in much-reduced localization ability, whereas combining spectral and binaural cues results in remarkably accurate sound localization.”
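The ‘fraction of a millisecond’ figure is easy to sanity-check. Here is a rough sketch using the simplified spherical-head (Woodworth) model – the head radius and speed of sound are standard textbook values, not measurements of anyone in particular:

```python
import math

# Rough interaural time difference (ITD): the extra path length sound
# travels to reach the far ear, for a source at a given azimuth.
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
HEAD_RADIUS = 0.0875     # m, typical adult

def itd_seconds(azimuth_deg):
    """Woodworth approximation: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta))

# A source 90 degrees to one side gives the maximum delay:
print(f'{itd_seconds(90) * 1000:.3f} ms')  # roughly 0.66 ms
```

Two-thirds of a millisecond is all the brain gets to work with for left-right localisation, which makes the precision of our binaural processing architecture all the more impressive.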
Dual-input hearing (and the mental processing it undergoes) helps us to understand our physical environment – where we are makes a huge difference to how we hear. Moving your head around in La Monte Young’s drone architecture from earlier has shown us this already, and studio musicians know how much more vivid it is to record instruments in double-microphone stereo.
(As an aside – ears do not even function symmetrically in the first place. Perceptual psychologists Tommasi and Marzoli note that the right ear shows a dominance when listening to spoken language, and that perceiving music is a speciality of the left. Their three studies in this area include innovative experimental methods (‘the researchers approached 160 clubbers and mumbled an inaudible, meaningless utterance and waited for the subjects to turn their head and offer either their left or their right ear. They then asked them for a cigarette…’). This asymmetry is corroborated by studies of newborns, and even other species – a general preference for hearing with the right ear is found in dogs, harpy eagles, sea lions, and Japanese macaques.)
But we may be getting a little sidetracked – what do binaural beats teach us about why the mind may be simulating dreamed frequency interaction? I think the mechanisms for processing binaural hearing may point to a particularly promising line of enquiry.
Binaural beats are often considered to be an odd offshoot of neural activity in the brain’s central auditory pathways – part of our processing architecture for binaural hearing. As we have seen, our brains analyse the dual nature of the input information, and use it to help us understand our spatial environment. So perhaps it makes sense to see other mentally constructed phenomena involving the perception of sound as arising from similar mechanisms.
The act of listening to binaural beats through headphones can be seen as a particularly odd input – certainly not one our minds will have evolved to understand. Our neural pathways are ‘confused’ by the strange binaural signal, but still produce information from it, resulting in the illusory beating effect. This new information is an emergent frequency interaction, and, as with dream harmony, one that is constructed by the mind rather than provided readymade by the outside world.
Rory points out that optical illusions are a far more familiar instance of the same essential concept. Odd inputs to our visual processing system lead to the perception of illusory phenomena – again, emergent properties constructed entirely by the mind. MIT’s Professor Edward Adelson explains why we do not see A and B as being the same colour in the checker shadow illusion:
“As with many so-called illusions, this effect really demonstrates the success rather than the failure of the visual system. The visual system is not very good at being a physical light meter, but that is not its purpose. The important task is to break the image information down into meaningful components, and thereby perceive the nature of the objects in view.”
The adaptive processes for visual illusions seem much easier to unpick, but they are still firmly in the class of odd emergent mental phenomena created as our perceptual apparatus is confused by ‘odd’ inputs. Both headphoned binaural beats and structured polyphonic music are certainly very odd inputs – they have not been around for the vast majority of human evolution, and our processing architecture struggles to easily comprehend them.
They do however differ in one key respect. Headphones are a direct and real-time input, whereas the inputs for dreamed music are both a patchwork of ideas from our intuitive musical memory, and incorporations of real-time stimuli from the waking world (for example sound from speakers left on, our pulse, or the rhythm of our breathing). Despite this difference, I think the core of this ‘offshooting’ idea is the most promising insight yet.
If binaural beats (and some optical illusions) are ‘odd offshoots’ of the brain’s spatial location architecture, then perhaps dreamed harmonic interactions are a product of similar mechanisms.
Binaural beats and dream harmony are both clear cases of mentally constructed frequency interaction. If the architecture for processing binaural hearing is behind one case, then we should expect it to play a strong role in the other. I do not imagine the mechanisms behind each pinpoint precisely the same place, but they must still be intricately interwoven. The imprints left by our waking spatial location architecture are central to how we perceive all sound.
Seeing things this way casts mentally modelled frequency interaction (dreamed or otherwise) as one aspect of a complex relational property, linking space and sound in ways that are useful to us when understanding our environment. Both binaural beats and dream harmony could well be strange offshoots of this relationship. This is far from certain, but may be our best guess.
If this is the case, then we would expect dream harmony to respond to perceived physical movements in a dream. If we shift position in relation to a dreamed sound source, then this should meaningfully alter interactive layers in the overall sound texture. It’s unclear how realistic this relationship might be compared to its waking equivalent, but if the phenomena are closely linked then we should at least expect sight and sound to noticeably co-vary. This has the added benefit of being to some degree testable via lucid dreaming (e.g. move around and see how this changes what you hear). And if this is not the case, then further questions are raised in many directions.
The wider idea of seeing music as a highly atypical input is also important. Harmonic music is the most highly structured and intricately patterned sound we will hear anywhere – I think this ordered oddness is the core of why we find it so beautiful and interesting.
Conclusion – What Do We Know?
So, how close are we to answering our central question: how and why do our dreaming minds conjure up emergent frequency interaction? In short – not very. The mechanisms underlying binaural beats may suggest a reason for the phenomenon’s existence, but even they are not understood with much clarity.
All in all, the insights above shine only limited light onto questions of profound difficulty. Many other tantalising issues have not been considered (perhaps certain facets of dreamed experience cannot by their nature be written to accessible memory, leaving whole realms of our lives forever hidden from our waking selves? And how do cultural factors come into play?). There is a lot we cannot know, and our limited terminology hinders the clear expression of what we can.
We should also remember that developing technology has the potential to upend our understanding of questions such as these, and even to fundamentally change how we perceive sound itself. Researchers Schnupp and Carr glimpse into the future:
“In an age where many personal stereo systems already pack powerful microprocessors, future cochlear implant processors and hearing aids could become more sophisticated and incorporate various spatial filtering and preprocessing techniques, not necessarily modeled on designs normally found in mammals. Future designs could incorporate pressure gradient receivers, as used by some insects… Perhaps bionic ears of the future will interface to elaborate cocktail-party hats that sport as many miniature microphones as there are guests at the party.”
The future may provide us with more answers, but the fundamental disconnect between dreaming and waking states means that true clarity will always remain elusive. Besides, the extraordinary variety of perceptual experience – human or otherwise – means that even definite answers for one case could not be extrapolated to infer much about others.
It is clear, and probably has been from the outset, that this is an ultimately irresolvable question – but keep sending the ideas in. Just as with experiencing music itself, the fun is in the process of finding out, rather than in the silence after the final bar.
Dedicated to Oliver Sacks – whose unashamed embrace of the profound and expansive spirit of enquiry have led us to learn more about who we are.
“There are moments, and it is only a matter of five or six seconds, when you feel the presence of the eternal harmony…a terrible thing is the frightful clearness with which it manifests itself and the rapture with which it fills you. If this state were to last more than five seconds, the soul could not endure it and would have to disappear. During these five seconds I live a whole human existence, and for that I would give my whole life and not think that I was paying too dearly”
More things:
Playlist: Emergent Frequency Interaction
Harmonically colourful things to listen to: 9-track YouTube playlist here.
Sleep – Eric Whitacre
- Whitacre’s choral compositions are renowned for their captivating cluster chords, and Sleep seems like a fitting title to start with. Lux Aurumque, A Boy And A Girl, and Water Night use similar clustering ideas at differing levels of density, with harmony often being diatonic but essentially unrooted (‘pandiatonic’).
- Whitacre has also sought to recreate the experience of a rainstorm, and wondered what it might be like when Leonardo Dreams Of His Flying Machine. And he also has an 8,000 strong Virtual Choir.
Matamani – Djeli Moussa Diawara/Kora Jazz Trio
- All notes played on harp-like instruments such as the Malian kora will continue to resonate unless they are deliberately dampened (as if you always had the sustain pedal down on a piano). The freely floating harmonic colour is therefore a ‘moving average’ of all recently played notes, as opposed to the more tightly delineated approach of typical jazz.
Avaz-E Dashti – Silk Road Ensemble
- This is a traditional Iranian melody, written in Dashti harmony – similar to the Western natural minor scale but with some microtonal alterations (e.g. the 2nd tone of the scale is lowered to create an interval between major and minor). The original dream that sparked all this had a similar microtonal feel to this piece, but with a calmer ebb and flow.
Glynnaestra – Grumbling Fur
- “Grumbling Fur make me want to take drugs. And I don’t mean drugs like a few puffs on a spliff before bedtime or on a lazy Saturday afternoon, or a cheeky dabble at a rave to keep the energy flowing – I mean proper, don’t-eat-for-18-hours-beforehand, make-sure-you’ve-got-a-couple-of-good-people-around-you, psychically prepared voyaging, preferably on a warm and sunny but slightly overcast afternoon in a field somewhere in the West Country, or in a friend’s house cluttered to the rafters with fascinating and peculiar objects.” (Rory Gibb, The Quietus)
Eyes Above – Flying Lotus
- The synth tone’s relative purity emphasises the restless tension that can be created from clustered harmony. FlyLo’s influences include Dilla, Adult Swim cartoons, and the Coltrane family (of which he is a part).
J.J.D. – Fela Kuti
- Few bands have ever possessed the sheer power of Fela’s, and Brian Eno has described Tony Allen as ‘perhaps the greatest drummer who has ever lived’. Afrobeat is all about the rhythms, but Fela also knows how to layer up a few baritone saxes to create unusually thick low harmonies.
Weathered Stone – Aphex Twin
- Richard D James is one of electronic music’s most idiosyncratic pioneers. Both his Selected Ambient Works albums are frequently regarded as works of genius, and he claims lucid dreams as the source for 80% of the tracks from Vol. II. He explained, ‘I go to sleep, dream I’m in my studio with imaginary bits of gear and do a track. Then I wake myself up and recreate it. I can do this in about 20 minutes’ (what I would give to get him involved in this exploration…).
- Again, frequency interaction clarity is aided by the purity of synth tones, which also form the centre of We Are The Music Makers. He uses subtle microtonality in many pieces, including for the electronically-prepared piano of Jynweythek Ylow.
Rãag Gaoti (Alap) – Gundecha Brothers
- The dhrupad tradition of India stretches back hundreds of years, and invites you to focus on the textures and fine tunings created by long sustained notes and heavy drones. The Gundecha brothers studied under the Dagar brothers, part of a musical lineage said to stretch back 20 generations.
The Well-Tuned Piano (Excerpt) – La Monte Young
- Young spent decades designing and refining his own tuning system, and building a piano that could realise it – including by adding most of an extra octave to the bottom end. The tuning incorporates elements of Gregorian Chant, Hindustani music, Indonesian Gamelan, Pythagoras-derived harmony, and more. His work seems to occupy a separate listening space to anything else, perhaps because he is improvising indirectly with emergent frequency interactions rather than with notes in the usual sense. It is as if he is devising a pattern for throwing stones into water, with the aim of creating the most interesting ripple patterns as the waves cross paths.
- Also try out the free-flowing and extraordinarily strange saxophone playing of his Bb Dorian Blues, and experience the tight synaesthetic interweaving of vibration and spatial awareness in his Drift Study 1969 from above.
Further ideas, and some ways to investigate them
Amateur lucid dream experiments have been used to investigate questions such as whether dreaming can induce temporary perfect pitch. For this investigation, we could try things like:
- Seeing how frequency interactions change when you ‘physically’ move around in a dream. As noted above, if they are an offshoot of the brain’s spatial location architecture then we would expect sight and sound to co-vary in some way – although analysing the nature of how this relationship may differ from its waking equivalent is challenging to say the least.
- ‘Supersaturating’ the brain with particular pieces of music before sleep. This may cause them to appear more often in dreams, again helping us gain insight into the nature of how the waking and dreamed pieces differ.
- Considering how emergent properties arise from the elements which underlie them involves explicitly thinking of phenomena as being arranged in levels. So – we could try layering them up again, by combining two distinct emergent properties to form a meta-emergent layer. What would being around La Monte Young’s drone architecture sound like to someone who is already listening to binaural beats through (non-silencing) headphones? What would their dreams be like? (or more to the point could anyone actually fall asleep in this odd situation?)
- How can we involve other altered states of consciousness in our general process of inquiry?
Some gonzo science here would at minimum be fascinating.
References, citations, and other internet things
- Analysis of EEG activity in response to binaural beats with different frequencies (2014) – Gao et al, International Journal of Psychophysiology 94(3), p.399-406.
- Cortical evoked potentials to an auditory illusion: Binaural beats (2009) – Pratt et al, Clinical Neurophysiology 120(8), p.1514–1524.
- Detection Thresholds for Amplitude Modulations of Tones in Budgerigars, Rabbits, and Humans (2013) – Carney LH et al, in Basic Aspects of Hearing (ed. Moore BCJ), p.391-399
- The Effect of Binaural Beats on Working Memory Capacity (2015) – Kraus J & Porubanová M, Studia Psychologica 57(2), p.135-145.
- Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing (2014) – Ross et al, Journal of Neurophysiology 112(8), p.1871-1884.
- Music in Dreams (2006) – Uga V et al, Consciousness and Cognition, 15, p.351-357.
- The Musical Dream Revisited: Music and Language in Dreams (2006) – Massey IJ, Psychology of Aesthetics, Creativity, and the Arts S(1), p.42-50.
- Musicophilia (2007) – Oliver Sacks (in particular p.303-311)
- Neuromagnetic responses to binaural beat in human cerebral cortex (2006) – Karino S, Yumoto M, Itoh K, Uno A, Yamakawa K, Sekimoto S, Kaga K, Journal of Neurophysiology 96(4), p.1927-38.
- On Hearing With More Than One Ear: Lessons from Evolution (2009) – Schnupp JWH & Carr CE, Nat Neuroscience 12(6) p.692–697.
- On the Frequency Limits of Binaural Beats (1950) – Licklider JCR, Webster JC, & Hedlun JM, J. Acoust. Soc. Am. 22(468).
- Pathways: The Dangers of eDosing With Binaural Beats (2012) – Musiek F, Atcherson S, Kennett S, Warren S, Nicholson N, Hearing Journal 65(10), p.9-10.
- The Reinterpretation of Dreams: An Evolutionary Hypothesis of the Function of Dreaming (2000) – Revonsuo A, Behavioral & Brain Sciences 23, p.793–1121.
- Side Biases in Humans (Homo Sapiens): Three Ecological Studies on Hemispheric Asymmetries (2009). Marzoli D & Tommasi L, Naturwissenschaften 10.1007
- What is it Like to be a Bat? (1974) – Nagel T, The Philosophical Review 83(4), p.435-50.
- Are Optical Illusions Cultural? – Colin Schultz, Smithsonian
- Binaural Beats: Digital Drugs – Brian Dunning feature, Skeptoid
- Checkershadow Description – Professor Edward Adelson, MIT
- Dream House – Ed Howard review, Mela Foundation
- Dreaming of Wu-Tang loops and spitting bars – Reddit user SeQuenceSix
- Emergent Properties – Stanford Encyclopedia of Philosophy
- I Dreamed a Symphony – Stuart J Sharp, Guardian Experiences
- Left And Right Ears Not Created Equal As Newborns Process Sound – UCLA/ScienceDaily
- Making Your Imagination Work for You (1921) – Tesla interview, The American Magazine
- Most People Prefer Right Ear for Listening – Robin Lloyd, LiveScience
- Music + Math: Chladni Plates – demonstration by the Santa Fe Institute
- Original background music in a lucid dream – Reddit user just_a_bucket
- Passive Recovery of Sound from Video – demonstration of MIT’s Visual Microphone
- Relative/Perfect Musical Pitch in Dreams – dreamviews.com forum user Graysong
- Sonata in G minor: ‘Il Trillo del Diavolo’ – Hyperion recording notes
- The Tingle of p x mn – 1 – Kyle Gann, Mela Foundation
- The world’s ugliest music – Scott Rickard @ TEDxMIA
- Whenever I lucid dream I hear beautiful music – Reddit (username deleted)
My friends’ other explorations
As you have seen, they have provided plenty of insights. I wholeheartedly recommend you check out their various explorations:
- Professor Jo Hyde is a composer and sound artist, although I’m not sure I can summarise things beyond this. Fortunately you can pretty much cut-and-paste text from his website at random and know it will be intriguing: (“danceroom Spectroscopy at Z-Space…the Understanding Visual Music conference in Brasilia in June and the continued proliferation of my modular synthesiser, now an 8-channel digital/analogue hybrid that produces visuals as well as sound…my Christmas present was a Gieskes 3TrinsRGB+1C kit”).
- Mark Claydon has produced albums for Peter Gabriel at Real World Studios, and currently conducts mysterious research into the mathematics of neural networks and audio processing. Discussing his work with him makes me both excited and confused.
- Rory Scott works as a data consultant for the International Aid Transparency Initiative (IATI) – a UN-backed global campaign to create transparency in the records of how aid money is spent.
- Jeremy Adcock researches quantum computing at the University of Bristol, but still finds time to play bass.
And much love to my fellow resident at the Church of St. John Coltrane in Bristol – junglist, dreadlocked dentist, and bin salvage expert Dylan Rakhra. His discussion, proofreading and extensive knowledge of lively music was invaluable to this entire process, and he deserves praise for tolerating the sounds of me learning konnakol and listening to experimental drone music long into the night.
Credit is also due to my friend Louise Sellars: linguist, pole dancer, and Beethoven of Bristol origami – although her influence was indirect, it was notable. Her dream alter-ego appeared to me one night, and handed me some (presumably fictional) research papers. We talked over them, and though the general discussion was not particularly lucid in any sense, one of the ideas did make it into those detailed above (that of fundamentally seeing both musical and linguistic syntax as being understood through deeply intuitive subconscious mechanisms). I remembered no dreams involving harmonic frequency interaction from that night, but recall no dreams involving discussion of the phenomenon from any other night. When your field of study spontaneously causes your methods of study to start replacing your original objects of study, then it becomes unclear just where the looping self-reference ends.
Original (Jan 2016)
Update (Aug 2016)
“The act of writing is an integral part of my mental life; ideas emerge, are shaped, in the act of writing… a special, indispensable form of talking to myself.” (Oliver Sacks)
This is my two cents on how to understand the endlessly fascinating rhythms of Shakti – jazz guitar legend John McLaughlin’s collaboration with the musicians of India. Their music provides landscapes you can get lost in and keep on exploring, and what I can see and describe here is just a fraction of what is there. That being said, the ideas in this article are readily translatable, and understanding them requires no prior knowledge of music theory.
The original Shakti group was one of the first East-West musical fusions, and the dual-lead combination of McLaughlin’s rapid acoustic guitar and Lakshminarayana (‘L.’) Shankar’s sliding violin broke new musical ground. They also experimented with instrument construction – McLaughlin wields a heavily modified acoustic with sitar-like sympathetic strings and scalloped frets, L. Shankar went on to pioneer the 10-string doubleneck violin, and Vikku Vinayakram is credited with popularising the use of ghatam clay pot in music (what an excellent one-line biography).
Along with their successor band Remember Shakti, they have a real mastery of keeping a strong groove in irregular time signatures, and not allowing high degrees of rhythmic complexity to make patterns too difficult to follow. There is one particular trick they use to achieve this – centred on accenting the ‘oddness’ of the final bar in each cycle.
Essentially, they will take a long rhythm cycle (often with an odd number of beats), divide it into regular length bars (often 4s), and play through these, until they reach a final bar of a shorter length. It’s all about the remainder – they play through the cycle using a familiar groove until there aren’t enough beats left to play it again in full, leaving a final remainder bar which breaks with this groove and is heavily accented:
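The take-the-remainder scheme just described can be sketched in a few lines (the `remainder_bars` helper is my own illustrative name, not any formal piece of rhythmic theory):

```python
def remainder_bars(cycle, bar=4):
    """Split a rhythm cycle into full-length bars plus a shorter,
    heavily accented remainder bar -- the scheme described above
    (e.g. Shakti's 11-beat cycle becomes 4-4-3)."""
    full, rem = divmod(cycle, bar)
    bars = [bar] * full
    if rem:
        # not enough beats left for another full groove:
        # the leftover becomes the accented final bar
        bars.append(rem)
    return bars

remainder_bars(11)          # the 4-4-3 of '5 in the Morning...'
remainder_bars(9)           # the 4-4-1 of 'The Wish'
remainder_bars(27, bar=8)   # the 8-8-8-3 of 'Isis'
```

The same two-line `divmod` generates every grouping discussed below, which is rather the point – one simple rule, applied to many different cycle lengths.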
This emphasised final bar is often the key to how Shakti’s unfamiliar time signatures can flow even to uninitiated Western ears, as it provides a marker for when the cycle will finish and repeat over again, and so anchors us to the overall groove.
Making the very end of a cycle prominent is found throughout the world of music (for example the turnaround in a 12-bar blues), but Shakti break new ground by putting this particular odd-time spin on it.
A large part of Shakti’s unique rhythm sound comes from their powerful two-part percussion section. Both Zakir Hussain and the Vinayakrams have a propensity for full and sonically dense playing, and their prominence is increased by the lack of a melodic bass instrument, meaning resonant tabla and kanjira strokes occupy an otherwise empty part of the frequency spectrum. Zakir and Selvaganesh even mimic western basslines at times:
This busy combination provides the speed and textural density to readily power a groove all the way up to 11, even when using a complex 11-beat cycle – as with 5 in the Morning, 6 in the Afternoon from the clip earlier. This McLaughlin composition is a perennial feature of Remember Shakti’s live set, and perfectly demonstrates the accented remainder bar idea in action. The 11 beats are broken up into a 4-4-3 structure: two 4-beat bars, and then a prominent 3-beat turnaround – listen again:
Hear how the last bar of 3 beats is louder and fuller than the cycle before it, and how this quickly locks your mind into the overall rhythmic structure. Here is the same groove with added intensity:
The song’s title gives a clue to decoding its cycle – as well as describing the timezone difference between John and Zakir’s homes in California and India, 5 and 6 sum up to 11. The syllable groupings either side of the comma also reference the title’s own structure – ‘Five in the Morning’ has 5 syllables, and ‘Six in the Afternoon’ has 6. This, intentional or not, is dope (even though the cycle is not in fact subdivided as 5 and 6).
5 in 6’s remainder bar rhythm structure is one of many in evidence. A similar 4-4-3 cycle is used in Bridge of Sighs by the original Shakti group, and the band’s whole catalogue is full of this idea being applied to a whole host of different rhythm cycles.
Ma No Pa, a Zakir composition, is a work of true genius which explores many interlocking rhythmic ideas, including a 10-beat cycle broken down roughly into a 4-4-2 (this is a beautiful game after all).
If you zoom in further, the piece can be seen as a 20-beat cycle, divided with more nuance as 8-9-3 rather than 8-8-4 as above:
The entirety of the piece showcases the electrifying three-way interaction between McLaughlin’s Western jazz guitar, Zakir’s North Indian tabla, and Selvaganesh’s South Indian kanjira, and is something special to witness.
Isis uses a more dense and complex set of groupings, but the complexity is more evident on paper than in the flow of the music itself. An initial 9-beat cycle (divided as 4-5) is ‘tripled’ – and broken down into more detailed subdivisions when it comes to backing McLaughlin’s solo. The original 9 can now be seen as a fast-counted 9*3=27-beat cycle, broken down as 8-8-8-3 with an accented final bar:
The Wish also evidences the concept in action, turning the relatively common cycle length of 9 beats into something new by subdividing it into 4-4-1 instead of the usual 3-3-3:
In Anna, the idea is used more subtly, and the later stages include Zakir turning the familiarity of the final bar on its head, by introducing new rhythms into the space it brings. The long composed melody of the piece establishes several distinct 9-beat patterns, before settling into a swung 3-bar pattern, with a subtler final bar emphasis.
The influence of jazz can be seen more clearly here. Zakir swings over a groove that is familiar to Western ears and contains no odd remainder, and even imitates jazz brush-kit drumming by sliding across the right-hand dayan tabla.
He then uses this familiarity to introduce more complex ideas, with more angular shapes and cross-rhythms entering into the mix:
Compositions like Face to Face take these ideas a step further, and open up new musical landscapes by constructing subtly linked structures out of rhythmic subdivision. After a free-time alap section, the piece opens with a 15-beat cycle divided initially as 4-4-4-3. This is almost a regular 4*4=16-beat cycle, and the length of the 4-beat section (4-4-4) makes it particularly easy to settle into and feel intuitively – try it:
The composition then switches to a different way of subdividing the same 15 beats – splitting them into 5 bars divided equally into 3 beats each, to give a 3-3-3-3-3 pattern which is essentially a Western 12/8 but with an extra bar added onto the end:
The later stages include a third rhythm, maintaining the familiarity of a 5-part split, but now counting the 5 as single beats rather than 3s:
Finally, the piece concludes by revisiting the 3-3-3-3-3 pattern, and then resolving back to the original 4-4-4-3, having constructed a symmetrical set of interdependent rhythms which allow for soaring solo phrasing from L. Shankar’s perennially breathtaking violin.
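As a quick sanity check on the description above, the first two groupings really are different partitions of one and the same cycle (the third, single-beat counting is left out here, since its exact grouping is open to interpretation):

```python
# Two of the ways 'Face to Face' carves up its 15-beat cycle,
# as described above:
opening = [4, 4, 4, 3]       # near-regular 4x4, with an accented remainder
middle = [3, 3, 3, 3, 3]     # a Western 12/8 feel plus one extra bar of 3

# different groupings, identical underlying cycle length
assert sum(opening) == sum(middle) == 15
```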
This remainder-based approach is similar to thinking in modulo arithmetic, where only the remainder after dividing by a fixed cycle length matters. The clearest example of counting this way is in how we read a clock face – we use a basic cycle of 12, and calculate time differences from the remainder. If adding 13 hours to a given time, we realise that 13 is 1 more than 12, and so use this remainder to keep time, rather than the original number itself.
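The clock calculation is one line of modular arithmetic (the `clock_add` helper name is mine; the `- 1 … + 1` shuffle just keeps results in the 1–12 range rather than 0–11):

```python
def clock_add(hour, delta, cycle=12):
    """Add hours on a clock face: only delta's remainder
    modulo the cycle length affects the answer."""
    return (hour + delta - 1) % cycle + 1  # results stay in 1..12

clock_add(4, 13)  # -> 5: adding 13 hours is the same as adding 1
```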
We may or may not notice these links and patterns consciously, but our subconscious mind will always absorb them, and these details will colour the way we hear the music even if we don’t know why.
Looking at rhythm this way is underpinned by the system of konnakol – a method of breaking rhythms down into vocalised syllable patterns which comes from the ancient Carnatic classical tradition of South India.
The Vinayakram family provide Shakti’s direct link to the world of konnakol – Vikku plays kanjira tambourine and ghatam clay pot in the original band, and his son Selvaganesh brings several instruments to the stage in Remember Shakti. They, along with Vikku’s father T.R. Harihara Sarma, run the Sri Jaya Ganesh Tala Vadya Vidyalaya school of rhythm in Chennai, and are probably the leading exponents of konnakol today.
McLaughlin’s introduction to konnakol was however earlier than this – he studied it way back in the 70s, under the instruction of Ravi Shankar. Apart from being a tantalising ‘what-if’ recording collaboration, this partnership is also interesting in that it highlights a seldom-noticed dimension to Pandit Shankar’s already illustrious legacy – he is from North India’s Hindustani tradition, which uses the tabla-based bol system rather than the South Indian konnakol.
The two approaches are markedly different – although they are both syllable-based, konnakol places more emphasis on the numeric subdivisions (compared to bol’s focus on the drum timbres themselves). North and South Indian classical music are distinct traditions which, despite a common source, have been relatively separate for at least the last four centuries.
North Indian music has absorbed more influence from Persian and Islamic colonisers than its Southern counterpart. Pandit Shankar was one of the first to bridge this ancient gap, along with his tabla player – the legendary Ustad Alla Rahka, whose son, Zakir Hussain, continues to explore new territory inside and outside both Shakti’s incarnations.
Konnakol assigns a particular set of syllables to each number from 1 to 8, and uses combinations of these groups to break any rhythm cycle into manageable blocks. As well as being highly addictive, this approach introduces a more nuanced understanding of how rhythms and melodies are structured, and insight into where their emphasis lies. The basic syllabic groupings are as follows, with usual accents underlined:
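One commonly taught version of these groupings can be sketched in code – but treat the syllables below as illustrative only, since transliterations and exact syllable choices vary considerably between schools and teachers:

```python
# One commonly taught set of konnakol groupings. Transliterations
# differ between schools -- these are illustrative, not definitive.
KONNAKOL = {
    1: ["Tha"],
    2: ["Tha", "Ka"],
    3: ["Tha", "Ki", "Ta"],
    4: ["Tha", "Ka", "Dhi", "Mi"],
    5: ["Tha", "Dhi", "Gi", "Na", "Thom"],
    6: ["Tha", "Ki", "Ta", "Tha", "Ki", "Ta"],
    7: ["Tha", "Ka", "Dhi", "Mi", "Tha", "Ki", "Ta"],
    8: ["Tha", "Ka", "Dhi", "Mi", "Tha", "Ka", "Jo", "Nu"],
}

def vocalise(bars):
    """Render a list of bar lengths (e.g. [4, 4, 3]) as syllables,
    one bar per segment."""
    return " | ".join(" ".join(KONNAKOL[n]) for n in bars)

vocalise([4, 4, 3])
# -> "Tha Ka Dhi Mi | Tha Ka Dhi Mi | Tha Ki Ta"
```

Vocalising the 11-beat cycle of 5 in the Morning, 6 in the Afternoon this way makes the accented remainder bar almost impossible to miss – the final ‘Tha Ki Ta’ lands differently from the two full bars before it.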
Konnakol deserves an article to itself, but the concepts which underlie it are not difficult to understand, and can be followed easily once the syllables are intuitive (in McLaughlin’s words: ‘it’s incredibly easy, but it goes to the most sophisticated heights’). The approach’s strong emphasis on subdivision aids in composing or decoding any complex rhythm including those detailed above.
Konnakol patterns are also explicitly vocalised in several Shakti compositions, including the introduction to La Danse de Bonheur, and the percussion breakdown in Get Down and Sruti (sampled by Chinese Man). For anyone who is interested I recommend McLaughlin’s and Selvaganesh’s excellent instructional video series, The Gateway to Rhythm.
Shakti’s world of rhythm is a unique fusion of three complex traditions – Hindustani, Carnatic, and jazz – and the patterns I have noticed will only be a glimpse into all that is there. Nevertheless, I hope I have introduced some new ideas into your world of listening – Shakti have many to offer.
Update (June 2015): thanks John! https://twitter.com/jmcl_gtr/status/613306739195297792
- Shakti & Remember Shakti discography
- John McLaughlin 2005 Jazz in Japan interview
- The Gateway to Rhythm konnakol instruction series, McLaughlin & Selvaganesh
- The Reinvention of a Tradition: Nationalism, Carnatic Music & the Madras Music Academy 1900-1947 (1999). Subramanian L., Indian Economic & Social History Review, 36(1), p.131-163.
- David Courtney’s tabla bols introduction