Yeah, we’ve all heard the age-old riddle: “If a tree falls in the forest and no one’s around to hear it, does it make a sound?” Well, apparently here in America there is someone around nearly everywhere. What I mean is that, according to the story below, there is virtually nowhere left in America that isn’t affected by the ambient sounds we humans provide:
I can kind of attest to a part of this. I spent much of the summer of 2011 hiking and camping in the mountains of Rocky Mountain National Park in Colorado, and even alone on the top of a mountain I was greeted by a distinctly modern human sound: passenger jets flying overhead. Please don’t take this as a complaint about humanity, because I am, after all, a part of humanity; it is just that it is getting harder and harder to get away from it all, so to speak.
Sound, after all, is what this blog is really about; not just music, but sounds. Music is sound; it is just that the sounds are organized into notes, which makes the “sound” more appealing. Music is like language, in that a language is organized sound coming from our mouths that others recognize as words and speech. So music and speech are sounds, but obviously not all sounds are music. Some sounds are really quite annoying; dare I say, some are the worst sounds to a human ear, or at least to my human ears.
As I was lying in bed last night, somewhere between sleep and consciousness, I began to recall a documentary I saw as a child about the making of the original Star Wars film. It wasn’t so much the creation of the special effects that I was remembering, though; it was how they created all of those sounds in that movie and its subsequent sequels. I was fascinated by the fact that much, if not all, of the sound was created by experimenting with striking ordinary objects, by recording quite Earthly things, and even by stumbling upon sounds by accident. Hey, I know, and knew, that there were no such things as light-sabers™, blasters, land-speeders, TIE Fighters, etc., but I was a kid, so I just kind of figured there was a more high-tech way that many of these sounds were created, even back in the dark ages of the late 1970s and early 1980s.
Anyway, when I woke up this morning and started going about my business, that semi-conscious thought popped back into my mind and I decided to see if I could find a video of what I remembered. Well, I didn’t find it, though I admit I wasn’t exactly scouring the world wide web for it. What I did find out was what many a Star Wars/sci-fi/movie nut probably already knows: it was a man named Ben Burtt who was charged with coming up with those sounds in the Star Wars franchise, as well as in many other popular movies, like WALL-E, Super 8, the 2009 reboot of Star Trek, the Indiana Jones franchise, and many more, including the obviously not science fiction, and current, film Lincoln.
For whatever reason (call it a continued naivety carried forward from my childhood), I was still rather struck that, even in today’s computer-driven world, so much still goes into the creation of sound in movies. I guess, and maybe it is still just me, we just kind of take sound for granted, probably because we are inundated with sounds of all sorts all day, every day. Music, on the other hand, requires a creative hand, logical thought, and the harmonious blending of musical notes and instruments. Sound is just noise; but when you are creating a movie, a television show, or a play, sounds are also needed to create a feel, an environment, and, in the case of science fiction, the sound of something that doesn’t exist. Sounds are as important in the telling of a story as the musical score, yet because sounds are so commonplace and pedestrian, they are easily overlooked and ignored.
There are a lot of videos out there about sound design, but since it was the Star Wars documentary that initially got me thinking about this, I thought I should share a snippet of an interview with Ben Burtt on how he went about creating the sounds of light-sabers™, Imperial Walkers, and explosions:
You’ve probably seen the commercial. The one about those wireless speakers? The one with R&B artist Janelle Monae and her friends dancing to that catchy song? Yeah, you’ve seen it. How many of you wondered who in the world the artist performing that song is? I know I am one of those, and for whatever reason I just now decided to find out. They are a duo calling themselves Deep Cotton, and the song is called “We’re Far Enough From Heaven Now We Can Freak Out.” Oh, and the name of those wireless speakers the commercial is supposed to be selling is Sonos. Below is that song:
And as a sure sign that I need to get out more, I am now more aware of Ms. Monae and her music too (a big thank-you to Sonos for helping me out in “finding” “new” music; I guess I should buy their speakers now). Here is one from her newest album, The ArchAndroid. The song is called “57821,” and it is a definite change of pace from Deep Cotton’s above, but I really do like the serene feeling of it:
Thank God that contentious election is over. I think we all need to get a little Hapi now. No, I didn’t spell that wrong, and while I do mean the emotion, I am specifically referring to the Hapi Tones HAPI Drum. What is a HAPI drum, you might ask? Well, it is a steel drum on which you strike a tuned tongue of steel with either a mallet or your fingers, and it looks like a UFO. Speaking of UFOs, the largest of the HAPI drums is actually called the UFO. Below is a promotional/demonstration video of each of Hapi Tones’ three models.
Today’s post is a bit of a cop-out, as I am simply passing along the recent press release (about 15 minutes old as of this writing) called “Music in our ears: The science of timbre.” For those who may not know, timbre is what makes one musical sound different from another, to put it in the simplest of terms. The below video explains it further:
So, anyway, below is the press release about how researchers at Johns Hopkins University have developed a mathematical model that simulates how our brains identify different musical sounds, in the hope that such research will lead to advances in hearing devices and computer hardware.
New research, published in PLOS Computational Biology, offers insight into the neural underpinnings of musical timbre. Mounya Elhilali of Johns Hopkins University and colleagues have used mathematical models based on experiments in both animals and humans to accurately predict sound source recognition and perceptual timbre judgments by human listeners.
A major contributor to our ability to analyze music and recognize instruments is the concept known as ‘timbre’. Timbre is a hard-to-quantify concept loosely defined as everything in music that isn’t duration, loudness or pitch. For instance, timbre comes into play when we are able to instantly decide whether a sound is coming from a violin or a piano.
The researchers at Johns Hopkins University set out to develop a mathematical model that would simulate how the brain works when it receives auditory signals, how it looks for specific features, and whether something is there that allows the brain to discern these different qualities.
The authors devised a computer model to accurately mimic how specific brain regions transform sounds into the nerve impulses that allow us to recognize the type of sounds we are listening to. The model was able to correctly identify which instrument was playing (out of a total of 13 instruments) with an accuracy of 98.7 percent.
The model mirrored how human listeners make judgment calls regarding timbre. The researchers asked 20 people to listen to two sounds played by different musical instruments. The listeners were then asked to rate how similar the sounds seemed. A violin and a cello are perceived as closer to each other than a violin and a flute. The researchers also found that wind and percussive instruments tend to overall be the most different from each other, followed by strings and percussions, then strings and winds. These subtle judgments of timbre quality were also reproduced by the computer model.
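To get a feel for why this kind of discrimination is possible at all, here is a minimal sketch of my own (not the authors’ model, which works on far richer neural representations): two synthetic tones share the same pitch and loudness but differ in harmonic balance, and a single spectral feature already tells them apart. All tones and parameters below are made up for illustration.

```python
# Toy illustration of timbre: same pitch, same loudness, different
# harmonic balance -> different spectral centroid ("brightness").
# NOT the model from the paper; just a one-feature sketch.
import numpy as np

SR = 16000   # sample rate (Hz)
F0 = 220.0   # shared fundamental, so pitch is held constant

def tone(harmonic_weights, dur=0.5):
    """Additive tone built from weighted harmonics of F0."""
    t = np.arange(int(SR * dur)) / SR
    sig = sum(w * np.sin(2 * np.pi * F0 * (k + 1) * t)
              for k, w in enumerate(harmonic_weights))
    return sig / np.max(np.abs(sig))

def spectral_centroid(sig):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1 / SR)
    return float(np.sum(freqs * mag) / np.sum(mag))

# "Dull" tone: energy concentrated in the low harmonics.
dull = tone([1.0, 0.3, 0.1])
# "Bright" tone: energy spread into the higher harmonics.
bright = tone([0.3, 0.5, 0.8, 0.9, 1.0])

print(spectral_centroid(dull))    # lower centroid: darker timbre
print(spectral_centroid(bright))  # higher centroid: brighter timbre
```

A real system (and presumably the authors’ model) combines many such spectro-temporal features, but even this one number separates a mellow tone from a bright one at identical pitch.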
“There is much to be learned from how the human brain processes complex information such as musical timbre and translating this knowledge into improved computer systems and hearing technologies”, Elhilali said.
FINANCIAL DISCLOSURE: This work was partly supported by grants from NSF CAREER IIS-0846112, AFOSR FA9550-09-1-0234, NIH 1R01AG036424-01 and ONR N000141010278. S. Shamma was partly supported by a Blaise-Pascal Chair, Région Île-de-France, and by the program Research in Paris, Mairie de Paris. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
COMPETING INTERESTS: The authors have declared that no competing interests exist.
CITATION: Patil K, Pressnitzer D, Shamma S, Elhilali M (2012) Music in Our Ears: The Biological Bases of Musical Timbre Perception. PLoS Comput Biol 8(11):e1002759. doi:10.1371/journal.pcbi.1002759
This press release refers to an upcoming article in PLOS Computational Biology. The release is provided by journal staff, or by the article authors and/or their institutions. Any opinions expressed in this release or article are the personal views of the journal staff and/or article contributors, and do not necessarily represent the views or policies of PLOS. PLOS expressly disclaims any and all warranties and liability in connection with the information found in the releases and articles and your use of such information.
PLOS Journals publish under a Creative Commons Attribution License, which permits free reuse of all materials published with the article, so long as the work is cited (e.g., Brinkworth RSA, O’Carroll DC (2009) Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology. PLOS Comput Biol 5(11): e1000555. doi:10.1371/journal.pcbi.1000555). No prior permission is required from the authors or publisher. For queries about the license, please contact the relevant journal contact indicated here: http://www.PLOS.org/about/media-inquiries/embargo-policy/
About PLOS Computational Biology
PLOS Computational Biology features works of exceptional significance that further our understanding of living systems at all scales through the application of computational methods. All works published in PLOS Computational Biology are open access. Everything is immediately available subject only to the condition that the original authorship and source are properly attributed. Copyright is retained.
About PLOS
PLOS is a non-profit organization of scientists and physicians committed to making the world’s scientific and medical literature a freely available public resource. For more information, visit http://www.PLOS.org.
***EMBARGO: Thursday 1st November 2012***
2pm Pacific Time/5pm Eastern Time
Is there anything that music can’t do? As it turns out, one of the things that music can do is help people with pretty heavy brain damage.
A new study has shown that listening to a favorite song could boost the brain’s ability to respond to other stimuli in people with consciousness disorders. Not only does music seem to have a beneficial influence on cognitive processes in healthy people, but it also seems to be able to help those with brain damage as well.
Researcher Fabien Perrin at the University of Lyon, France, recorded brain activity in four patients. Two of the patients were in a coma, one was in a minimally conscious state, and one was in a vegetative state.
First, each patient was played either their favorite music (chosen by family or friends) or “musical noise”. For example, one patient listened to The Eagles’ ‘Hotel California’; another ‘heard’ the Blues Brothers’ ‘Everybody Needs Somebody to Love’. Then, they were all read a list of people’s names, including their own.
The same experiment was repeated with ten healthy volunteers.
What they found was that in all four of the brain-damaged patients, the music (as opposed to the musical noise) enhanced the quality of the brain’s subsequent response to their own name, bringing it closer to the brain response of the healthy volunteers.
There are two theories that Perrin has about the effect of music on the brain:
“Listening to preferred music activates our autobiographical memory – so it could make it easier for the subsequent perception of another autobiographical stimulus such as your name. Another hypothesis is that music enhances arousal or awareness, so maybe it temporarily increases consciousness and the discrimination of your name becomes easier.”
The findings of the study were presented in July at the Association for the Scientific Study of Consciousness meeting in Brighton, UK. This research definitely seems like something Oliver Sacks might be into.
If we ignore the actual source sounds, a big part of the sound of analog devices is the inherent variation between repetitions of the same note. In other words, a repeated sound on an analog device will often change somewhat over time, whereas a repeated digital sample will often have a “machine gun” effect.
Here’s a really good, real-life example of this concept, created by first recording a repeated riff on a Boss DR-110 drum machine, then simply repeating a sample of one of its sounds:
“A short comparison between the always moving analog sound vs a sample of the same source. I used my Boss DR-110, analog drum machine that lacks any sound controls other than main volume, balance and accent level (used in mid position in this example). First you can hear the 16th snare pattern, recorded straight from the DR-110; then I took one of the recorded snare hits and pasted it several times to create the same pattern.”
The same could be done on any number of analog drum boxes, most famously on the TR-808.
Synotec is a company in Germany that specializes in sound design for products. Whether it’s a vacuum cleaner, a mixer, or anything else, they believe that just the right sound will get you to buy more of a product.
In this video, they discuss what goes into the perfect sound of a beer bottle being opened and poured. Fascinating – if only they knew how to pour a beer properly.
Greg Ball designed this chair hoping to recreate the feeling of “sitting on a rocket”. With 1,000 watts of power, two 16″ car-audio subwoofers, and two mid drivers, this thing looks like it could take a house down.
The design is beautiful. Greg describes the chair and his design constraints here:
Here’s what you were probably waiting for in the video above – a demonstration of the chair in action. It’s so loud it rattles the ceiling track.
This thing would make a great gaming chair, don’t you think?