Algorithms, concerts, noise: Understanding the new dimensions of sound.
► Thanks to artificial intelligence, machines can now imitate human voices and identify personality traits by listening to someone.
► The science of acoustics has made major progress in recent years. Two examples: the stunning new concert halls in Paris and Hamburg.
► The need for hearing aids increases as the population ages. Researchers are working on intelligent and connected devices.
► Urban noise pollution increases the risk of cardiovascular disease. But solutions exist: silent asphalt, vibration dampers and green roofs.
A new frontier for artificial intelligence
Using algorithms to process sound is a booming field. Here are four promising innovations.
Machine learning is increasingly being applied to acoustics. Engineers feed data to an algorithm, which can then recognise an object or system and act on that knowledge. “Big brands like Google and Microsoft have been very interested in this over the last few years”, says Sebastian Tschiatschek, researcher at the Institute of Machine Learning at ETH Zurich. Four innovations are particularly promising:
Detecting personality traits
By recording someone’s speech for 15 minutes, you can determine whether they are, for example, optimistic or self-confident. Developed by Germany’s Precire, the algorithm draws on more than 5,000 voice samples to spot personality traits and conditions like stress. Some 20 companies now use Precire to filter candidates for jobs, and with each use the pool of recorded voice samples grows. This is just the beginning, says CEO Dirk Gratzel: the technology could also be used by marketers to target audiences in the right tone. But is it accurate? A recent study and several experts say yes.
Improving orientation for robots
A recent innovation at MIT could help robots understand their surroundings. Researchers fed an algorithm 1,000 videos containing 46,000 sounds, most of which came from hitting objects with a stick. When the algorithm was then shown a video without its sound, it could still predict which noise belonged to which video. “If robots can predict the acoustical properties of an object, they can better understand their surroundings and adapt their interactions”, explains Tschiatschek.
Imitating someone else’s voice
Whether it’s your GPS, your smartphone or the smart fridge updating you on your groceries, any gadget will soon be able to speak in the voice of your choice. This is the aim of CandyVoice, a French start-up working on an algorithm with Microsoft. “The technology isn’t very complicated”, says Matthias Althoff at the Technical University of Munich. “And there’s real demand because the fun factor is high.” Reading 160 short sentences out loud gives the algorithm enough material to imitate your own voice on different devices. This could help people who have lost their voice after an accident or operation, if they created a voice file beforehand. Still, Tschiatschek worries about the potential for abuse by people who might pretend to be someone else on the telephone.
Composing music
By classifying audio files according to musical style, artificial intelligence can compose new songs. You simply select the desired style and add criteria such as length and instruments. UK start-up Jukedeck already uses this technology to write songs for Coca-Cola. But, according to Althoff, it’s hard to say whether artificial intelligence will make music for the mass market. “I see this more as a tool for professional composers.”
Tracking aircraft with sound waves
Where radar struggles with interference from electronic gadgetry, “acoustic prisms” could offer a new solution.
Interference from electronic devices disturbs the tracking of aircraft on or near the ground, says Hervé Lissek from the École Polytechnique Fédérale de Lausanne. That is why he and his team have invented the audio equivalent of the optical prism, which splits light into its component frequencies (seen as different colours). Lissek notes that devices functioning like an acoustic prism already exist, but that they rely on signal processing. In contrast, his group’s invention acts purely through the physical manipulation of sound waves. That, he argues, is a good thing because the transition from acoustics to electronics inevitably distorts the signal. “There’s some advantage to going back to old-school engineering”, he says.
How does it work? The “acoustic prism” is a rectangular aluminium tube about 50 cm long that is divided into a series of chambers, each with its own hole to the outside and separated from the neighbouring one by a membrane. When a multi-frequency sound wave is generated at the open end of the prism, it travels along the tube and emits sub-waves from each side hole. Because the delay between emissions from successive holes depends on frequency, the way those emitted waves relate to one another is likewise frequency dependent. Waves of different frequencies are in fact emitted at different angles, which means that microphones dotted around the prism pick up sounds of varying pitch.
This principle can also be turned on its head, so that the prism acts like a direction finder. When a source of sound, such as a plane, travels overhead along the axis of the tube, the sound waves arrive at the prism with a slight delay from one hole to the next. Because the amount of delay will depend on exactly where the object is relative to the prism, the object’s position can be tracked by measuring the changing frequency of the wave set up inside the tube.
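The delay-to-angle relation at the heart of this direction finding can be illustrated with basic wavefront geometry: for a distant source, the delay between the wavefront reaching two successive holes depends on the source's angle off the tube's axis, and the relation can be inverted. A minimal sketch in Python, with an invented 5 cm hole spacing (this is not EPFL's actual prism geometry or processing):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def inter_hole_delay(angle_deg, hole_spacing_m=0.05):
    """Delay between a distant source's wavefront reaching two successive
    holes, for a source at angle_deg off the tube's axis.
    (The 5 cm spacing is invented for illustration.)"""
    return hole_spacing_m * math.cos(math.radians(angle_deg)) / SPEED_OF_SOUND

def source_angle(delay_s, hole_spacing_m=0.05):
    """Invert the relation: recover the source's angle from the measured delay."""
    return math.degrees(math.acos(delay_s * SPEED_OF_SOUND / hole_spacing_m))
```

A source directly along the axis produces the maximum delay, one directly overhead produces none, and `source_angle(inter_hole_delay(30.0))` recovers the original 30 degrees.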
By Robert Gloy @robertgloy
The architecture of sound
Towards a more intimate musical experience: Hamburg and Paris introduce innovative acoustics to their spectacular new concert halls.
Two new European concert halls have become the talk of musical circles: the Elbphilharmonie in Hamburg, inaugurated at the beginning of this year, and the Philharmonie de Paris, opened in January 2015. Both have been highly anticipated for their acoustics.
Their spectacular design did not make the acousticians’ work any easier: more than 2,000 people can be seated in each of the halls, which was the first challenge. In addition, they are intended to attract all types of music – not only classical symphony orchestras, but also jazz combos and pop, world music and rock concerts.
Neither of the two resembles a traditional concert hall. Instead, they are a hybrid of two forms that acousticians know well: the shoebox style and the vineyard style, which are designed specifically for symphonic music (others might be designed for opera, for example). The shoebox style is the oldest and most common. Boston’s Symphony Hall, Vienna’s Musikverein and Amsterdam’s Concertgebouw are all excellent examples. The Berlin Philharmonic, inaugurated in 1963, is built in the vineyard style, which was revolutionary for its time. The audience surrounds the stage on different levels, like terraced vineyards, breaking with the cold formality of the shoebox.
The new halls in Paris and Hamburg artfully combine the two traditional styles in a single structure. “They form an interesting hybrid that incorporates the advantages of both styles”, says acoustician Eckhard Kahle, professor at the University of Music Karlsruhe in Germany, who helped design the Philharmonie de Paris. “The shoebox style is appreciated for its high ceilings and reasonable width, which are optimal for lateral reflections and an even distribution of sound. The vineyard style brings the audience closer to the musicians, providing a more intimate, enveloping concert experience.”
Antoine Pecqueur, a French musician and journalist, prefers the acoustical experience in Paris. “It’s more generous and transparent. The sound is very precise for both listeners and musicians.” But he also appreciates the dynamic shades of the concert hall in Hamburg. “It gives the musicians a lot of liberty. Visually the Elbphilharmonie is appealing, too – inside and outside, offering a splendid view over the city.”
Ensuring sound balance
Collaboration between architects and acousticians is now essential. Before the 20th century, “acousticians weren’t called upon for their expertise”, explains Gerhard Müller, professor at the Technical University of Munich. “Architects took an empirical approach, copying the shape of a hall where they already knew the acoustic capabilities.” Previously, music was composed for and adapted to the acoustics of a building.
It wasn’t until 1900 that the first concert hall – Symphony Hall in Boston – was designed with acoustics in mind. Ever since, acoustic architectural demands have become increasingly complex: halls must accommodate larger audiences and bigger orchestras, and musical instruments now produce more powerful sound.
Any concert venue has a few basic requirements. The first is reverberation time: how long a sound takes to decay – conventionally, by 60 decibels – after its source stops. It depends chiefly on the volume of the hall and on how much sound its surfaces and audience absorb. “Reverberation was actually the only acoustic requirement until the mid-20th century”, explains Kahle. Ideal reverberation time is between 1.9 and 2.3 seconds for symphonic music, depending on the volume of the hall, the size of the audience and the type of music.
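That target range can be related to a hall's dimensions through Sabine's classic formula, RT60 = 0.161 × V / A, where V is the hall's volume in cubic metres and A its equivalent absorption area in square metres. A back-of-the-envelope sketch, with invented hall figures:

```python
def sabine_rt60(volume_m3, absorption_m2):
    """Sabine's formula: reverberation time in seconds (time for a sound
    to decay by 60 dB) from the hall volume and the total equivalent
    absorption area (surface area x absorption coefficient)."""
    return 0.161 * volume_m3 / absorption_m2

# An invented 20,000 m3 hall with 1,600 m2 of equivalent absorption:
print(round(sabine_rt60(20_000, 1_600), 2))  # 2.01 - inside the symphonic ideal
```

The formula also shows why big halls are hard: doubling the volume without adding absorption doubles the reverberation time.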
Favouring the overall experience
The challenge for acousticians is to strike the right balance. First, they must control the reverberations in the hall so that the music comes across to the audience as clear and powerful. When a sound is produced, it travels directly to the listener (direct sound) but also bounces off surfaces such as the walls, ceiling, balconies, decorations, reliefs and even the chairs. A concert hall needs these sound reflections: early ones followed by late ones, and the much-valued lateral reflections.
“How the musicians and audience feel has become increasingly important over the last 10 years. We don’t hear acoustics, we feel them.”
Acousticians also consider the level of sound that the audience will hear, the distance between the audience and the orchestra, and the volume ratios in the concert hall. “For optimal sound, we calculate 10 audience members per musician and 10 cubic metres per audience member, so 100 cubic metres per musician”, says Kahle. “The more musicians and audience members there are, the bigger the hall must be. And when more than 2,000 people are in the audience, there are even more challenges. The bigger and taller the hall, the further the walls are from the orchestra and the audience. You need to get creative and find ways to bring the walls closer, by using reflectors or playing with the front of the balconies.”
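Kahle's rule of thumb is easy to turn into arithmetic. A toy calculation (the orchestra size below is hypothetical):

```python
def hall_rule_of_thumb(musicians):
    """Kahle's sizing rule: 10 audience members per musician and
    10 cubic metres per audience member, i.e. 100 m3 per musician."""
    audience = 10 * musicians
    volume_m3 = 10 * audience
    return audience, volume_m3

# For a hypothetical 90-piece symphony orchestra:
audience, volume = hall_rule_of_thumb(90)
print(audience, volume)  # 900 9000 -> 900 seats, 9,000 cubic metres
```

An audience above 2,000, as in Paris and Hamburg, therefore implies well over 20,000 cubic metres, which is where the reflectors and balcony fronts Kahle mentions come in.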
Beyond the scientific criteria, “how the musicians and audience feel has become increasingly important over the last 10 years”, says Constant Hak, assistant professor of building acoustics at the Eindhoven University of Technology. “This is a phenomenon called psychoacoustics: when musicians are playing, they must feel good and be able to hear their own play and that of their co-musicians correctly. As a result, they will play better. The same goes for the audience: when people enter the hall, their experience will be more special if they feel enveloped by the music and the space, almost as if they were in a cocoon. Appreciating acoustics is very subjective. We don’t hear good acoustics – we feel them. The history of a certain concert hall will also have an effect on some audience members, causing them to evaluate their concert experience in different ways.”
The next generation of concert halls will have to integrate new technologies, says Hak. There is increasing interest in musical centres, either with several halls that can host various types of music or one large hall with variable acoustics. Indeed, not all types of music have the same acoustic requirements. “With electroacoustic solutions, like artificial reverberation and reflections using integrated microphones and loudspeakers, or with mechanical solutions, like curtains or rotatable panels, a single hall can be suitable for a variety of events.”
By Céline Bilardo @ClineBi
Avoiding the sound of silence
With Europe’s ageing population, hearing loss will become a major concern for public health. A new generation of technologies can slow the process.
“At the age of 80, about 80% of people are hard of hearing, so most people will need a hearing device in old age.” For Werner Hemmert of the Technical University of Munich (TUM), this is a growing issue for society. EU projections reinforce his point: in 1960, just 1.4% of Europeans were over 80, but by 2060 the proportion is projected to rise to 11.5%. Directly or indirectly, hearing impairment will soon affect everyone.
Thomas Behrens, head of audiology at Danish hearing-aid manufacturer Oticon, points out the wider significance. “The consequences of hearing loss are underestimated, especially with regard to accelerated cognitive decline”, he says. At least 12% of the population over the age of 70 will suffer from either mild cognitive impairment or dementia, and among them the proportion of hearing-impaired people is much higher than average.
Connected hearing aids
About 90% of diagnosed hearing loss is sensorineural, meaning that the sensory hair cells in the inner ear are damaged or have died, limiting the amount of acoustic information that can be transmitted to the brain. Consequently, high-frequency sounds, such as female or children’s voices, may become difficult to hear, as may consonants such as “s”, “f” and “th”. Torsten Dau of the Technical University of Denmark (DTU) explains: “Modern hearing aids attempt to compensate by pre-processing the signal – amplifying only one particular voice in a noisy restaurant – before the information bottleneck, reducing the brain’s workload”.
In the area of connectivity, Behrens and his colleagues are leading the way. “Our Oticon Opn devices have two different wireless radios: one optimised to communicate with other devices using a low-energy version of Bluetooth, and another optimised for communicating over short distances using near-field magnetic radio technology”, he explains. The latter allows hearing aids on either side of the head to communicate with one another. Oticon Opn is also compatible with the “If This Then That” platform, which allows synching of functions of connected devices. For instance, a notification might be set to sound when someone arrives at the front door.
Researchers in the US have shown how neural networks can be used to remove much of the noise from incoming sound, allowing people with poor hearing to better comprehend what they hear. The networks are trained to recognise a multitude of features within short slices of sound that distinguish speech from background chatter and other sources of noise. However, as pointed out by Jan Larsen of DTU, people with hearing aids want to do more than simply have conversations. They also want to watch TV and listen to concerts – environments that demand different settings on the hearing aid. To address this need, he is developing what he calls “user-in-the-loop” systems, which means involving patients in setting up their own hearing aids.
Larsen’s approach is to use a machine-learning algorithm designed to work out which of a pair of sound bites a patient is likely to hear better, given a certain mix of their hearing aid’s various parameters (such as frequency response and noise reduction). The sound bites are duly played to the patient, and if the algorithm guesses correctly it remains unaltered; if not, it is tweaked and then presented with another pair of sound bites. The process is repeated until the optimal setting has been found.
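Larsen's actual system is not described in detail here, but the guess-check-tweak loop can be sketched with a simple preference-learning toy: a hidden linear score stands in for the patient's ear, and the model is nudged whenever it picks the wrong winner of a pair. All parameters and numbers below are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical: each "setting" is two hearing-aid parameters (say, gain
# and noise reduction), and the patient's true preference is a hidden
# linear score. This is a sketch of the loop, not Larsen's actual model.
TRUE_WEIGHTS = [0.8, -0.3]   # hidden: what the patient actually prefers
weights = [0.0, 0.0]         # the algorithm's evolving estimate

def score(w, setting):
    return sum(wi * si for wi, si in zip(w, setting))

for _ in range(200):
    # Present a pair of candidate settings (random parameter mixes here)
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    predicted = a if score(weights, a) >= score(weights, b) else b
    preferred = a if score(TRUE_WEIGHTS, a) >= score(TRUE_WEIGHTS, b) else b
    if predicted is not preferred:
        # Wrong guess: nudge the estimate towards the preferred setting
        loser = b if preferred is a else a
        weights = [w + 0.1 * (p - l)
                   for w, p, l in zip(weights, preferred, loser)]
```

After a couple of hundred comparisons the estimate ranks settings much like the hidden preference does, which is the sense in which the "optimal setting" emerges from repeated pairs.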
Algorithms in cochlear implants
For all their wizardry, hearing aids have real limitations if neurons and receptor cells are already dead. By contrast, cochlear implants circumvent the damaged hair cells entirely, stimulating different parts of the auditory nerve via electrodes. Bernhard Seeber and his colleagues at TUM are particularly interested in studying how well people with these implants can work out the direction of specific sounds when surrounded by lots of noise.
The ability to pinpoint where a sound is coming from allows us to pick out that one voice in a crowded room. Usually, we do this by having two ears positioned on opposite sides of our heads. The signals arriving at each ear will differ slightly in terms of arrival time and intensity. But Seeber and his team have shown that people wearing cochlear implants struggle to make out these differences in a noisy environment. To overcome this problem, the group has been developing a new algorithm to convert incoming sound into the electrical signals sent to the brain. The algorithm shapes the signals so that they have a sharper leading edge preceded by a slight delay, allowing the brain to pick up the difference in timing. The researchers have successfully tested the algorithm on people with normal hearing and on a small number of patients with implants; they are now preparing for a larger study.
While some groups espouse the benefits of one type of device over another, the key to good results lies in the audiologist’s skill at identifying an individual’s needs. In fact, the next trend may combine hearing aids and cochlear implants. “The challenge here is that acoustic stimulation (hearing aids) vs. electric stimulation (cochlear implants) lead to different representations of information and we need to know more about how this ‘mismatch’ is combined and processed by the brain”, explains DTU’s Dau.
“Researchers have recently demonstrated that combining the two device types in the same ear can lead to substantial performance improvements.”
By Benjamin McCluskey and Edwin Cartlidge
Sound from all directions
The latest innovations provide listening experiences that are more immersive and realistic than ever. Some technologies even use bones to transmit sound.
The Batband heralds the arrival of the newest generation of headphones. What makes this device remarkable is that it directly transfers sound waves to the inner ear through the skull bone, bypassing the eardrums and thereby avoiding earbuds. The Batband was created by designers from Lausanne-based Studio Banana.
“It’s the same technique Beethoven used to compose music when he became deaf”, says Key Kawamura, co-founder of Studio Banana. “He bit down on a wooden rod attached to his piano.” With Batband, listeners have their ears free and can therefore be aware of surrounding noises. The sound is full and enveloping, providing a 360° listening experience. The device, which will be available in mid-2017, foreshadows the immersive sound of the future.
Will this new technology convince us to abandon traditional headphones? “It’s just a new way of listening. The sound quality is not better – just genuinely different”, says Kawamura. AfterShokz, an American brand, also uses bone conduction; its lightweight, streamlined design was developed for athletes. Several “listening” glasses (Zungle, Soundglasses Buhel) use the same technique, with transducers on both stems of the glasses, close to the ears. Now you can bike, run, cross the street or work while listening to music, without cutting yourself off from the world.
Sound on the move
The other up-and-coming revolution is 3D sound, which reproduces what we hear in real life – from up close, far away, behind and above. This is a departure from stereo, where audio comes only from the left or right. Voices, music and sound can move around, resulting in unparalleled depth. “Germany, Belgium, France and Denmark are a step ahead of other countries”, says Eske Bo Knudsen, head of Projects and Internationalization at the Technical University of Denmark. “But the big Asian and American brands (Samsung, HTC, Apple, Sony and Oculus) aren’t far behind.”
Three-dimensional sound is already being used in cinemas: Dolby Atmos systems can use up to 64 speakers. It also paves the way for smart headphones, packed with sensors that monitor head movement to provide immersive, realistic audio. The Sound One, sold by French start-up 3D Sound Labs, is among the first customisable headphones. Users take three selfies with their smartphone, which then calculates their head measurements (the distance between the ears, for example). As a result, listeners enjoy bespoke sound that is far more immersive than with regular headphones. The latest headphones from Danish company Jabra and US-based Ossic also use 3D audio.
“We are now experiencing a true renaissance of immersive sound. In a few years, this augmented reality listening experience will be absolutely everywhere”, says Knudsen. “In the future, this technology will be used in other fields, such as in hospitals to improve patient well-being and in courts to analyse where a gunshot came from. It can also reduce the irritating noises of everyday city life.”
By Virginie Langerock @vlangerock
How noise kills
Sound pollution has become one of the main health hazards in European cities. New technologies may provide some solutions.
Environmental noise is considered a serious pollutant in urban areas, and is known to be a cause of various ailments ranging from stress to heart disease. With three out of four Europeans living in or near cities, the issue affects hundreds of millions of people. Ines Lopez Arteaga, a researcher at the Department of Mechanical Engineering of the Eindhoven University of Technology, discusses potential solutions.
TECHNOLOGIST Where does the noise in cities come from?
INES LOPEZ ARTEAGA The main sources are transportation and industrial sites. In urban environments, road traffic is the main culprit, followed by rail traffic and aircraft. Then there are construction sites and noisy neighbours, which are mainly a source of annoyance.
TECHNOLOGIST Why is urban noise a serious health risk?
INES LOPEZ ARTEAGA Long-term exposure to transportation noise increases the risk of hypertension, ischaemic heart diseases, tinnitus, cognitive impairment in children, sleep disturbance and annoyance. Studies have shown that noise levels above 65 decibels – think heavy traffic at a distance of 50 to 100 m – increase the risk of stroke by 20–40%. At these levels the risk of heart disease is also 20% higher than for people living in quieter areas. By comparison, a typical conversation takes place at about 60 decibels.
The World Health Organization has calculated that in Western Europe more than one million healthy years of life are lost every year due to traffic noise. This means that long-term exposure to traffic noise is, after air pollution, the main environment-related health stressor, compromising quality of life and indirectly the life expectancy of millions of people across Europe.
TECHNOLOGIST What can be done about it?
INES LOPEZ ARTEAGA First, you need measures to reduce the amount of noise produced by transportation and, second, measures to minimise noise levels at the target location. To achieve significant noise reductions, you need a combination of measures at the source and reduction of transmission to the receiver. Obvious examples at the receiving end are double-glazed windows and noise barriers like those you see along many highways across Europe.
An example of a reduction measure at the source is the use of double-layer porous asphalt – or “silent” asphalt – on roads, which reduces the actual noise made by the tyre, while also absorbing the sound due to the tyre-road interaction. This type of surface is already in use in many European cities, including Paris, Amsterdam and Madrid.
Another example – on railways this time – is vibration dampers on wheels and rails. Elastically embedded track systems can be found in the tram networks of The Hague and Antwerp. Studies have shown that such vibration dampers, as well as the use of asphalt roads, can reduce noise levels by between 2 and 5 decibels, which is significant.
TECHNOLOGIST What solutions might be put in place in the future?
INES LOPEZ ARTEAGA Two examples are rubberised roads and green roofs. Rubberised road surfaces – which are different from silent asphalt – make use of rubber-like materials to bind stones, and are more flexible than regular road surfaces. Such roads already exist in Japan and the US. Promising tests have been run in Europe, for example on the ring-road around Brussels.
Another solution is the greening of roofs and walls. Materials suitable for growing plants soften the urban environment through sound absorption, keep sound levels low, and have the potential to significantly reduce traffic noise in urban courtyards. Greening is not yet applied as a means of noise reduction, but is in my view a promising future concept.
TECHNOLOGIST Will electric vehicles help reduce noise?
INES LOPEZ ARTEAGA Only somewhat. With their relatively quiet motors, electric vehicles can help reduce noise in city centres, where speed limits are low. This won’t be the case, though, for people living close to ring-roads and highways. At speeds above 50 km/h, tyres rather than engines are the main source of noise. And above 130 km/h, aerodynamic noise dominates.
TECHNOLOGIST How did you become involved with environmental noise?
INES LOPEZ ARTEAGA During my PhD research on railway noise, I developed technical solutions to design quieter wheels for trains. Later I extended my research to understanding noise generation by car tyres.
Initially, my aim was to help manufacturers provide better products. But with both train wheels and car tyres, you quickly realise that improvements to the tracks and road surfaces are needed to achieve large-scale noise reduction. So, I started to interact with the infrastructure and government stakeholders, and became interested in the overall challenge of environmental noise.
Interview by Conor Purcell @ConorPPurcell
“Long-term exposure to transportation noise increases the risk of hypertension, ischaemic heart diseases, tinnitus, cognitive impairment in children and sleep disturbance”, says Ines Lopez Arteaga, a researcher at the Eindhoven University of Technology. Her work has been published in an array of peer-reviewed journals, including the Journal of Sound and Vibration and Applied Acoustics.
As part of Europe’s NoisePlanet project, French researchers have developed NoiseCapture, a mobile application that geolocates sound pollution. You download it for free (only on Android) and record the noises you hear. The collected data will help create an interactive map of sound pollution around the world.
How decibels work
The human ear responds logarithmically, rather than linearly, to sound intensity. The wide range of human hearing is therefore best described with the logarithmic decibel scale rather than with absolute values in pascals.
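A minimal sketch of the conversion, using the standard 20-micropascal reference pressure for sound pressure level in air (the example pressures are round figures for illustration):

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, roughly the threshold of hearing

def sound_pressure_level(pressure_pa):
    """Sound pressure level in decibels (dB SPL) from pressure in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

# The logarithm compresses a huge range: a hundredfold jump in pressure
# is only a 40 dB step on the scale.
print(round(sound_pressure_level(0.002)))  # 40
print(round(sound_pressure_level(0.2)))    # 80
```

The same maths explains the interview's figures: every doubling of sound pressure adds only about 6 dB, so the 20 dB gap between a conversation (about 60 dB) and heavy traffic (above 65 dB at a distance, more close up) hides a tenfold difference in pressure.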