EARTH OBSERVATION
Using sound to model the world
by Adam Zewe for MIT News
Boston MA (SPX) Nov 02, 2022

MIT researchers have developed a machine-learning technique that accurately captures and models the underlying acoustics of a scene from only a limited number of sound recordings. In this image, a sound emitter is marked by a red dot. The colors show the sound volume if a listener were to stand at different locations - yellow is louder and blue is quieter.

Imagine the booming chords from a pipe organ echoing through the cavernous sanctuary of a massive, stone cathedral.

The sound a cathedral-goer hears is shaped by many factors: the location of the organ, where the listener is standing, whether columns, pews, or other obstacles stand between them, the materials of the walls, and the locations of windows and doorways. Hearing a sound can help someone envision their environment.

Researchers at MIT and the MIT-IBM Watson AI Lab are exploring the use of spatial acoustic information to help machines better envision their environments, too. They developed a machine-learning model that can capture how any sound in a room will propagate through the space, enabling the model to simulate what a listener would hear at different locations.

By accurately modeling the acoustics of a scene, the system can learn the underlying 3D geometry of a room from sound recordings. The researchers can use the acoustic information their system captures to build accurate visual renderings of a room, similarly to how humans use sound when estimating the properties of their physical environment.

In addition to its potential applications in virtual and augmented reality, this technique could help artificial-intelligence agents develop a richer understanding of the world around them. For instance, by modeling the acoustic properties of the sound in its environment, an underwater exploration robot could sense things that are farther away than it could with vision alone, says Yilun Du, a grad student in the Department of Electrical Engineering and Computer Science (EECS) and co-author of a paper describing the model.

"Most researchers have only focused on modeling vision so far. But as humans, we have multimodal perception. Not only is vision important, sound is also important. I think this work opens up an exciting research direction on better utilizing sound to model the world," Du says.

Joining Du on the paper are lead author Andrew Luo, a grad student at Carnegie Mellon University (CMU); Michael J. Tarr, the Kavcic-Moura Professor of Cognitive and Brain Science at CMU; and senior authors Joshua B. Tenenbaum, professor in MIT's Department of Brain and Cognitive Sciences and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science and a member of CSAIL; and Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the Conference on Neural Information Processing Systems.

Sound and vision
In computer vision research, a type of machine-learning model called an implicit neural representation model has been used to generate smooth, continuous reconstructions of 3D scenes from images. These models utilize neural networks, which contain layers of interconnected nodes, or neurons, that process data to complete a task.
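The core idea of an implicit neural representation can be sketched in a few lines: a small network maps a continuous coordinate directly to a value, with no pixel grid or mesh in between. The weights below are random placeholders rather than a trained model, so the output values are meaningless; the point is only the coordinate-in, value-out structure.

```python
import numpy as np

# A minimal coordinate network: maps a continuous (x, y) location to a value.
# Weights are random stand-ins; a real implicit representation is trained.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def implicit_field(xy):
    """Evaluate the field at any continuous coordinate (no discrete grid)."""
    h = np.tanh(xy @ W1 + b1)   # hidden layer of interconnected "neurons"
    return (h @ W2 + b2)[0]     # scalar output, e.g. density or brightness

# The representation is continuous: any point in the scene can be queried.
v = implicit_field(np.array([0.25, 0.75]))
```

Because the network is a smooth function of its input, querying between training samples yields smooth, continuous reconstructions, which is what makes this family of models attractive for 3D scenes.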

The MIT researchers employed the same type of model to capture how sound travels continuously through a scene.

But they found that vision models benefit from a property known as photometric consistency, which does not apply to sound: if one looks at the same object from two different locations, it looks roughly the same. With sound, a change of location can make what one hears completely different because of obstacles, distance, and other factors, which makes predicting audio very difficult.

The researchers overcame this problem by incorporating two properties of acoustics into their model: the reciprocal nature of sound and the influence of local geometric features.

Sound is reciprocal, which means that if the source of a sound and a listener swap positions, what the person hears is unchanged. Additionally, what one hears in a particular area is heavily influenced by local features, such as an obstacle between the listener and the source of the sound.
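One simple way to bake reciprocity into a model, sketched here as an assumption rather than the authors' exact method, is to feed the network an order-invariant encoding of the two positions, so that swapping emitter and listener cannot change the output.

```python
import numpy as np

def symmetric_features(emitter, listener):
    """Order-invariant encoding: identical if emitter and listener swap."""
    a, b = np.asarray(emitter, float), np.asarray(listener, float)
    return np.concatenate([a + b, np.abs(a - b)])

def acoustic_response(emitter, listener):
    # Stand-in for a trained network: any deterministic function of the
    # symmetric features automatically satisfies acoustic reciprocity.
    return float(np.sum(np.sin(symmetric_features(emitter, listener))))

# Swapping the two positions leaves the predicted response unchanged.
forward = acoustic_response([0.0, 0.0], [3.0, 4.0])
reverse = acoustic_response([3.0, 4.0], [0.0, 0.0])
```

Enforcing the symmetry architecturally, rather than hoping the network learns it from data, is what lets such a model respect the physics by construction.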

To incorporate these two factors into their model, called a neural acoustic field (NAF), they augment the neural network with a grid that captures objects and architectural features in the scene, like doorways or walls. The model randomly samples points on that grid to learn the features at specific locations.
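A grid of local features can be queried at continuous locations by interpolating neighboring cells. The sketch below is a hypothetical illustration of that mechanism, with a random, untrained feature grid; the NAF's actual grid and sampling scheme are described in the paper.

```python
import numpy as np

# Hypothetical learnable 2D grid of feature vectors covering the room:
# 8x8 cells, each holding a 4-dimensional feature (random, untrained here).
GRID = np.random.default_rng(1).normal(size=(8, 8, 4))

def local_features(x, y, grid=GRID):
    """Bilinearly interpolate grid features at continuous (x, y) in [0, 1]."""
    gx = x * (grid.shape[0] - 1)
    gy = y * (grid.shape[1] - 1)
    x0, y0 = int(np.floor(gx)), int(np.floor(gy))
    x1 = min(x0 + 1, grid.shape[0] - 1)
    y1 = min(y0 + 1, grid.shape[1] - 1)
    fx, fy = gx - x0, gy - y0
    return ((1 - fx) * (1 - fy) * grid[x0, y0]
            + fx * (1 - fy) * grid[x1, y0]
            + (1 - fx) * fy * grid[x0, y1]
            + fx * fy * grid[x1, y1])

# A query near a doorway picks up the features of the cells around it.
feats = local_features(0.3, 0.7)
```

Conditioning the network on these interpolated features gives it direct access to nearby geometry, which is exactly the locality the researchers cite.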

"If you imagine standing near a doorway, what most strongly affects what you hear is the presence of that doorway, not necessarily geometric features far away from you on the other side of the room. We found this information enables better generalization than a simple fully connected network," Luo says.

From predicting sounds to visualizing scenes
Researchers can feed the NAF visual information about a scene and a few spectrograms that show what a piece of audio would sound like when the emitter and listener are located at target locations around the room. Then the model predicts what that audio would sound like if the listener moves to any point in the scene.

The NAF outputs an impulse response, which captures how a sound should change as it propagates through the scene. The researchers then apply this impulse response to different sounds to hear how those sounds should change as a person walks through a room.
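Applying an impulse response to a sound is a convolution: the dry signal is combined with the response so that every sample of the input spawns the room's pattern of echoes. A minimal sketch, assuming a toy one-dimensional response rather than the model's predicted one:

```python
import numpy as np

def render_at_position(dry_sound, impulse_response):
    """Convolve a dry signal with a (predicted) room impulse response."""
    return np.convolve(dry_sound, impulse_response)

dry = np.array([1.0, 0.0, 0.0, 0.0])   # a single click
ir = np.array([1.0, 0.5, 0.25])        # toy response: decaying echoes
wet = render_at_position(dry, ir)      # the click followed by its echoes
```

Swapping in the impulse response predicted for a different listener position rerenders the same sound as it would be heard there, which is how the researchers "walk" a sound through the room.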

For instance, if a song is playing from a speaker in the center of a room, their model would show how that sound gets louder as a person approaches the speaker and then becomes muffled as they walk out into an adjacent hallway.

When the researchers compared their technique to other methods that model acoustic information, it generated more accurate sound models in every case. And because it learned local geometric information, their model was able to generalize to new locations in a scene much better than other methods.

Moreover, they found that applying the acoustic information their model learns to a computer vision model can lead to a better visual reconstruction of the scene.

"When you only have a sparse set of views, using these acoustic features enables you to capture boundaries more sharply, for instance. And maybe this is because to accurately render the acoustics of a scene, you have to capture the underlying 3D geometry of that scene," Du says.

The researchers plan to continue enhancing the model so it can generalize to brand-new scenes. They also want to apply the technique to more complex impulse responses and larger scenes, such as entire buildings or even a town or city.

"This new technique might open up new opportunities to create a multimodal immersive experience in the metaverse application," adds Gan.

"My group has done a lot of work on using machine-learning methods to accelerate acoustic simulation or model the acoustics of real-world scenes. This paper by Chuang Gan and his co-authors is clearly a major step forward in this direction," says Dinesh Manocha, the Paul Chrisman Iribe Professor of Computer Science and Electrical and Computer Engineering at the University of Maryland, who was not involved with this work. "In particular, this paper introduces a nice implicit representation that can capture how sound can propagate in real-world scenes by modeling it using a linear time-invariant system. This work can have many applications in AR/VR as well as real-world scene understanding."

This work is supported, in part, by the MIT-IBM Watson AI Lab and the Tianqiao and Chrissy Chen Institute.

Research Report: "Learning Neural Acoustic Fields"


Related Links
MIT-IBM Watson AI Lab
Computer Science and Artificial Intelligence Laboratory

