{"id":3889,"date":"2025-02-06T18:44:46","date_gmt":"2025-02-06T18:44:46","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2025\/02\/06\/when-the-earth-talks-ai-listens\/"},"modified":"2025-02-06T18:44:46","modified_gmt":"2025-02-06T18:44:46","slug":"when-the-earth-talks-ai-listens","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2025\/02\/06\/when-the-earth-talks-ai-listens\/","title":{"rendered":"When the Earth Talks, AI Listens"},"content":{"rendered":"<div>\n<p>AI built for speech is now decoding the language of earthquakes.<\/p>\n<p>A team of researchers from the Earth and environmental sciences division at Los Alamos National Laboratory repurposed Meta\u2019s Wav2Vec-2.0, an AI model designed for speech recognition, to analyze seismic signals from Hawaii\u2019s 2018 K\u012blauea volcano collapse.<\/p>\n<p>Their findings, published in Nature Communications, suggest that faults emit distinct signals as they shift \u2014 patterns that AI can now track in real time. While this doesn\u2019t mean AI can predict earthquakes, the study marks an important step toward understanding how faults behave before a slip event.<\/p>\n<p>\u201cSeismic records are acoustic measurements of waves passing through the solid Earth,\u201d said Christopher Johnson, one of the study\u2019s lead researchers. 
\u201cFrom a signal processing perspective, many similar techniques are applied for both audio and seismic waveform analysis.\u201d<\/p>\n<figure id=\"attachment_77635\" aria-describedby=\"caption-attachment-77635\" class=\"wp-caption alignleft\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-77635 size-full\" src=\"https:\/\/blogs.nvidia.com\/wp-content\/uploads\/2025\/02\/Picture2.jpg\" alt=\"\" width=\"443\" height=\"312\"><figcaption id=\"caption-attachment-77635\" class=\"wp-caption-text\">The AI model was tested using data from the 2018 collapse of Hawaii\u2019s K\u012blauea caldera, which triggered months of earthquakes and reshaped the volcanic landscape. The lava lake in Halema\u02bbuma\u02bbu during the 2020-2021 eruption (USGS\/F. Trusdell) is a striking reminder of K\u012blauea\u2019s ongoing activity.<\/figcaption><\/figure>\n<p>Big earthquakes don\u2019t just shake the ground \u2014 they upend economies. In the past five years, quakes in Japan, Turkey and California have caused tens of billions of dollars in damage and displaced millions of people.<\/p>\n<p>That\u2019s where AI comes in. 
Led by Johnson, along with Kun Wang and Paul Johnson, the Los Alamos team tested whether speech-recognition AI could make sense of fault movements \u2014 deciphering the tremors like words in a sentence.<\/p>\n<p>To test their approach, the team used data from the dramatic 2018 collapse of Hawaii\u2019s K\u012blauea caldera, which triggered a series of earthquakes over three months.<\/p>\n<p>The AI analyzed seismic waveforms and mapped them to real-time ground movement, revealing that faults might \u201cspeak\u201d in patterns resembling human speech.<\/p>\n<p>Speech recognition models like Wav2Vec-2.0 are well-suited to this task because they excel at identifying complex patterns in time-series data \u2014 whether human speech or the Earth\u2019s tremors.<\/p>\n<p>The AI model outperformed traditional methods such as gradient-boosted trees. Gradient-boosted trees build multiple decision trees in sequence, refining predictions by correcting previous errors at each step, but they struggle with highly variable, continuous signals like seismic waveforms. Deep learning models like Wav2Vec-2.0, by contrast, excel at identifying the underlying patterns in such data.<\/p>\n<h2>How AI Was Trained to Listen to the Earth<\/h2>\n<p>Unlike previous machine learning models that required manually labeled training data, the researchers used a self-supervised learning approach to train Wav2Vec-2.0. The model was pretrained on continuous seismic waveforms and then fine-tuned on real-world data from K\u012blauea\u2019s collapse sequence.<\/p>\n<p>NVIDIA accelerated computing played a crucial role in processing vast amounts of seismic waveform data in parallel. 
High-performance NVIDIA GPUs accelerated training, enabling the AI to efficiently extract meaningful patterns from continuous seismic signals.<\/p>\n<h2>What\u2019s Still Missing: Can AI Predict Earthquakes?<\/h2>\n<p>While the AI showed promise in tracking real-time fault shifts, it was less effective at forecasting future displacement. Attempts to train the model for near-future predictions \u2014 essentially, asking it to anticipate a slip event before it happens \u2014 yielded inconclusive results.<\/p>\n<p>\u201cWe need to expand the training data to include continuous data from other seismic networks that contain more variations in naturally occurring and anthropogenic signals,\u201d Johnson explained.<\/p>\n<h2>A Step Toward Smarter Seismic Monitoring<\/h2>\n<p>Despite the challenges in forecasting, the results mark an intriguing advance in earthquake research. The study suggests that AI models designed for speech recognition may be uniquely suited to interpreting the intricate, shifting signals faults generate over time.<\/p>\n<p>\u201cThis research, as applied to tectonic fault systems, is still in its infancy,\u201d Johnson said. \u201cThe study is more analogous to data from laboratory experiments than large earthquake fault zones, which have much longer recurrence intervals. Extending these efforts to real-world forecasting will require further model development with physics-based constraints.\u201d<\/p>\n<p>So, no, speech-based AI models aren\u2019t predicting earthquakes yet. 
But this research suggests they could one day \u2014 if scientists can teach them to listen more carefully.<\/p>\n<p><strong><i>Read the full paper, \u201c<\/i><a target=\"_blank\" href=\"https:\/\/www.nature.com\/articles\/s41467-025-55994-9\" rel=\"noopener\"><i>Automatic Speech Recognition Predicts Contemporaneous Earthquake Fault Displacement<\/i><\/a><i>,\u201d to dive deeper into the science behind this groundbreaking research.<\/i><\/strong><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/blogs.nvidia.com\/blog\/earth-ai\/<\/p>\n","protected":false},"author":0,"featured_media":3890,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3889"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=3889"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/3889\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/3890"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=3889"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=3889"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=3889"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}