Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.
This week skews a bit more toward “basic research” than consumer applications. Machine learning powers plenty of features users benefit from directly, but it’s also transformative in fields like seismology and biology, where enormous backlogs of data can be leveraged to train AI models or mined as raw material for insights.
Inside earthshakers
We’re surrounded by natural phenomena that we don’t really understand — obviously we know where earthquakes and storms come from, but how exactly do they propagate? What secondary effects are there if you cross-reference different measurements? How far ahead can these things be predicted?
A number of recently published research projects have used machine learning to attempt to better understand or predict these phenomena. With decades of data available to draw from, there are insights to be gained across the board this way, provided the seismologists, meteorologists and geologists interested in them can secure the funding and expertise to pursue them.
The most recent discovery, made by researchers at Los Alamos National Laboratory, combines a new source of data with ML to document previously unobserved behavior along faults during “slow quakes.” Using synthetic aperture radar captured from orbit, which can see through cloud cover and at night to give accurate, regular imaging of the shape of the ground, the team was able to directly observe “rupture propagation” for the first time, along the North Anatolian Fault in Turkey.
“The deep-learning approach we developed makes it possible to automatically detect the small and transient deformation that occurs on faults with unprecedented resolution, paving the way for a systematic study of the interplay between slow and regular earthquakes, at a global scale,” said Los Alamos geophysicist Bertrand Rouet-Leduc.
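For the curious, here’s roughly what that kind of detector can look like in code. This is a minimal sketch in PyTorch under broad assumptions, not the Los Alamos team’s actual model: the patch size, layer shapes and the DeformationDetector name are all invented for illustration.

```python
# Illustrative sketch only: a small convolutional classifier that flags
# candidate deformation in InSAR-style image patches. The real model,
# data pipeline and labels are not reproduced here; every name and
# hyperparameter below is an assumption.
import torch
import torch.nn as nn

class DeformationDetector(nn.Module):
    """Binary classifier: does this ground-displacement patch contain
    transient fault deformation (1) or only noise (0)?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # logit for "deformation present"
        )

    def forward(self, x):
        return self.head(self.features(x))

# Usage with fake data: a batch of eight 64x64 single-channel patches,
# e.g. downsampled line-of-sight displacement maps from an interferogram.
model = DeformationDetector()
patches = torch.randn(8, 1, 64, 64)
probs = torch.sigmoid(model(patches))   # per-patch probability of deformation
print(probs.shape)                      # torch.Size([8, 1])
```

In practice a model like this would be trained on labeled patches spanning years of satellite acquisitions, which is exactly where those enormous data backlogs pay off.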
Another effort, which has been ongoing for a few years now at Stanford, helps Earth science researcher Mostafa Mousavi deal with the signal-to-noise problem in seismic data. Poring over data being analyzed by old software for the billionth time one day, he felt there had to be a better way, and he has spent years working on various methods. The most recent is a way of teasing out evidence of tiny earthquakes that went unnoticed but still left a record in the data.
The “Earthquake Transformer” (named after a machine-learning technique, not the robots) was trained on years of hand-labeled seismographic data. When tested on readings collected during Japan’s magnitude 6.6 Tottori earthquake, it isolated 21,092 separate events, more than twice what people had found in their original inspection — and using data from less than half of the stations that recorded the quake.
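For the similarly curious, here’s a toy sketch in PyTorch of the general idea: self-attention applied along a seismic waveform, scoring every time step for a possible event. It is not the published Earthquake Transformer architecture; the ToyQuakePicker name, channel layout and every dimension are assumptions for illustration.

```python
# Illustrative sketch only: a transformer encoder over a 3-component
# seismogram producing a per-timestep detection probability. A real
# picker would also add positional encodings and phase-arrival heads.
import torch
import torch.nn as nn

class ToyQuakePicker(nn.Module):
    def __init__(self, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        # Embed each time step of a 3-channel (Z/N/E) waveform.
        self.embed = nn.Linear(3, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)   # per-timestep detection logit

    def forward(self, waveform):            # (batch, time, 3)
        h = self.encoder(self.embed(waveform))
        return torch.sigmoid(self.head(h))  # (batch, time, 1) in [0, 1]

model = ToyQuakePicker()
trace = torch.randn(1, 600, 3)              # e.g. 6 s of 100 Hz, 3 channels
detection = model(trace)                    # probability an event is underway
print(detection.shape)                      # torch.Size([1, 600, 1])
```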
The tool won’t predict earthquakes on its own, but a better understanding of the true and full nature of the phenomenon means we might be able to do so by other means. “By improving our ability to detect and locate these very small earthquakes, we can get a clearer view of how earthquakes interact or spread out along the fault, how they get started, even how they stop,” said co-author Gregory Beroza.