The complexity of collecting data from large earthquakes
Why are seismic events so hard to measure?

The United States Geological Survey (USGS) works against the clock every time a large earthquake strikes. This storymap explains USGS's data collection process and the implications of having incomplete, unreliable, and uncertain earthquake information for disaster response.
Beyond earthquake magnitude

Every time there is an earthquake, the first things you will hear from the news are its epicenter and magnitude. These metrics represent the location where the earthquake starts and its total size.
For the magnitude, the Richter Scale (ML) is the one most people have heard of, but in practice it is not commonly used anymore; it has been replaced by more accurate metrics such as the Moment Magnitude (Mw).
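To make the difference concrete, the Moment Magnitude is computed from the seismic moment, a physical measure of the rupture size, rather than from the amplitude recorded on a particular instrument. The small sketch below uses one common form of the Hanks–Kanamori relation; the example moment value is illustrative only.

```python
import math

def moment_magnitude(m0_newton_meters: float) -> float:
    """Convert seismic moment (N*m) to moment magnitude Mw
    using a common form of the Hanks-Kanamori relation."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Example: a seismic moment of about 4e19 N*m corresponds to roughly Mw 7.0
print(round(moment_magnitude(4.0e19), 1))
```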

These metrics are commonly used because they are an efficient way to communicate the expected extent of the damage. For instance, if an earthquake strikes with an Mw of 7 and an epicenter near a large city, significant infrastructure damage is highly probable.
These metrics, however, are not the only data recorded by scientists. The USGS, for instance, collects data from seismometers to develop a map of how ground acceleration is distributed right after a large earthquake. This information is represented in a map called the ShakeMap.
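As a rough illustration of how this data can be reached programmatically, the sketch below queries the USGS earthquake catalog web service for recent large events; the time window and magnitude threshold are arbitrary examples, and the ShakeMap itself is distributed as a separate product linked from each event's page.

```python
import json
import urllib.request

# Query the USGS FDSN event web service for recent large earthquakes.
# The parameters here (time window, minimum magnitude) are arbitrary examples.
url = (
    "https://earthquake.usgs.gov/fdsnws/event/1/query"
    "?format=geojson&starttime=2020-01-01&endtime=2020-02-01&minmagnitude=6"
)

with urllib.request.urlopen(url) as response:
    catalog = json.load(response)

# Each feature is one event; print its time (ms since epoch), magnitude, and location.
for feature in catalog["features"]:
    props = feature["properties"]
    print(props["time"], props["mag"], props["place"])
```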
The ShakeMap
Data sources & uncertainty
Map evolution
So how much does the new data affect the ShakeMap? You tell me. This animated graph shows the multiple versions of the ShakeMap released for the Indios Earthquake.
Notice that there are major changes to the peak ground acceleration (PGA) distribution during the first month after the event. The first major change is the relocation of the epicenter in version 3; this happens because data from more seismometers is included to refine the exact coordinates. The second major change is the inclusion of finite-rupture data in version 8, which replaces the point-source representation of the earthquake with the more complex geometry of the actual rupture (see the rectangle around the epicenter after version 8).
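A rough sketch of how such version-to-version changes could be quantified is shown below. The grids are invented placeholders; real PGA grids would be downloaded from the USGS page for the event and resampled onto a common grid before comparing.

```python
import numpy as np

# Placeholder PGA grids (in %g) standing in for two ShakeMap versions that
# have been resampled onto the same latitude/longitude grid. Real grids
# would come from the USGS products for the Indios Earthquake.
rng = np.random.default_rng(seed=0)
pga_v3 = rng.uniform(2.0, 20.0, size=(50, 50))
pga_v8 = pga_v3 + rng.normal(0.0, 2.0, size=(50, 50))

# Cell-by-cell change between the two versions.
delta = pga_v8 - pga_v3

print(f"Largest increase in PGA: {delta.max():.2f} %g")
print(f"Largest decrease in PGA: {delta.min():.2f} %g")
print(f"Mean absolute change:    {np.abs(delta).mean():.2f} %g")
```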
The evolution of the uncertainty differs from that of the PGA. For this metric, there are two major stages in which the uncertainty decreases:
- As more data is collected from seismometers and online surveys, the regions with more sensors and a larger population show a decrease in uncertainty in later versions (see the evolution from version 1 to 7).
- Once finite-rupture data is included, the uncertainty no longer depends on the distance to the epicenter (see the uncertainty evolution from version 7 to 8, and the sketch after this list).
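The sketch below illustrates the second point under simplified assumptions: before the finite-rupture data arrives, the uncertainty is treated as growing with distance from a point epicenter, while afterwards it is roughly uniform once the rupture geometry is known. All numbers and functional forms are invented for illustration.

```python
import numpy as np

# Distances from the epicenter (km) at which we evaluate the uncertainty.
distances = np.linspace(0, 200, 5)

# Early versions (point source): assume uncertainty grows with epicentral distance.
sigma_point_source = 0.5 + 0.004 * distances           # invented values

# Later versions (finite rupture): assume a roughly uniform uncertainty
# once the rupture geometry constrains the ground-motion estimates.
sigma_finite_rupture = np.full_like(distances, 0.45)    # invented values

for d, s1, s2 in zip(distances, sigma_point_source, sigma_finite_rupture):
    print(f"{d:6.1f} km  point-source sigma={s1:.2f}  finite-rupture sigma={s2:.2f}")
```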
Implications
The evolution of the ShakeMap has significant implications for the workflows of local emergency managers and of entities interested in modeling building damage. Emergency managers, for example, estimate the areas of extensive damage by looking at the locations of highest ground motion (i.e., the maximum PGA in the ShakeMap). If the uncertainty in the ShakeMap is not considered, personnel deployment for rescue missions can be affected, increasing the response time.

In terms of earthquake modeling, most damage assessment methodologies depend heavily on the PGA. This metric is combined with structural parameters and soil properties to determine the expected level of damage to a building. If the uncertainty in the ShakeMap is considered here, researchers can improve their estimates of building damage, increasing the reliability of infrastructure recovery models.
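To show how the PGA feeds into such damage estimates, the sketch below evaluates a lognormal fragility curve, a common way of relating ground motion to the probability of reaching a damage state. The median and dispersion values are invented for a hypothetical building class; in practice they would come from structure-specific fragility models.

```python
from math import erf, log, sqrt

def damage_probability(pga_g: float, median_g: float, beta: float) -> float:
    """Probability of reaching a damage state given PGA (in g),
    using a lognormal fragility curve with the given median and dispersion."""
    z = (log(pga_g) - log(median_g)) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Invented fragility parameters for a hypothetical building class.
median_g, beta = 0.35, 0.6

# Evaluating at two plausible PGA readings shows how sensitive the damage
# estimate is to the ground-motion value taken from the ShakeMap.
for pga in (0.20, 0.40):
    print(f"PGA = {pga:.2f} g -> P(damage) = {damage_probability(pga, median_g, beta):.2f}")
```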
As we have seen, a myriad of data is published after an earthquake, and all of it is helpful to multiple public and private entities. Understanding how the uncertainty in ground motions evolves can therefore improve the response to a disaster and eventually make our cities safer.