How can an earthquake have a negative magnitude? It’s got to do with ground motion and distance, and how sensitive seismographs have become.
When Charles Richter used a Wood-Anderson torsion seismograph in California in the early 1930s, the main source of earthquakes was around 100km away. An earthquake that showed 10 microns of peak ground motion on that seismograph at that distance was defined as a magnitude 1.0 earthquake.
This image shows an earthquake magnitude nomogram. You need to determine how far away the earthquake is from your seismograph (by using the time difference between the Primary and Secondary energy waves) and note the peak amount of ground displacement recorded on the seismograph.
Then you can draw a line between the two outer logarithmic axes to determine the magnitude by noting where the line crosses the centre axis.
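The distance step can be sketched in code. This is a rough approximation, not the article's nomogram: the P and S wave speeds below are typical crustal values I am assuming (about 6 km/s and 3.5 km/s), which give roughly 8.4 km of distance per second of S-P delay.

```python
# Estimate epicentral distance from the S-P arrival-time difference.
# Assumed wave speeds (typical crustal values, not from the article):
VP_KM_S = 6.0  # P (primary) wave speed
VS_KM_S = 3.5  # S (secondary) wave speed

def distance_km(sp_delay_s: float) -> float:
    """Distance implied by an S-P delay, assuming straight-line travel
    at constant speeds: delay = d/Vs - d/Vp, solved for d."""
    return sp_delay_s / (1.0 / VS_KM_S - 1.0 / VP_KM_S)

print(distance_km(1.0))   # about 8.4 km per second of delay
print(distance_km(11.9))  # roughly 100 km
```

With these assumed speeds, an S wave arriving about 12 seconds after the P wave puts the earthquake near the 100km reference distance.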
Magnitude Zero was suggested to be around the limit of human perceptibility, or an earthquake at 100km range that generated about 1 micron (1µm, or 1 thousandth of a millimetre) of ground displacement.
The magnitude scale is also logarithmic, where each unit is 10 times bigger than the last. Think of the magnitude as a ten-to-the-power-of number, e.g. 10^1 = 10, 10^0 = 1, 10^-1 = 0.1, etc.
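At the 100km reference distance the definitions above reduce to a single logarithm: magnitude is the base-10 log of the peak displacement in microns, since 1 micron defines magnitude zero. A minimal sketch:

```python
import math

# From the definitions above: at 100 km, 1 micron of peak ground
# displacement defines magnitude 0, and each whole magnitude unit
# is a factor of 10 in displacement.
REFERENCE_AMPLITUDE_UM = 1.0

def magnitude_at_100km(amplitude_um: float) -> float:
    """Local magnitude for a trace recorded 100 km from the epicentre."""
    return math.log10(amplitude_um / REFERENCE_AMPLITUDE_UM)

print(magnitude_at_100km(10.0))  # 1.0, Richter's original definition
print(magnitude_at_100km(1.0))   # 0.0, the perceptibility limit
print(magnitude_at_100km(0.1))   # -1.0, a negative magnitude
```

A tenth of a micron at 100km already gives a negative magnitude, which is the whole puzzle of the opening question.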
So what if that 1 micron of peak ground motion came from a closer earthquake? If it was only 50km away, drawing a line from 1µm to 50km gives about magnitude -0.3, and if the epicentre was at the location of the seismograph (0km) it would be about magnitude -1.7.
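Those three readings can be turned into a toy distance correction. The anchor values come straight from the text (0 at 100km, about -0.3 at 50km, about -1.7 at 0km); the straight-line interpolation between them is my simplification, not the nomogram's actual curve.

```python
import math

# Distance corrections taken from the nomogram readings quoted above;
# linear interpolation between them is a rough sketch only.
CORRECTIONS = [(0.0, -1.7), (50.0, -0.3), (100.0, 0.0)]

def magnitude(amplitude_um: float, distance_km: float) -> float:
    """Magnitude from peak displacement (microns) and distance (km),
    valid only over the 0-100 km range the article discusses."""
    for (d0, c0), (d1, c1) in zip(CORRECTIONS, CORRECTIONS[1:]):
        if d0 <= distance_km <= d1:
            frac = (distance_km - d0) / (d1 - d0)
            return math.log10(amplitude_um) + c0 + frac * (c1 - c0)
    raise ValueError("distance outside the 0-100 km sketch range")

print(magnitude(1.0, 100.0))  # 0.0
print(magnitude(1.0, 50.0))   # about -0.3
print(magnitude(1.0, 0.0))    # about -1.7
```

The same 1 micron of shaking maps to a smaller magnitude the closer the source is, because less energy was needed to produce it.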
Modern seismographs are almost 1000 times more sensitive than those original models, which means we can detect earthquakes about 3 orders of magnitude smaller at the same distance.
You can watch a one-minute video on this topic on this TikTok channel.