When officers suspect a driver of being under the influence, they may subject that driver to a battery of tests to determine whether there is enough evidence to make an arrest.
Field sobriety tests are often among the first tests a driver encounters. But what are they, and what sets standardized tests apart from non-standardized ones?
Why did standardization happen?
VeryWell Mind takes a look at the standardization of field sobriety tests. Before standardization, field sobriety tests were judged based solely on what the testing officer felt was correct. This left a huge margin of error and room for personal bias to sway an officer’s opinion, which in turn led to a disproportionate number of arrests for certain groups of people compared to others.
To help combat this issue, standardization created a unified rubric for use across the entire country. Instead of a test being judged by one person and that person’s idea of what a pass or a fail looks like, officers now had to judge tests against this rubric.
The ongoing problem of bias
However, standardization still leaves room for interpretation and officer bias. Field sobriety tests are not an accurate tool of scientific measurement, and courts know not to treat them as such. This makes them a weak piece of evidence in court, often used only to substantiate other evidence.
Still, field sobriety tests – and especially their standardized counterparts – hold a strong place in an officer’s arsenal. No one should underestimate their potential when encountering them.