Speeding Tickets, Insurance Points, and Standards for Strategic Information
- Joe
- Sep 11, 2020
- 5 min read
Updated: Sep 14, 2020
Last week, I got my first speeding ticket. I was descending altitude (a rare occurrence for a born-and-raised Texan) and lost track of my speedometer. By the time I suspected my speed, there were lights behind me to confirm it.
Now, I’m not super concerned about the cost of the ticket itself. What I want to avoid is the nefarious points on my driver’s license. These points act like a signal to insurance companies: “this guy can’t drive down a hill! He’s high risk! Charge him more!” The points on my driver’s license are a piece of strategic information, or information that informs action. In this case, they suggest that Progressive needs to bump up their rates.
When making decisions using strategic information, I like to think of three standards to hold it to: precision, objectivity, and reliability. A piece of strategic information is precise if the measurement used to generate it captures the phenomenon of interest directly. It’s objective if two people observing the same measurement reach the same conclusion. And it’s reliable if the same “ground truth” is always measured the same way by the same test.
As it turns out, driver’s license points meet only one of these standards. Before I explain, let me define the key pieces of the analogy. The “measurement” is the number of points on a driver’s record, or the change in that number after a ticket. The “measurer” is the insurance company, who uses this information to make a decision. The phenomenon being “measured” is a particular driver’s behavior on the road.
Let’s start with precision. The point of a driver’s license point is to measure how risky a driver you are. There are many reasons it doesn’t do this directly. For one, you can decrease the number of points in several ways. My ticket offered a point reduction for on-time payment of the fee. You can take an online driving school. There are plenty of traffic attorneys out there who will help you knock out those points for a small fee. The fact that you can easily change the number of points on your license without changing the way you drive points to its lack of precision in measuring driving safety.
Points are also far from reliable. Two different officers may observe the same driving behavior, with one assigning a more lenient ticket (or none at all!) than the other. From the perspective of the insurer, these drivers look different, even if their true risk level is the same.
Where points score well is their objectivity. When two different assessors from the same insurance company view the same number of points, there isn’t a lot of room for interpretation. This is especially true if there is a numerical rule for assessing risk based on those points.
In this particular case, the information is numeric: the number of points on your license. Strategic information, like any piece of data, can take many forms, and the form will imply something about how the standards are met.
In the numerical case, you can almost always assume strong objectivity and weak precision. A single number will rarely carry, with any nuance, the information needed to make a decision, because we often ask one number to tell us too much. But that same simplicity buys objectivity: there isn’t much room for interpretation.
There are plenty of other sources of information, and all of them are (alone) imperfect. If you collect information from fixed surveys, each response is precise (everybody gets the same questions asking about a particular thing), unreliable (people express the same thought differently), and non-objective (they require interpretation). Interviews may offer more reliability (you can tailor questions to the interviewee, to get clearer answers) but are even less objective (variation in questions introduces randomness). This list isn’t at all exhaustive – there are tons of different information collection methods. All of them have tradeoffs.
Since the imperfections of raw information are unavoidable, you need ways to refine it. Generally, you do this via a two-step process: combining different sources of information and then interpreting the result. The combination step can happen across different types of information (a numerical score and a survey) or across multiple pieces of the same type (like several pieces of numerical data).
When combining different types of information, the process is rarely straightforward. There’s almost never a single, predeveloped way to combine the information and quickly interpret it. Instead, you have to reason through the information you have to paint a clearer picture of whatever you’re trying to measure. This process requires you to question and justify your assumptions carefully.
In cases when different pieces of the same kind of information are combined, the method will be based on the type of information being used. A simple example would be a statistical model: given several numeric inputs, you may want to predict one more. For interviews, it may involve a conversation among those who have conducted the interviews.
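To make the statistical-model case concrete, here’s a minimal sketch of combining several numeric signals into one prediction with ordinary least squares. Everything here is a hypothetical illustration, including the feature names, the numbers, and the `fit_ols` helper; it is not how any insurer actually models risk.

```python
# Combine several numeric inputs into one prediction via ordinary
# least squares, using only the standard library.

def fit_ols(rows, targets):
    """Fit y ~ X by solving the normal equations (X'X) b = (X'y)."""
    X = [[1.0] + list(r) for r in rows]   # prepend an intercept column
    n, k = len(X), len(X[0])
    # Build X'X and X'y.
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    xty = [sum(X[i][a] * targets[i] for i in range(n)) for a in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, k))) / xtx[r][r]
    return beta

def predict(beta, row):
    return beta[0] + sum(b * x for b, x in zip(beta[1:], row))

# Hypothetical data: (license points, years driving) -> annual claim cost.
features = [(0, 10), (2, 3), (4, 1), (1, 8), (3, 2), (0, 15)]
cost = [400, 900, 1500, 550, 1200, 350]
beta = fit_ols(features, cost)
print(predict(beta, (2, 5)))  # a single, combined risk estimate
```

The output is a single number, so the combined information inherits the strong objectivity of the numeric form while (hopefully) improving on the precision of any one input.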
You should conceptualize the combination-interpretation process as synthesizing different kinds of information into a single piece of information. At the end, you can judge this combined, processed information for its own precision, reliability, and objectivity. If you need to, you can then combine it with other information and interpret again. In algorithms vocabulary, this is a “recursive” process.
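The recursive structure can be sketched directly: treat each piece of information as either a raw value or a combination of sub-pieces plus an interpretation function, and reduce the tree from the leaves up. The weights and scores below are made-up illustrations.

```python
# A sketch of the recursive combine-and-interpret process: interpreting
# a combined piece of information may itself require first interpreting
# its sub-pieces.

def interpret(node):
    """Recursively reduce an information tree to a single value."""
    if isinstance(node, (int, float)):   # raw numeric information
        return node
    combine, children = node             # (interpretation fn, sub-pieces)
    return combine([interpret(c) for c in children])

# Example: average two survey scores, then blend the result with a
# standalone numeric score.
tree = (lambda xs: 0.5 * xs[0] + 0.5 * xs[1],
        [7.0,
         (lambda xs: sum(xs) / len(xs), [6.0, 8.0])])
print(interpret(tree))  # prints 7.0
```

At every level the intermediate result is itself a piece of information that can be judged for precision, reliability, and objectivity before being combined again.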
An example of how powerful this can be is data science. Put simply, data science takes numeric data and simple non-numeric data, combines them using advanced statistical methods, and returns information that improves on the weak precision and reliability offered by any single piece of data. Here, the combination step is just gathering the different pieces of information, and the interpretation is done by the statistical model. The reliability of a statistical estimator is directly measurable via its bias and variance. Its precision depends on the dependent variable you’re predicting: if you’re trying to measure next month’s demand, then a model that outputs a predicted demand will be precise. Its objectivity is strong due to the numeric form the output takes.
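The claim that bias and variance are directly measurable can be demonstrated with a toy simulation: fix a known “ground truth,” repeatedly estimate it from small samples, and see how the estimates scatter. The specific numbers and the sample-mean estimator are illustrative assumptions.

```python
import random

# Measure an estimator's reliability through its bias and variance.
# Toy setup: the true mean is known, and we estimate it many times
# from fresh 25-draw samples.

random.seed(0)
TRUE_MEAN = 10.0

def estimate(sample):
    return sum(sample) / len(sample)   # the sample-mean estimator

estimates = []
for _ in range(2000):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(25)]
    estimates.append(estimate(sample))

mean_est = sum(estimates) / len(estimates)
bias = mean_est - TRUE_MEAN
variance = sum((e - mean_est) ** 2 for e in estimates) / len(estimates)
print(f"bias ~= {bias:.3f}, variance ~= {variance:.3f}")
```

For this estimator, theory says the bias should be near zero and the variance near 0.16 (the population variance of 4 divided by the sample size of 25); an estimator whose simulated variance is large is unreliable in exactly the sense used above, since the same ground truth produces widely differing measurements.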
Let me bring it all together by returning to the problem of measuring driving safety. In reality, insurance companies don’t rely only on the points on your record to price your risk. They look at things about you, like your age, your previous driving history, and the city you live in. More recently, they’ve started combining and interpreting information through devices like the Progressive Snapshot, which take varied information directly from your driving to inform your rate. This easily improves on reliability, since the measured quantity has no variation from person to person: all the data collected is about the same insured driver. On the backend, they use a statistical model to improve on precision. Together, this gives them more valuable information about your driving habits.
To conclude, I want to connect this to uncertainty, a topic I’ve written about several times. At its core, this framework helps you identify three different kinds of uncertainty. In some cases, unreliability and non-objectivity can be quantified. Imprecision generally can’t. Beyond telling you how to improve your information, these standards can help you identify your blind spots. In your decision-making, you should use this knowledge to develop nuanced alternatives that hedge against the particular uncertainty you face.