Recently I seem to have been reading a lot about systems that can be exploited easily.
1. A big one is the police. An increasingly common problem for US police is something called Swatting.
I’d not heard of swatting until a few weeks ago, when I read this most incredible article about an online troll who took his trolling to the real world.
He finds out the address of a young woman, then calls the police pretending to have taken hostages at that location.
The police arrive in a hurry, with a SWAT team (hence ‘swatting’). Doors get kicked in. The intent, the troll claims, is to frighten. But sometimes, innocent people die.
Swatting shares some similarities with framing someone for a crime, but the difference is that the police do little to no checking before they react.
2. Pizza delivery is the same technique at the other end of the seriousness spectrum. The pizza delivery location doesn’t check the address you give them either.
The seriousness comes when you find a powerful system that is forced (or chooses) to react very quickly without doing much checking.
3. National governments responding to terrorist attacks are perhaps another example. In the aftermath of the recent terror attacks, dust had barely settled on Paris when French jets took off over Syria, bombing… things.
The attackers wanted that and got it.
The lesson is that we should be careful when we design powerful systems that have to respond quickly without doing much checking.
4. This makes me think of driverless cars. These will be programmed to react quickly. They’ll (sometimes) be going fast, meaning their reactions could be powerful.
Driverless cars will avoid pedestrians. It is possible they will be extremely good at doing so.
But the more reliable they are at dodging loose humans, the easier the system will be to exploit. I can imagine a future where you can step off the kerb without even looking and be sure the cars will avoid you.
Sounds nice, right? It would be a relief. But I can think of two risks.
The low-level risk is that pedestrians frequently step out, cars frequently screech to a halt, traffic gets worse, and eventually the two systems are segregated and pedestrians lose a lot of access.
The bigger risk is when the cars are travelling a bit faster, but a pedestrian can still trust them to react predictably.
If a prankster knew that cars would always swerve once a pedestrian came inside a certain distance, he could pick the right moments to step onto the road and cause five crashes on the way to work.
Much has been written about how driverless cars will have to choose between the lives of their passengers and those of other potential victims. If they have a single best strategy they always follow in answering that question, they have a weakness.
There could be ways of behaving on or near the road that reliably make cars swerve into each other or off the road.
One solution I can think of is very low speed limits. Another is programming a random element into the systems so they don’t always react the same way. If they sometimes go left and sometimes go right, the system will be a little harder to predict and a lot less tempting to exploit.
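To make the randomness idea concrete, here is a toy sketch (the function names and logic are my own illustration, not anyone's real control software). When more than one evasive maneuver is safe, the car picks one at random, so a watching prankster can never be sure which way it will go:

```python
import random

def choose_evasion(left_clear: bool, right_clear: bool,
                   rng: random.Random = random) -> str:
    """Pick an evasive maneuver for a sudden obstacle ahead.

    When both directions are safe, choose randomly so an observer
    cannot predict which way the car will swerve.
    """
    options = []
    if left_clear:
        options.append("swerve_left")
    if right_clear:
        options.append("swerve_right")
    if not options:
        # No safe swerve available: fall back to hard braking.
        return "brake"
    return rng.choice(options)
```

The safety logic stays deterministic (never swerve into an occupied lane, always brake if nothing else is safe); only the choice among equally safe options is randomized. That small change is enough to break the "always swerves left at distance X" pattern an exploiter would need.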