My distrust of code and programmers fuels scepticism over driverless vehicles. It’s up there with my hatred of automated checkouts. “Unexpected person in driving area”.
It’s very hard not to be impressed by the demonstration videos of cars steering around cones, braking safely in the face of danger and taking life-threatening decisions away from the driver. But with such cars being permitted on British roads from 2015, I’m fairly convinced that this media face is a charade, behind which lurk some serious problems, not just technological but societal.
First of all, driverless is a bit of a misnomer. Modern cars already do a lot of thinking for us: antilock braking, traction control, stability programming, and so forth. The cars that Google have been testing, which have racked up nearly a million miles without incident, are limited to 25mph because that’s as fast as the (butt-ugly) sensors mounted around the car can process the barrage of signals per second overlaid on its map data.
They claim that’s an unparalleled safety record, one which far exceeds a human driver’s. But let’s remember that it’s only at 25mph. And the cars don’t work in snow or heavy rain because the sensors get confused. If conventional drivers were limited to 25mph everywhere they went, and there was brilliant sunshine 24×7, there’d probably be fewer crashes (well, aside from the fact nobody would be concentrating because it’d be as boring as an episode of Last of the Summer Wine at half speed).
A drivers’ utopia
Proponents of autonomous cars are quick to point out their benefits:
Car-to-car communication permits far safer driving conditions because vehicles can advertise their intentions to surrounding receivers so they can alter their own trajectories accordingly. That’s great in theory, as long as every vehicle, truck, motorbike, bicycle and pedestrian is equipped with it.
And, heaven forbid, an enthusiast might decide to load Linux into their car to make it do something a bit different. Or somebody maliciously breaks into the car-to-car network. But of course, that can’t happen because software is perfect and everyone uses uncrackable passwords.
Eventual removal of traffic lights because cars can autonomously decide who goes first at intersections and thread among themselves without a central control system. Traffic lights often irritate me so this seems an attractive pipe dream. Though I’m of the opinion that most traffic lights could be removed if roundabouts were installed more thoughtfully to ensure that traffic flowed evenly from all entrances.
Like sat-nag, traffic lights make us lazier, worse drivers. Light is green = go, without looking first or considering that your exit may be blocked, or that you might block other drivers. As a psychology side note, no central control means drivers have to consider other road users as humans, not just cars. The irony is we don’t need driverless vehicles to make this happen, and proof can be found in the infamous Ethiopia intersection without traffic lights video.
Reduction in collisions because a network of sensors attached to computers is faster to react than humans. Oh the joy of instantaneous reactions to keep us all safe. Except there’s the notion of too fast: knee-jerk decisions which are detrimental to overall safety (see below).
Cheaper insurance because companies pay out less in claims. Ha! Anyone who believes this is simply delusional. When I was younger, my dad said that, in general, the older I became, the cheaper my insurance would be because I was a lower-risk driver. I don’t drive big, sporty or modified cars and would easily be given approval for driving Miss Daisy around, but in more than twenty years of claim-free motoring, my insurance is over twice what it was when I was a hot-headed eighteen-year-old.
The press claimed a national database of insurance and tax details and blanket number-plate surveillance would drive insurance premiums down because it’s impossible to put an unlicensed vehicle on the road. I’m not holding my breath for cashback.
In the case of driverless, when it does go wrong and there’s a calamity — and there will be — who’s at fault? “It wasn’t me driving, it was the car,” is a fabulous defence. So instead of passing on my insurance details, I should let the other driver have the name of the guy who wrote the shitty software?
In stark contrast to the actions of world governments, let’s assume the basic tenet of driverless machines is that life is ultimately important. Sidestepping the fact that we all saw how that panned out for Robocop, if cars are continually scanning for hazards, there’ll be a class of people — let’s call them Jackasses — who will jump in front of them at the last second to prove it, either for the buzz or the attention.
If cars react to one another and a bozo leaps off the pavement, the guy’s mate videoing across the street will capture fifty cars simultaneously swerving out of each other’s way in a metallic ballet of processing power, and it’ll be on YouTube and social media within the hour attracting a gazillion hits, going viral and causing others to imitate the stunt. Woe betide the poor guy caught in the middle who hasn’t upgraded his car yet.
And lest we forget: technology makes us lazy. Look at how someone who relies on sat-nag falls apart when that technology breaks, or isn’t present in a car they’re driving. Not a pretty sight. Now imagine that, after several years of having the car do the work, the driver suddenly has to make a decision. Fatal.
Software has bugs. Fact
I write software. I know how hard it is to trap every conceivable error, and make every conceivable parameter foolproof to user input or external stimulus (e.g. network or power outage). Even with the best software testing suites in the world, code in any medium-to-complex system that is evolving cannot, cannot, cannot, cannot be free of bugs. Ever.
In the infinitesimally unlikely event that software works flawlessly in v1, adding a feature for v2 will almost certainly break something. On the flip side, in the incredibly likely scenario that v1 is buggy, v2 fixes those bugs and introduces more, along with new features that introduce further bugs. Like any piece of hardware, software eventually wears out.
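To make that concrete, here’s a toy sketch (invented for illustration, nothing to do with any real automotive codebase) of how a v2 “fix” trades one bug for another:

```python
def clamp_speed(speed_mph, limit=25):
    """v1: clamp a requested speed to the legal limit."""
    return min(speed_mph, limit)

# v1 passes every test we thought to write...
assert clamp_speed(30) == 25
assert clamp_speed(20) == 20
# ...but a glitchy sensor feeding a negative reading sails through:
assert clamp_speed(-5) == -5  # bug: no lower bound

def clamp_speed_v2(speed_mph, limit=25):
    """v2: fix the lower bound -- but now a dropped reading (None)
    raises a TypeError instead of being handled gracefully."""
    return max(0, min(speed_mph, limit))

assert clamp_speed_v2(-5) == 0  # old bug fixed...
# clamp_speed_v2(None)          # ...new bug: crashes at 70mph
```

Three lines of code, two versions, and still a latent defect in each: scale that up to a few million lines of driving software and the point makes itself.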
On top of this, there’s the unavoidable truth that software costs money to develop and test. Companies who win contracts to write software are either the lowest bidder or have the best salespeople, promising features that won’t be delivered on time, leading to corners being cut. Software is usually written by contractors who don’t fully understand the problems they’re being asked to solve. Worse, they’re often coding to a specification written by people who perfectly grasp the real-world problem but don’t understand software. And caught in the middle is the end user: you and me.
Put it this way, when was the last time you used a computer and it did what you wanted, 100% of the time, without a single hitch and without you getting frustrated?
Exceptions and corner cases
I’m of the opinion that the true merit of a piece of software is not in how it handles routine stuff under perfect conditions, it’s how it copes with exceptions. Driverless software will fail if a person — perhaps a member of the police — stands at the side of (or in) the road, waving to warn drivers of a problem ahead. A driverless car sees pixels in the shape of a person. It routes around the gesticulating obstacle and carries on. A human would take heed and not do that.
Another example where failure is guaranteed: the other day, my car and about forty others were following an open-top truck in the middle lane of the motorway, stuffed full of refuse. Every one of the drivers behind it knew this vehicle’s nature instinctively and, further, that every piece of litter that fluttered off the top of the truck could be safely ignored. My car was peppered with scraps of food, hit by a plastic bottle and under attack from more than one plastic bag until I managed to overtake.
Driverless cars, on the other hand, see missiles. They are programmed to spot anything out of the ordinary because their underlying programming ethos is to assume everything is a potential problem and react. That is, every nanosecond is guilty until the absence of anomalies implies innocence. Real drivers do the opposite: they assume everything is pretty much OK while staying aware of potential problems, i.e. they accept innocence over guilt.
To a computer, a fluttering plastic bag, whipped by the turbulence of other road users and the inclement weather of the day, is an obstacle. If all forty cars behind the dump truck were driverless, they would all (needlessly) try to avoid the bag as it flitted side-to-side, up and down across the carriageway.
What if only half of the cars were driverless? They would do likewise, leaving the remaining half of real drivers dodging vehicles that had no reason to be taking evasive action in the first place.
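The two opposing defaults can be caricatured in a few lines of Python. This is purely illustrative: the object format, labels and confidence thresholds are invented, and no real driving stack is remotely this crude.

```python
def robot_react(obj):
    """Guilty until proven innocent: take evasive action unless the
    object is positively identified as harmless, with high confidence."""
    if obj["label"] == "harmless" and obj["confidence"] > 0.99:
        return "carry on"
    return "swerve"

def human_react(obj):
    """Innocent until proven guilty: carry on unless the object is
    positively identified as a genuine hazard."""
    if obj["label"] == "hazard" and obj["confidence"] > 0.5:
        return "swerve"
    return "carry on"

# A plastic bag flitting across the carriageway classifies poorly:
bag = {"label": "harmless", "confidence": 0.6}
print(robot_react(bag))  # "swerve": forty cars take evasive action
print(human_react(bag))  # "carry on": it's just litter off the truck
```

The asymmetry is the whole problem: the cautious default only needs one ambiguous sensor reading to send a convoy of cars ballet-dancing across three lanes.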
All this boils down to one question: why bother? Why waste trillions of [insert your currency here] vainly trying to perfect software to drive people more safely from A to B when it would be better to spend a fraction of that on improving Internet connection speeds so fewer people had to commute at all? Isn’t that better for our environment? And aren’t governments and politicians supposedly working towards reducing the human impact on the planet?
If you believe the rhetoric, that’s precisely what they want you to think. But consider the lost revenue from fuel tax, road tax, tolls, VAT on car purchases, insurance premium tax, not to mention the lost stock options and kickbacks for politicians on the boards at tech firms that develop driverless software, and oil companies that grease the wheels of election campaigns. Cynical? Moi?
Maybe I’m missing the big picture, but I don’t see the need for driverless cars. If you allow the car to drive you to a meeting in London, what will you do exactly on the journey? Work? Doesn’t sound like much fun to me.