Technology needs to be more like anti-lock brake systems in cars, which do exactly what we need them to do, when we need them, without us realizing they are even present…We don’t have to mess with it. We just say here’s what we want. When technology reaches that level of invisibility in our lives, that’s our ultimate goal. It vanishes into our lives. It says: ‘you don’t have to do the work, I’ll do the work.’
That was Google X leader Dr. Astro Teller speaking at TechCrunch Disrupt about that “wonderful technology moment” when artificial intelligence makes decisions for us invisibly in the background. Google X is the moonshot division of Google pioneering driverless cars, which makes Teller’s comment especially interesting.
Technology can indeed be wonderful when a decision is obviously the right one. Imagine this horrible scenario: you must brake or hit a person crossing the road at the wrong time. The computer recognizes this and brakes for you, regardless of what your foot is doing. The car stops in time and nobody is harmed; that is a good outcome.
But what if braking suddenly causes the car behind you, carrying a family of four, to crash into you? It’s reasonable to assume that sometime in the future, the computer in your car will be able to know this too.
How should the computer weigh that trade-off? Brake and cause a crash that puts all five of you at risk? Not brake and hit the one person?
This might seem like a contrived, unlikely scenario, but it’s a possible one among many.
What if a crash were inevitable and the computer had to choose between swerving right into an SUV or swerving left into a sedan? The chance of a fatality is lower with the SUV, so perhaps steering right is the better decision. But how would you feel as the SUV driver, knowing you’ll always be the chosen victim in that trade-off through no fault of your own?
A different scenario: go right and hit a motorcyclist wearing a helmet, or go left and hit one without. Go right and the helmet might save the rider. Go left and the rider surely dies. In isolation, the life-maximizing decision might be to go right. But should the rider who wore a helmet be punished while the one who neglected to wear one is rewarded?
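The life-maximizing calculus behind these scenarios can be sketched as naive expected-harm minimization. This is a toy illustration, not any real vehicle’s logic; the function names and probabilities are invented for the sake of the example:

```python
def expected_harm(fatality_prob: float, people_at_risk: int) -> float:
    """Expected number of fatalities for one possible action."""
    return fatality_prob * people_at_risk

def choose_action(options: dict) -> str:
    """Pick the action whose expected harm is lowest."""
    return min(options, key=lambda action: expected_harm(*options[action]))

# The helmet dilemma: swerving right toward the helmeted rider carries a
# lower (hypothetical) fatality probability than swerving left.
options = {
    "swerve_right": (0.4, 1),  # helmeted rider: helmet might save them
    "swerve_left": (0.9, 1),   # unhelmeted rider: near-certain fatality
}
print(choose_action(options))  # → swerve_right
```

The discomfort in the scenarios above is precisely that this one-line `min` hides a moral judgment: whoever supplies the numbers decides who gets hit.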
Thought-provoking pieces in WIRED and Popular Science explore this topic in greater depth and posit more such moral dilemmas.
Computers will only act as we tell them to. As we march toward a future where those same computers increasingly and invisibly make decisions for us, we must carefully consider the moral weight of those instructions. Let’s hope we are good teachers.