Recent controversies over bias and ethics in machine learning make it clear that math and data can only take us so far. The fake news debacle, and the efforts of top researchers in natural language processing to address it, show that sometimes even defining the problem you’re trying to solve is the hardest part. We need human intelligence to decide how and when to use machine intelligence, and the more sophisticated our uses of machine intelligence become, the more critically we need human intelligence to ensure it’s deployed sensibly and safely.
It’s time we started exalting critical thinking skills the way we do math skills. While we can entrust machines with mathematical calculations, we can’t entrust them with critical thinking, nor will we be able to any time soon. Reasoning about moral issues and identifying which types of problems are solvable with math are skills unique to humans.
When math and data fall short
Researchers recently claimed to have found evidence that criminality can be predicted from facial features. In “Automated Inference on Criminality using Face Images,” Xiaolin Wu and Xi Zhang describe how they trained classifiers, using a range of machine learning techniques, that could distinguish photos of criminals from photos of non-criminals with a high level of accuracy. The paper has been roundly attacked as a flawed and irresponsible use of machine learning. Their math was presumably fine; the idea itself wasn’t.
The authors put far too much trust in their algorithms and failed to examine the assumptions they brought to the task, chief among them the assumption that the criminal justice system convicts without bias, which completely colored their reading of the results. Failing to question assumptions like these can have very real consequences, and no amount of machine intelligence can compensate for it.
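To see this failure mode in miniature, consider the sketch below (in Python; the scenario and every number in it are invented for illustration, not taken from the paper). Critics of the paper noted that the “criminal” and “non-criminal” photos came from different kinds of sources; a classifier can post impressive accuracy while learning nothing but such a collection artifact:

```python
# Illustrative only: how a dataset artifact can make a meaningless
# classifier look accurate. All data here is synthetic and invented.
import random

random.seed(0)

def make_example(label):
    """Suppose 90% of one photo source shows smiling subjects and 90%
    of the other does not, purely because of how the photos were taken."""
    smiling = random.random() < (0.1 if label == 1 else 0.9)
    return smiling, label

data = [make_example(random.randint(0, 1)) for _ in range(10_000)]

# A one-rule "classifier" that never looks at anything but the artifact:
# predict "criminal" if and only if the subject is not smiling.
correct = sum((not smiling) == bool(label) for smiling, label in data)
print(f"accuracy: {correct / len(data):.1%}")  # roughly 90%, and it means nothing
```

High accuracy on a biased dataset tells you about the dataset, not about criminality. That is precisely the question the authors never asked.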
A triumph in machine intelligence
Perhaps one of the most impressive triumphs of machine intelligence in recent times was AlphaGo’s victory over world Go champion Lee Sedol back in March. The system, from researchers at Google’s DeepMind, made incredibly sophisticated use of multiple machine learning techniques to achieve this feat.
These techniques included learning from millions of past games, playing against itself, and using advanced statistical techniques to come up with shortcuts that eliminated the need to evaluate every possible sequence of moves (of which there are more than there are atoms in the universe). The game was won using machine intelligence, although it was sheer human ingenuity that designed the system that did it.
This may sound like tautological reasoning, but when we solve a problem with machine intelligence, it simply means the problem was solvable with machine intelligence; it doesn’t mean we created human-level intelligence. If a task is achievable with data and math, it’s a machine intelligence task. The piece that’s not solvable with data and math is designing that system in the first place.
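To make the statistical shortcut described above concrete, here is a toy sketch (in Python, entirely my own illustration; nothing below comes from DeepMind’s actual system). It uses the simple game of Nim to show the core idea behind Monte Carlo evaluation: instead of enumerating every branch of the game tree, sample random playouts and rank moves by their estimated win rate.

```python
# Illustrative only: a toy Monte Carlo move evaluator, not AlphaGo.
# Rather than searching the whole game tree, we estimate each move's
# value by sampling random playouts to the end of the game.
import random

def legal_moves(pile):
    """Toy Nim: take 1-3 stones from the pile; taking the last stone wins."""
    return [n for n in (1, 2, 3) if n <= pile]

def random_playout(pile, our_turn):
    """Play random moves until the pile is empty; True if we took the last stone."""
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        our_turn = not our_turn
    return not our_turn  # whoever just moved emptied the pile and wins

def best_move(pile, playouts=2000):
    """Rank each legal move by its win rate over random playouts."""
    scores = {}
    for move in legal_moves(pile):
        wins = sum(random_playout(pile - move, our_turn=False)
                   for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get), scores

move, scores = best_move(10)
print("estimated win rates:", scores)   # statistical estimates, not exhaustive search
print("Monte Carlo pick: take", move)
```

AlphaGo’s real pipeline combined deep neural networks with a far more sophisticated Monte Carlo tree search, but the underlying trade, swapping exhaustive enumeration for statistical estimation, is the same.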
The integral role of humans
If the people designing AlphaGo had failed in their critical thinking, the result would have been a poorly performing Go-playing machine. But with something like predicting criminality, the result of poor critical thinking abilities is potentially disastrous — certainly for any individual falsely identified as a criminal by the system.
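What would catching such a failure look like in practice? Here is a minimal sketch (in Python; the predictions, labels and group names are all invented for the example) of one of the simplest audits you can run: comparing a classifier’s false-positive rate across demographic groups.

```python
# Illustrative fairness audit: compare false-positive rates across groups.
# All data below is invented; in practice these would be your model's
# predictions and the ground-truth labels.

def false_positive_rate(y_true, y_pred):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

def fpr_by_group(y_true, y_pred, groups):
    """Compute each group's false-positive rate; large gaps signal bias."""
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates

# Hypothetical output of a classifier (1 = flagged by the system):
y_true = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [0, 0, 1, 1, 0, 1, 1, 1, 1, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(fpr_by_group(y_true, y_pred, groups))
# {'a': 0.25, 'b': 1.0}: innocent members of group "b" are flagged
# four times as often as innocent members of group "a".
```

A large gap between groups means innocent members of one group are being falsely flagged far more often, and no single overall accuracy number will reveal it.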
Fears around machine intelligence are, at their core, fears of becoming irrelevant. Once the machines are able to do everything, what will the world need us for? But this is simply a misunderstanding of what machine intelligence is.
Machine intelligence systems are just tools, designed by humans to serve the interests of humans. Machines may win in a game of Go, but people will always be the ones choosing the game.
We need to stop focusing on how impressive these tools are (yes, many are truly impressive!) and focus more on ensuring they are designed well and serve human interests ethically. Is there bias in my training data? What are the repercussions of a false positive? Answering those questions is all about human intelligence — as utterly indispensable now as it has ever been.