“A system that works” is categorically different from “a system that works reliably”.
Or to be more precise – a system that sometimes works is categorically different from one that reliably works.
The former kind is a “wish-me-luck” system; the latter kind is a reliable system.
A wish-me-luck system:
- Sometimes works and sometimes doesn’t
- Produces errors in unpredictable ways
- Produces errors that are often not acceptable
- Lacks the error tolerance needed to work reliably
A reliable system:
- Works consistently and reliably
- Produces more predictable errors than unpredictable ones
- Produces more acceptable errors than unacceptable ones
- Tolerates errors consistently and predictably
Between the two is a huge engineering gap.
The former earns the praise, while the latter keeps the lights on.
I’m afraid that both the public and the media tend to focus far too much on the wish-me-luck system’s wow factor, and pay far too little attention to the reliable system’s boring but much more important engineering challenges and achievements.
Just like in design or management, AI’s real achievements are often neither obvious to see nor easy to comprehend.
Perhaps the bigger achievement we’ve made in attempting to build driverless cars is not that we’ve got a car that sometimes works, but that we’ve built behind-the-scenes AI capabilities that could be applied to a wide range of other use cases, which may have nothing to do with either driving or cars.
Perhaps the bigger achievement we’ve made in building a chat-based system is not that we’ve got a chatbot that sometimes works, but that we’ve built behind-the-scenes AI capabilities that could be applied to a wide range of other use cases, which may have nothing to do with either chatting or being human.
A wide range of AI capabilities have already been widely applied in many digital products, even though most of them operate behind the scenes. In fact, you probably couldn’t get through a single day without using some kind of AI – avoiding it is much harder than you think.
AI really works mostly when it’s well engineered and built into reliable systems.
Since when have we stopped appreciating the “boring” engineering and started loving the “exciting” innovation?
Since when have we stopped appreciating the well-engineered, reliable systems and started applauding the opportunistic, wish-me-luck systems that only sometimes work?
That’s not a new problem.
In a very similar vein, the authors of The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most describe how we tend to devalue the work that actually keeps the world going, while becoming obsessed with the “shiny new things” symbolized by the very notion of innovation.
Understanding AI and putting AI to use go hand in hand.
A good start to that understanding could be A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going, where it says:
“Much of what is published about AI in the popular press is ill-informed or irrelevant. Most of it is garbage, from a technical point of view, however entertaining it might be.”
(Wooldridge, Michael. A Brief History of Artificial Intelligence, p. 3. Flatiron Books, Kindle Edition.)
Yup. Maybe AI’s real enemies are the irresponsible popular press, occasional human stupidity, and a momentary lack of curiosity.