This post asks if we are making a mistake in the way we anticipate the future of robots and intelligent machines. It is all based on my perceptions and understanding of how far our digital assistants/nemeses have got so far. Please comment below if you know of progress I appear not to be aware of!
I’ve been reading a lot about robots, artificial intelligence and machine learning. I am trying to weigh up what it all means. Will jobs disappear? Whose jobs? Who stays in work, and what do they do? Will we even need to work in future?
One machine I am definitely excited about is the new best player at chess. It dominates because we demanded that it teach itself. Within a few hours it beat one of the top systems in the world. That is exciting and also terrifying.
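As a loose illustration of what “teach itself” means, here is a sketch (a hypothetical toy, not AlphaZero’s neural-network reinforcement learning): a program is given only the rules of a miniature Nim and derives perfect play by exhaustively playing out its own games.

```python
from functools import lru_cache

# Miniature Nim: players alternately take 1 or 2 stones; whoever takes
# the last stone wins. The program is told only these rules.
@lru_cache(maxsize=None)
def wins(stones):
    # The player to move wins if some legal move leaves the opponent
    # in a position from which they cannot win.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

# By exploring every continuation against itself, it discovers the
# classic result: multiples of 3 are lost for the player to move.
losing_positions = [n for n in range(1, 10) if not wins(n)]  # [3, 6, 9]
```

The point is only that strategy here is derived from the rules alone, with no human examples; the real systems do this at vastly greater scale with learned evaluation instead of brute enumeration.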
And yet. Some robots are still utter rubbish.
The Jetsons’ robot maid is nowhere to be seen in my life. There is little evidence of robots coming to dominate in many of the domains people insisted they would.
Voice recognition, for example, remains underdeveloped, despite years of focus. And yet the machines can turn around and defeat us at Go, the one thing we thought we could edge them on for another few years.
It seems to me we are bad judges of what intelligent machines will be good at.
Often, the machines are better at things we consider hard than things we consider easy. One of the first things machines came to dominate at was chess (a game for the human intellectual elite). They remain truly appalling at soccer (a game for everybody).
We assume things children could do will be easy for robots. And we scream with laughter when they find them hard. Later, we are amazed when machines can easily outstrip us at things only the smartest adults can do. This paradox needs resolving.
Why are they smartest at hard things and dumbest at easy things?
Are we benchmarking things wrong? Perhaps we over-emphasise how smart the adult human is; how capable of operating effectively in the abstract world. And underemphasise how physically capable the average adult human is in the material world.
Maybe what we see as hard is just abstract; and what we see as easy involves manipulating the infinite variability of the real world.
From where I work I can watch two turtle doves improving their nest. One flies out, finds a stick or a bit of grass, and brings it back. The other takes it and works it into the existing structure with a wiggle of its head. I doubt we could program two drones to do that, even with a decade and a multi-million-dollar budget.
How different are we from the animals? Is it possible the animal parts of our brain are actually far more advanced than the human parts of our brain? Our software has had aeons to work on things like navigating 3D space, recognising and manipulating never-before-seen objects, hearing and identifying sounds. But only a few dozen millennia to work on the higher human plane of logic and abstraction.
Computers operate in that abstract world and are – mostly – killing us at it. Arithmetic is their bread and butter. Accounting, logic and other kinds of rule following that defined human intelligence until quite recently are firmly within their grasp.
Yet machines’ attempts to navigate the physical world are mostly poor. If you consider how refined those animal circuits are, is it any wonder that machines still can’t do these animal things? If what we can do easily is actually very hard, it might be less surprising that our first iterations of self-driving cars smash into giant objects right in front of them. And we might approach the task of training robots to interact with dynamic real-world space with more humility.
ROBOTS TAKING OUR JOBS
If we have misconstrued the extent of human skill in various domains, could that lead to confusion about what tasks can easily be automated? Everyone seems to think truck driving is due for immediate automation. What if that is because of a sense that truckies aren’t smart?
Many people assume a chess-playing computer must also be able to do everything a person of everyday intelligence can do. Here’s Tesla CEO Elon Musk, speaking at the company’s annual earnings call on 7 February 2018.
“I am pretty excited about how much progress we are making on the neural net front… It is also one of those things where it is kind of exponential. … It doesn’t seem much progress, doesn’t seem much progress and then suddenly: Wow!
“That has been my observation generally with AI stuff. And if you look at what Google’s DeepMind did with AlphaGo. It went from not being able to beat even a pretty good Go player to suddenly it could beat the European Champion. Then it could beat the world champion. Then it could thrash the world champion. Then it could thrash everyone simultaneously.
“Then they had AlphaZero which could thrash AlphaGo! And just learning by itself was better than all the human experts.
“It is going to kind of be like that for self-driving. It will seem like this is a lame driver, this is a lame driver, this is a pretty good driver … [then] holy cow this driver is good!”
It seems to follow logically, but it might not.
We value abstract cognition because it is rare in humans. But we don’t value what is profoundly and abundantly available to us – skill in moving through the real world. That’s why the stock analyst gets paid more than the taxi driver.
Yet traders are already being replaced with algorithms. Taxi drivers – not yet. That could be a warning signal, and our model of intelligence could be impeding us from seeing it.
The smartest people applying neural nets to self-driving vehicles say they are still a long way off.
“Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology. For those of us who have been working on the technology for a long time, we’re going to tell you the issue is still really hard, as the systems are as complex as ever.”
And that’s just the driving part. There was a great post on Marginal Revolution last week about the complexity of a truck driver’s job.
“I wonder how many of the people making predictions about the future of truck drivers have ever ridden with one to see what they do?
One of the big failings of high-level analyses of future trends is that in general they either ignore or seriously underestimate the complexity of the job at a detailed level. Lots of jobs look simple or rote from a think tank or government office, but turn out to be quite complex when you dive into the details.
For example, truck drivers don’t just drive trucks. They also secure loads, including determining what to load first and last and how to tie it all down securely. They act as agents for the trucking company. They verify that what they are picking up is what is on the manifest. They are the early warning system for vehicle maintenance. They deal with the government and others at weighing stations. When sleeping in the cab, they act as security for the load. If the vehicle breaks down, they set up road flares and contact authorities. If the vehicle doesn’t handle correctly, the driver has to stop and analyze what’s wrong – blown tire, shifting load, whatever.
I’ve been working in automation for 20 years. When you see how hard it is to simply digitize a paper process inside a single plant (often a multi-year project), you start to roll your eyes at ivory tower claims of entire industries being totally transformed by automation in a few years.”
Perhaps this argument is upside down. Perhaps we chose not to make computers good at the material world. Perhaps we trained computers to do abstract things because only a few people can do them. To get the benefit of training a computer we must set it on tasks where human skill is rare. It is not that they couldn’t do what we can do, just that we haven’t put in the effort.
FINDING THE PATTERNS TO RECOGNISE
I suspect the problem is not so much in asking computers to process the data produced by manual tasks as in getting them to identify it as data.
In an abstract world, data is always in the right place and fully visible. In a spreadsheet, the cell you need is exactly where you left it. And if it isn’t, nobody expects the spreadsheet to figure that out and fix it. In the physical world, information might be harder to find. Where’s the label on this box? Where’s the face on this human? Where’s the road under this snow? etc.
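To make the contrast concrete, here is a toy sketch (the cell address, label text and pattern are all invented for illustration): in the abstract world, retrieval is one exact lookup; in the physical world, the work is in finding the datum at all.

```python
import re

# Abstract world: the datum sits at a known address, so retrieval
# is a single exact lookup with no searching.
spreadsheet = {("B", 2): 42.0}
value = spreadsheet[("B", 2)]

# "Physical" world: the same fact is buried in noise, so the hard
# part is locating it before it can be used as data at all.
label_scan = "smudge... wt:42.0kg ...barcode glare"
match = re.search(r"wt:([\d.]+)kg", label_scan)
found = float(match.group(1)) if match else None
```

And even this flatters the physical case: a regular expression assumes the label is legible text in a known format, which a real camera pointed at a real box does not guarantee.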
We already know how you can get robots to take on jobs in the material world. You need to standardise the inputs. Robots do a wonderful job welding things that come down a production line. They do a great job driving trains in wholly separated systems. They do a perfect job of driving lifts up and down lift-wells, etc. In these cases we give the material world the standardised appearance of an abstract one. Take away the production line, the protected rails and the lift-well, and those systems are all at sea.
Neural nets will of course be much smarter than the computers that drive lifts. They will be able to parse information from the material world. Self-driving cars can use cameras, radar, lidar and 360-degree vision to gain advantages over us in sensing. These systems should be able to learn fast.
But I am not yet convinced we can apply the lessons from an abstract world which has only 64 different locations, to a real world which is infinitely more complex. Assuming those lessons will cross over is the exact kind of intellectual trap a cognitively limited species would fall into.
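To put rough numbers on that gap (the frame size is an invented, deliberately modest example): chess offers 64 locations, while even a small greyscale camera frame admits more distinct inputs than could ever be enumerated.

```python
import math

# The abstract world of the chessboard: 64 locations.
board_squares = 64

# A deliberately modest sensor: a 100x100 greyscale frame with 256
# levels per pixel. The number of distinct possible frames is
# 256**10000 — we can only sensibly count its decimal digits.
pixels = 100 * 100
decimal_digits = int(pixels * math.log10(256)) + 1
```

The count of distinct frames has over twenty thousand digits; the real world a car moves through is richer still, which is why lessons from the 64-square world may not simply carry over.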
13 thoughts on “The world’s smartest robot, falling down the stairs”
Their ability to adapt, and the rate at which they can do it, leave us unable to compete in the preceding domains of aptitude.
“Maybe what we see as hard is just abstract; and what we see as easy involves manipulating the infinite variability of the real world.”
We discover things theoretically long before we can achieve them mechanically. An acceptably optimised fitness solution is time-dependent (we “brute force” iterations). The abstract is just a noisy or incomplete version of the real, so we should expect it to be substantially slower.
The rate of machine learning already far exceeds our own in both the abstract and the real, so I think you are correct in your counterpoint that our focus and application are the inhibitors. Our direction isn’t optimised to achieve these developments, because of those aeons of baggage. And since we can’t change that (yet), does relative competence really matter if they can learn better? “For how long?” seems the only open question. This also raises another problem: these intelligence mechanisms are universal, and I suspect we will continue as we have thus far, solving them in machines long before we learn to do the same in ourselves.
“Will jobs disappear? Whose jobs? Who stays in work what do they do? Will we even need to work in future?”
I’ve commented once before about how difficult I think it will be (because we can’t adapt fast enough). We see them as competition, but every time we are outcompeted the realisation is actually a surplus that exceeds human capability. I’m concerned that machine pressure is driving our evolved behaviours to absurdity under this new feedback. We demonstrate a diminishing capacity to find economic efficiencies and threats.
see Moravec’s paradox
I can’t thank you enough for this comment. I love little more than discovering that I’ve independently come up with something that is well-characterised already in the literature, then reading to find out how much my thinking overlapped!
“Moravec’s paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.
Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious. “In general, we’re least aware of what our minds do best”, he wrote, and added “we’re more aware of simple processes that don’t work well than of complex ones that work flawlessly”.”
nb. I am horrified to read this by Moravec, written in 1999
“… Nevertheless I am convinced that the decades-old dream of a useful, general-purpose autonomous robot will be realized in the not too distant future.
By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard. The machines will be capable of carrying out simple chores, such as vacuuming, dusting, delivering packages and taking out the garbage. By 2040, I believe, we will finally achieve the original goal of robotics and a thematic mainstay of science fiction: a freely moving machine with the intellectual capabilities of a human being.”
Even the man for whom the paradox is named is over-optimistic!!
This article on how poor Google translations can be is outstanding: https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/
And it’s by Douglas Hofstadter, who is outstanding generally.
I have done a very small amount of translating and I reckon it’s far harder than people give credit for. Once you fully and completely understand what the original author is trying to say, you then find yourself in the exact position they were in when they wrote the piece – you know what you want to say but you still need to figure out how best to express it.
To make a piece as good as the original, a translator essentially needs to be as adept at conveying all the subtle elements of language as the original author. And the author got published on the basis of their ability to do that, while you’ve spent your life learning a second language.
Tough luck translating Shakespeare to Chinese, for example. No wonder people like to read things in the original.
Machines are better where the heuristics (‘rules of thumb’) are more easily identified and defined and outputs are more easily quantified or predicted in relationship to input. So although the task seems complex, the process is more predictable.
Interacting with the natural world widens the number of factors that come into play, interpretation of inputs may be ambiguous or random (e.g. driverless vehicles – expanse of road versus gaping hole versus solid dark wall), and defining what is considered an optimal outcome is not as clear (e.g. GPS – most direct route versus shortest route versus less trafficked route versus safest route versus most accessible route).
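The GPS example can be sketched directly (the route names and figures are invented): each objective picks a different “best” route, so no single rule defines the optimal outcome.

```python
# Invented example routes with distance, travel time and a traffic score.
routes = {
    "direct":  {"km": 10, "minutes": 30, "traffic": 0.9},
    "highway": {"km": 14, "minutes": 18, "traffic": 0.4},
    "scenic":  {"km": 12, "minutes": 40, "traffic": 0.1},
}

def best(metric):
    """Return the route that minimises the chosen metric."""
    return min(routes, key=lambda name: routes[name][metric])

shortest = best("km")       # "direct"
fastest = best("minutes")   # "highway"
quietest = best("traffic")  # "scenic"
```

Each metric alone is trivial to optimise; the unsolved part is deciding how to trade them off, which is exactly the ambiguity the machine cannot be handed as a rule.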
100% agree. I think to get robots to help us, we need to help them by making the domains they interact with less variable.
One of my favourite bloggers was talking about how driverless vehicles might prove to be a boon for public transport first, because when you have a well-defined route you intend to follow many times, you can more easily control that route. (Indeed, this is what a train track is: a controlled, low-friction, fenced, exclusive right of way.)
Ever since I wrote this piece I have become hyper aware of evidence to the contrary. This video is an amazing example: robots with soft fingers and smart AI picking up things pretty dextrously!
Watch a couple of robots assemble an IKEA chair!
These robot skills astound me:
Quadcopter pole balancing
The astounding athletic qualities of quadcopters
Whilst I’m a fan of Kurzweil’s idea of the exponential growth of information technology, which will impact our lives significantly in the coming 20-30 years, I also think that human values, fears and economic rationalization will affect the acceptance and usage of this technology.
Ultimately, however, our definition of what it is to be human will be forever changed as AI permeates every facet of our existence.