It started out like this, just a bunch of ideas on the wall.
And after a lot of hard grind, today it looks like this:
There were speed-bumps of many kinds. Including a terrifying moment when I tipped water all over my computer a few weeks before the book was due.
Rice didn’t rescue the machine. So I put it in the oven. That seemed to help a little bit and I was able to turn the computer back on.
The big mistake I made was what I did next – blasting it with a hairdryer. I melted half the keyboard and the “I” key came off completely. I struggled through the end of the book with a warped and wobbly 25-letter alphabet under my fingers.
But it got done, and it got out, and now it is in bookshops, alongside a lot of very serious authors!
I just re-read Malcolm Gladwell’s book Outliers – picking it up again with the goal of cribbing from it what is necessary to write a best-selling piece of pop non-fiction.
While I’m not yet clear on how useful it was in that sense, the book’s contents surprised me. What I vaguely remembered as a tome about the secrets to success is in fact anything but.
Sure, it contains the chapter on the “10,000 hour rule.” But the vast bulk of the book is framed around something far less like “self-help.”
The book is really about why people succeed because of circumstances they did not create. My favourite example, the simplest and most arbitrary in the book, is to do with why professional Canadian ice hockey players are disproportionately born in January and February.
Canada groups junior ice hockey players by year of birth, so kids born in January play mostly against kids younger than they are. At that age, Gladwell explains, a few months makes a big difference. The older kids are naturally the biggest, strongest and most coordinated. They get chosen first, then rewarded for their superiority with more opportunities, more games, more coaching etc. The rest is path dependency.
I like this theory (and not only because as the youngest kid in my year at school I was particularly unsuccessful at sport). It makes an intuitive kind of sense that success is to do with luck as well as talent. For example, while some very dedicated short people have played professional basketball, the luck to be born tall is a big part of your ability to make it in the game. The book is stacked with examples like this.
So the Gladwell book is basically about privilege. It’s about how successful people are the product of a confluence of factors they don’t control. There’s even a fantastic chapter on Gladwell’s own Jamaican heritage and how perceived light-skin tone helped his forebears.
Privilege in general, and especially white privilege and male privilege, is among the hottest and most contested topics of the day. This book is pretty much completely about privilege, and yet it wasn’t swept up in the debate.
‘It’s weird’, I found myself thinking. If this book had come out now it’d be part of a fierce partisan culture war. Gladwell would be reviled in the pages of 4Chan. He’d be a cuck and an SJW and a whipping boy.
But it came out a while ago and so it missed that.
How, I found myself wondering, did this major book, that sold so many copies, miss the cultural moment so narrowly?
I did what I always do when faced with this sort of question, and headed to Google Trends (where Google measures interest in various search terms over time). What I found raised my eyebrows.
The privilege line kicks up to a new level in around late 2008, early 2009 – the precise time the book was released. Is it possible, I asked myself, that we’re looking at cause and effect here? Did the book make people more interested in the concept of privilege?
Of course, people were googling the term both before and after the book’s release – but some of the traffic will be completely unrelated to this sense of privilege. (A fair part of it will be people trying to check the spelling).
The lift in interest in searches for privilege still needs to be explained. The 2008 US Presidential election and the identity of its winning candidate are certainly one possible explanation for rising interest in the role of privilege in society at that time. But what makes it plausible to attribute at least some of the lift to Gladwell is the incredible success the book had. Outliers hit number one on the New York Times bestseller list on debut, and stayed there. It went on to sell over 1.5 million copies, becoming a sort of cultural touchstone along the way.
Nowadays, Malcolm Gladwell’s combination of popular style and popular success makes him unfashionable. (The public refutation of the 10,000 hours rule didn’t do wonders for his brand either.) Few would attribute their awareness of the role of privilege in society to Gladwell.
While pondering that, consider this quote from John Maynard Keynes:
“Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”
(For “practical men” you may substitute “insurgent cultural theorists with impressive numbers of Twitter followers.”)
Now, I’m not saying that Malcolm Gladwell invented the concept of privilege. Clearly, the concept has been part of the humanities for a long time. I’m not even saying that he introduced people to the academic sense of the term. The word is far from prominent in the book. But he does relentlessly slay the conception, so dominant until recently, that success depends solely on hard work or inherent talent.
Gladwell lays bare several structural factors that lift some people up while holding others back. And more important than that, he makes those factors memorable. In doing that, Outliers potentially opens minds to a more critical analysis of why some people – and some types of people – seem to be able to squeeze more out of society.
That may have prepared the earth for a rising interest in the topic of privilege as the years have gone by, and the more recent and far more dramatic upsurge in awareness of the concept of privilege when it comes to race.
If this is even in some small way Malcolm Gladwell’s intellectual legacy, then Outliers was a particularly powerful book. If I can, using techniques stolen from him, write something with a fraction the impact I’ll be delighted.
We sit at an inflection point, extrapolating it to the stars.
Technological progress seems suddenly overwhelming. But there is reason to expect a breakdown in the recent rate of growth, reason to expect that we’ve grown deluded about the prospects of the silicon-based slice of progress we like to label “technology.”
I wrote about this the other day in my regular column over at News.
Suddenly people are taking seriously all the following ideas:
This is not just about people speculating on the future of a few companies. This is about believing the life of humans is about to change faster than ever before in human history. It is like a belief that we’re living through the agricultural revolution, the Renaissance and the Industrial revolution all at once — and all in fast forward.
Why so credulous?
Why do we suddenly believe technology will remake the future so utterly and swiftly? Partly because of a cognitive bias called recency bias: we remember the recent past much better than the time before it. And in the recent past, technology has wreaked havoc on modern life. You’re reading this on a website that didn’t exist 20 years ago.
In the past 20 years, the world has changed a lot. And technology has been a big part of it. But that doesn’t mean technology can change everything. The personal computing technology we all interact with daily has made it very obvious to us that technology can change very fast.
But this is a classic case of selection bias. If we try to measure the pace of technology by looking at the things that are changing very fast, we will get the wrong picture. We need to look elsewhere too.
If you tried to measure the pace of technology by looking at commercial aviation, say, what you’d discover is a lack of obvious progress. We used to have supersonic commercial aviation, but nowadays most of us fly around in Airbus A320s (a plane launched in the 1980s) and Boeing 737s (a plane first launched in the 1960s).
You can get a similarly glum feeling if you look at progress in fighting Alzheimer’s disease or multiple sclerosis. There hasn’t been any, despite a huge amount of effort. Likewise with the common cold – and we seem to be losing the battle against bacteria as they develop antibiotic resistance.
I don’t mean to say that technology won’t change. It can and surely will. Just to say that there is a certain wildness to the predictions of the future at the moment. People seem willing to believe just about anything, so long as it has a technology angle.
When the bubble finally pops, it will take with it not only the valuations of some of the biggest technology companies, but also a lot of utopian visions of the future.
In the News story I call it a recency bias but you might as easily call it an availability bias. We are very willing to believe technology can change the world utterly and quickly because in living memory personal computing has created very visible changes in our daily lives. (Maximally visible, but not necessarily maximally important – the famous hypothetical is whether you’d give up the internet before you gave up indoor plumbing.)
IT’S THE STUPID ECONOMY
These cognitive biases have been allowed to grow unchallenged because of the peculiar financial circumstances of the times.
Some people argue the loose monetary policy of the last decade does not explain high asset prices, but I think they’re wrong. The simultaneous global bubbles in property, bonds and tech stocks almost certainly trace their roots to the low/zero/negative interest rates across much of the world, and quantitative easing that left developed economies awash with liquidity.
The money flood provided patient capital that gave companies with scant profits a long time to experiment and expand revenues. If you’ve ever taken an Uber using a 50 per cent discount, you’re using some venture capitalist’s money to improve your own lifestyle, while simultaneously propping up the impression that new tech is destined to remake the known world.
(For what it’s worth, Uber is a pretty big improvement over taxis! But its major advantage comes from taking on a regulated market with colossal rents, rather than being inherent to the app.)
The money flood has propped up some far more dubious beliefs than the prospects of Uber. The faith certain investors have in Tesla’s ability to win a giant share of the “shared mobility market” (fleets of driverless taxis) is intriguing to me.
Valuing a junior company on the prospects of winning a large share of a market that doesn’t yet exist, using technology that is in its infancy? It seems, um … more optimistic than is prudent. If this kind of thing works for Elon Musk, perhaps he should also set up Red Real Estate and start selling rights to land on Mars.
The NASDAQ chart above explains why the cognitive bias we’ve developed has been allowed to progress so far. It’s a feedback loop from confidence, to investment, to expanding revenues, to stock prices, to headlines, to confidence.
And Bitcoin?! … . Actually no. Let’s not even talk about Bitcoin.
(Non-financial evidence that technology really is changing the world, in the shape of temperature records and CO2 concentrations, doesn’t seem quite so influential on the mass mood. I leave it to the reader to ponder why.)
Eventually, the technology cycle of misplaced confidence and out-sized valuations will find it has reached the highest possible equilibrium and begin to tack backward.
It is likely to do that even absent a macroeconomic reason, but one is coming anyway.
Interest rates are rising in the United States and inflation is lifting. The anti-Keynesian Trump stimulus – adding fire to a booming economy – looks set to intensify those trends. The Fed is now slowly soaking back up loose money. This represents a clear and present danger to any asset whose value is not based on making real money right now.
If the market values of all those tech stocks fall, the stories they told about the future will suddenly appear thin. A pin will prick the bubble of credulity, and the stories of inevitable autonomy, existential AI risk and imminent interplanetary expansion will fade from our front pages. The distance between the possible and the probable will lengthen again.
So I’d like to place a stake in the ground and say we will look back on this era – with a TV show called Silicon Valley; a plan for Elon Musk to become the richest man in the world; non-stop headlines about drone delivery; and a relentless faith that driverless cars were just a few months away – with a kind of nostalgia for a simpler and more optimistic time.
This post asks if we are making a mistake in the way we anticipate the future of robots and intelligent machines. It is all based on my perceptions and understanding of how far our digital assistants/nemeses have got so far. Please comment below if you know of progress I appear not to be aware of!
I’ve been reading a lot about robots, artificial intelligence and machine learning. I am trying to weigh up what it all means. Will jobs disappear? Whose jobs? Who stays in work, and what do they do? Will we even need to work in future?
One machine I am definitely excited about is the new best player at chess. It dominates because we demanded that it teach itself. Within a few hours it beat one of the top systems in the world. That is exciting and also terrifying.
And yet. Some robots are still utter rubbish.
The Jetsons’ robot maid is nowhere to be seen in my life. There is little evidence of robots coming to dominate in many of the domains people insisted they would.
Voice recognition, for example, remains underdeveloped, despite years of focus. And yet the machines can turn around and defeat us at Go, the one game we thought we could stay ahead of them at for another few years.
It seems to me we are bad judges of what intelligent machines will be good at.
Often, the machines are better at things we consider hard than things we consider easy. One of the first things machines came to dominate at was chess (a game for the human intellectual elite). They remain truly appalling at soccer (a game for everybody).
We assume things children could do will be easy for robots. And we scream with laughter when they find them hard. Later, we are amazed when machines can easily outstrip us at things only the smartest adults can do. This paradox needs resolving.
Why are they smartest at hard things and dumbest at easy things?
Are we benchmarking things wrong? Perhaps we over-emphasise how smart the adult human is; how capable of operating effectively in the abstract world. And underemphasise how physically capable the average adult human is in the material world.
Maybe what we see as hard is just abstract; and what we see as easy involves manipulating the infinite variability of the real world.
From where I work I can watch two turtle doves improving their nest. One flies out, finds a stick or bit of grass, and brings it back. The other takes it and works it into the existing structure with a wiggle of its head. I doubt we could program two drones to do that, even with a decade and a multi-million-dollar budget.
How different are we from the animals? Is it possible the animal parts of our brain are actually far more advanced than the human parts of our brain? Our software has had aeons to work on things like navigating 3D space, recognising and manipulating never-before-seen objects, hearing and identifying sounds. But it has had only a few dozen millennia to work on the higher human plane of logic and abstraction.
Computers operate in that abstract world and are – mostly – killing us at it. Arithmetic is their bread and butter. Accounting, logic and other kinds of rule following that defined human intelligence until quite recently are firmly within their grasp.
Yet machines’ attempts to navigate the physical world are mostly poor. If you consider how refined those animal circuits are, is it any wonder that machines still can’t do these animal things? If what we can do easily is actually very hard, it might be less surprising that our first iterations of self-driving cars smash into giant objects right in front of them. And we might approach the task of training robots to interact with dynamic real-world space with more humility.
ROBOTS TAKING OUR JOBS
If we have misconstrued the extent of human skill in various domains, could that lead to confusion about what tasks can easily be automated? Everyone seems to think truck driving is due for immediate automation. What if that is because of a sense that truckies aren’t smart?
Many people assume a chess-playing computer must also be able to do everything a person of everyday intelligence can do. Here’s Tesla CEO Elon Musk, speaking at the company’s annual earnings call on 7 February 2018.
“I am pretty excited about how much progress we are making on the neural net front… It is also one of those things where it is kind of exponential. … It doesn’t seem much progress, doesn’t seem much progress and then suddenly: Wow!
“That has been my observation generally with AI stuff. And if you look at what Google’s DeepMind did with AlphaGo. It went from not being able to beat even a pretty good Go player to suddenly it could beat the European Champion. Then it could beat the world champion. Then it could thrash the world champion. Then it could thrash everyone simultaneously.
“Then they had AlphaZero which could thrash AlphaGo! And just learning by itself was better than all the human experts.
“It is going to kind of be like that for self-driving. It will seem like this is a lame driver, this is a lame driver, this is a pretty good driver … [then] holy cow this driver is good!”
It seems to follow logically, but it might not.
We value abstract cognition because it is rare in humans. But we don’t value what is profoundly and abundantly available to us – skill in moving through the real world. That’s why the stock analyst gets paid more than the taxi driver.
Yet traders are already being replaced with algorithms. Taxi drivers – not yet. That could be a warning signal, and our model of intelligence could be impeding us from seeing it.
“Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology. For those of us who have been working on the technology for a long time, we’re going to tell you the issue is still really hard, as the systems are as complex as ever.”
“I wonder how many of the people making predictions about the future of truck drivers have ever ridden with one to see what they do?
One of the big failings of high-level analyses of future trends is that in general they either ignore or seriously underestimate the complexity of the job at a detailed level. Lots of jobs look simple or rote from a think tank or government office, but turn out to be quite complex when you dive into the details.
For example, truck drivers don’t just drive trucks. They also secure loads, including determining what to load first and last and how to tie it all down securely. They act as agents for the trucking company. They verify that what they are picking up is what is on the manifest. They are the early warning system for vehicle maintenance. They deal with the government and others at weighing stations. When sleeping in the cab, they act as security for the load. If the vehicle breaks down, they set up road flares and contact authorities. If the vehicle doesn’t handle correctly, the driver has to stop and analyze what’s wrong – blown tire, shifting load, whatever.
I’ve been working in automation for 20 years. When you see how hard it is to simply digitize a paper process inside a single plant (often a multi-year project), you start to roll your eyes at ivory tower claims of entire industries being totally transformed by automation in a few years.”
Perhaps this argument is upside down. Perhaps we chose not to make computers good at the material world. Perhaps we trained computers to do abstract things because only a few people can do them: to get the benefit of training a computer, we must set it on tasks where human skill is rare. It is not that they couldn’t do what we can do, just that we haven’t put in the effort.
FINDING THE PATTERNS TO RECOGNISE
I suspect the problem is not so much in asking computers to process the data produced by manual tasks as getting them to identify it as data.
In an abstract world data is always in the right place and fully visible. In a spreadsheet, the data you need will always be exactly in the right place. And if not, nobody expects the spreadsheet to figure that out and fix it. In the physical world, information might be harder to find. Where’s the label on this box? Where’s the face on this human? Where’s the road under this snow? etc.
We already know how you can get robots to take on jobs in the material world. You need to standardise the inputs. Robots do a wonderful job welding things that come down a production line. They do a great job driving trains in wholly separated systems. They do a perfect job of driving lifts up and down lift-wells, etc. In these cases we give the material world the standardised appearance of an abstract one. Take away the production line, the protected rails and the lift-well, and those systems are all at sea.
Neural nets will of course be much smarter than the computers that drive lifts. They will be able to parse information from the material world. Self-driving cars can use cameras, radar, lidar and 360 degree vision to get advantages over us in sensing. These systems should be able to learn fast.
But I am not yet convinced we can apply the lessons from an abstract world which has only 64 different locations, to a real world which is infinitely more complex. Assuming those lessons will cross over is the exact kind of intellectual trap a cognitively limited species would fall into.
Was the problem a shortage of cool plans? I didn’t realise the problem was a shortage of cool plans.
Yesterday, Tesla announced two new vehicles – a semi trailer, and a roadster. The launch was awesome.
Musk does theatre like a natural. Adding to the happy vibe was that he spent no time covering Tesla’s big problem, which is delivering on existing plans. Instead, he added more plans.
Here’s the problem with plans. Not everything works out. The more plans you have the more chances for something to go wrong.
When your systems are independent, adding another one increases the chance that at least one of them works out.
However, when the systems are interlinked (by, say, being in the same corporate structure, or worse, by one being an input to another), each extra system raises the chance of a failure, and any one failure increases the chance of multiple failures.
For example, the gigafactory battery plant is an input to the Model 3 production line. Failures at the gigafactory are holding up production, imperilling the Model 3 and the whole company’s cashflow, and therefore its survival.
When you have interlinked systems, risk management is, in the long run, almost everything.
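The independent-versus-interlinked point can be sketched with a few lines of arithmetic. This is purely illustrative – the success probability is an invented figure, not an estimate for any real Tesla project:

```python
# Illustrative only: p is an invented per-subsystem success probability.
p = 0.9

for n in range(1, 6):
    # Independent, redundant plans: you only need one of them to work out.
    at_least_one_works = 1 - (1 - p) ** n
    # Interlinked (chained) plans: every link must work, the way the
    # gigafactory feeds the Model 3 line.
    all_work = p ** n
    print(f"n={n}: redundant={at_least_one_works:.3f}, chained={all_work:.3f}")
```

With five independent plans you almost certainly get at least one win (probability ≈ 0.99999), but a five-link chain works only about 59 per cent of the time (0.9⁵ ≈ 0.590). Adding plans helps the first structure and hurts the second.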
If you google “Elon Musk” and risk, you find a lot about him worrying about the risk artificial intelligence poses to human survival. But I could find nothing about him discussing his approach to risk management in business.
Elon Musk has a longstanding pattern of managing risk by insourcing. When something’s not going right he tries to solve it by doing it in house, or even personally.
Nobody wants their supplier to go broke because they forced the supplier to take on too much risk; if that happens, you’re short on inputs. But if you bring the problem under your own roof and find it can’t be solved, then you’re short on inputs and in a financial hole.
A list of things Tesla is doing in house that a regular carmaker doesn’t is… eye-opening.
Energy generation and storage systems (solar panels and batteries)
Developing autonomous driving systems
All of these are tricky. They may cost more to do than expected.
Just because this looks like a car company doesn’t mean it has the risk profile of a car company. Building cars is not the only prerequisite for success.
Solving production hell will take management effort and money. But the two new vehicles will divert effort and money. A juggling analogy may be apt. When you add extra balls, the juggler trying to control them drops the lot, not just the new ones.
The Roadster has some serious technical questions to answer, but – if it can be built – of course they can sell a lot of them. It’s the world’s fastest car from the world’s coolest brand.
The truck, however, is not certain to sell. While consumers buy on brand and image, the logistics industry is relentlessly optimised around cost. The range Tesla’s truck offers (500 miles) demands an enormous battery, which will make the truck expensive to buy and increase its weight by an estimated 12 tons. That weight matters for at least three reasons:
1. The total weight to payload ratio changes, offsetting some fuel advantages.
2. Road damage rises steeply with weight per axle – roughly with its fourth power, by the classic engineering rule of thumb – and governments are increasingly keen to get the logistics industry to pay road user charges based on weight.
3. Trucks are sometimes empty and carrying a battery at those times raises the cost.
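The road-damage point is usually stated as the “fourth power law” from the 1950s AASHO road tests: pavement wear scales roughly with the fourth power of axle load, not linearly. A rough sketch – the axle loads here are invented for illustration, not Tesla’s published figures:

```python
def relative_damage(axle_load_tonnes, reference_tonnes=8.0):
    """Pavement wear relative to a reference axle, fourth-power rule of thumb."""
    return (axle_load_tonnes / reference_tonnes) ** 4

# An extra 12 tons of battery spread over, say, five axles adds ~2.4 t per axle.
baseline = relative_damage(8.0)        # 1.0 (reference axle)
with_battery = relative_damage(10.4)   # ~2.9x the wear per axle
print(baseline, round(with_battery, 2))
```

A 30 per cent heavier axle does almost triple the damage – which is why weight-based road user charges would bite a battery-laden truck hard.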
The truck also means Tesla had to invent a whole new charger to make sure their trucks could be charged in a reasonable time (30 minutes for 400 miles). It is unclear to what extent this new Megacharger has been invented as opposed to just envisioned. It is further unclear how much they might cost to install, or if they are compatible with existing electricity distribution infrastructure.
Incidentally, the time it takes to charge a vehicle means Tesla may need to install a high ratio of chargers to vehicles on the highways. We’ve all pulled into a petrol station to find all the pumps are in use. You wait three or four minutes and they become free. If the person in front of you is going to take 30 minutes to charge, and then you’re going to take another 30 minutes, you’ve got an enforced one hour stop. God forbid it’s busy and there’s more than one person in line in front of you.
The way for Tesla to combat this inconvenience is by installing *a lot* of chargers at places where people are taking long trips. (This problem should not apply at home, where people can charge their vehicle overnight, but it would apply if you’re doing distance travel, and especially to semi trailer trucks.)
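The throughput gap is easy to put in numbers. A back-of-envelope sketch using the figures above (a roughly four-minute petrol stop versus a 30-minute charge):

```python
pump_minutes = 4       # a typical petrol fill, per the "three or four minutes" above
charger_minutes = 30   # the claimed 400-mile Megacharger session

pump_throughput = 60 / pump_minutes        # 15 vehicles per pump per hour
charger_throughput = 60 / charger_minutes  # 2 vehicles per charger per hour

# Chargers needed to match the throughput of a single petrol pump:
chargers_per_pump = pump_throughput / charger_throughput
print(chargers_per_pump)  # 7.5
```

So to serve highway traffic without queues forming, each charging site needs something like seven or eight chargers for every petrol pump it replaces – before allowing for peak-period bunching.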
The Tesla semi trailer and the roadster are, however, not just extra risk. They can help Tesla raise capital it sorely needs. Pre-orders of the first 1000 roadsters are available by putting down $250,000. If they can find 1000 people willing to put 250k on ice for a few years, that will put $250,000,000 into Tesla’s pockets. Its most recent cash burn was $1.4 billion in a quarter, so $250 million would buy them a bit over two extra weeks.
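Checking that back-of-envelope arithmetic, using the deposit and burn figures quoted above:

```python
deposits = 1_000 * 250_000   # 1,000 Roadster reservations at $250k each
quarterly_burn = 1.4e9       # most recent quarterly cash burn
weeks_per_quarter = 13

runway_weeks = deposits / (quarterly_burn / weeks_per_quarter)
print(round(runway_weeks, 1))  # 2.3 weeks
```

The deposits cover roughly two and a bit weeks of burn – a useful top-up, not a rescue.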
Congestion charging is back on my list of good ideas
For a while there, I was influenced by the equity arguments against it. The lack of substitutes for travel, and the unique role of commuting in a person’s well-being, tipped me against congestion charging. Good economic reform doesn’t throw out equity every time it can get an efficiency dividend, and I decided congestion charging’s equity problems made the policy unworkable.
But I’m swinging back to support for a simpler price signal. What has captured my attention is the following graph from a new Grattan Institute report. It shows the extent of congestion in Sydney. Amazingly, most people experience almost no congestion. Their commutes are swift.
What this tells me is that the impact of a congestion charge is actually not likely to be widespread. Serious congestion, of more than ten minutes in a trip, is the preserve of a small subset of commuters.
That subset is likely to be going into the CBD, where congestion is real.
Remember that despite the importance of CBDs, most jobs are still in the suburbs. If we know one thing about CBD jobs – especially nine to five CBD jobs – it is that they tend to be the good kind.
City centres are where the business services jobs are. The specialised jobs that pay big coin, as opposed to the population-serving jobs (pharmacies, florists, bakers, doctors, schools) that are found disproportionately in the suburbs.
It looks like driving into the city in peak hour is an elite problem. No wonder it gets so much attention. The Grattan analysis makes it clear that congestion charging would really only have to be applied in a narrow area.
This fact also counters the argument that congestion charging can’t be introduced until better public transport happens. Melbourne and Sydney have radial public transport systems that provide terrific CBD access.
Traffic is bad. The absence of price signals on the use of existing infrastructure causes crowding and delays. You end up listening to way too much FM radio. But that might not be the most costly effect. The big downside is probably the pressure to build yet more infrastructure.
Daniel Andrews has green-lit the West Gate tunnel – a big freeway that will not only soak up $5.5 billion but also lock Victorians into a regimented tolling regime (not a congestion charge system) for decades.
Big freeway projects have a lot of side effects.
One is making the places they travel through less pleasant. Place-making is a big theme in urban planning now, and a lot of money is spent on making areas seem nice. This “tunnel”, which is actually an elevated road for a good section of its length, is kind of the opposite of place-making.
A second side effect is city-shaping. You can cut travel times to the city, but that encourages yet more sprawl and inefficient urban form. (Thanks, Marchetti’s constant.)
If you want a policy that is likely to be equitable, can potentially conserve scarce government funds for more valuable projects, and prevent the paving over of the inner city, then congestion charging is your horse.
To finish with here’s some data to make you go “huh!” – rain, apparently, has no effect on traffic: