We sit at an inflection point, extrapolating it to the stars.
Technological progress seems suddenly overwhelming. But there is reason to expect a breakdown in the recent rate of growth, reason to expect that we’ve grown deluded about the prospects of the silicon-based slice of progress we like to label “technology.”
I wrote about this the other day in my regular column over at News.
Suddenly people are taking a whole raft of utopian ideas seriously.
This is not just about people speculating on the future of a few companies. This is about believing the life of humans is about to change faster than ever before in human history. It is like a belief that we’re living through the agricultural revolution, the Renaissance and the Industrial revolution all at once — and all in fast forward.
Why so credulous?
Why do we suddenly believe technology will remake the future so utterly and swiftly? Partly because of a cognitive bias called recency bias: we remember the recent past much better than the time before it. And in the recent past, technology has wreaked havoc on modern life. You’re reading this on a website that didn’t exist 20 years ago.
The world has changed a lot in the past 20 years, and technology has been a big part of that. The personal computing technology we interact with daily has made it very obvious that technology can change very fast. But that doesn’t mean technology can change everything.
But this is a classic case of selection bias. If we try to measure the pace of technology by looking at the things that are changing very fast, we will get the wrong picture. We need to look elsewhere too.
If you tried to measure the pace of technology by looking at commercial aviation, say, what you’d discover is a lack of obvious progress. We used to have supersonic commercial aviation, but nowadays most of us fly around in Airbus A320s (a plane launched in the 1980s) and Boeing 737s (a plane first launched in the 1960s).
You can get a similarly glum feeling if you look at progress in fighting Alzheimer’s disease or multiple sclerosis. There hasn’t been any, despite a huge amount of effort. Likewise with the common cold, and we seem to be losing the battle against bacteria as they develop antibiotic resistance.
I don’t mean to say that technology won’t change. It can and surely will. Just to say that there is a certain wildness to the predictions of the future at the moment. People seem willing to believe just about anything, so long as it has a technology angle.
When the bubble finally pops, it will take with it not only the valuations of some of the biggest technology companies, but also a lot of utopian visions of the future.
In the News story I call it a recency bias but you might as easily call it an availability bias. We are very willing to believe technology can change the world utterly and quickly because in living memory personal computing has created very visible changes in our daily lives. (Maximally visible, but not necessarily maximally important – the famous hypothetical is whether you’d give up the internet before you gave up indoor plumbing.)
IT’S THE STUPID ECONOMY
These cognitive biases have been allowed to grow unchallenged because of the peculiar financial circumstances of the times.
Some people argue the loose monetary policy of the last decade does not explain high asset prices, but I think they’re wrong. The simultaneous global bubbles in property, bonds and tech stocks almost certainly trace their roots to the low/zero/negative interest rates across much of the world, and quantitative easing that left developed economies awash with liquidity.
The money flood provided patient capital that gave companies with scant profits a long time to experiment and expand revenues. If you’ve ever taken an Uber using a 50 per cent discount, you’re using some venture capitalist’s money to improve your own lifestyle, while simultaneously propping up the impression that new tech is destined to remake the known world.
(For what it’s worth, Uber is a pretty big improvement over taxis! But its major advantage comes from taking on a regulated market with colossal rents, rather than anything inherent to the app.)
The money flood has propped up some far more dubious beliefs than the prospects of Uber. The faith certain investors have in Tesla’s ability to win a giant share of the “shared mobility market” (fleets of driverless taxis) is intriguing to me.
Valuing a junior company on the prospects of winning a large share of a market that doesn’t yet exist, using technology that is in its infancy? It seems, um … more optimistic than is prudent. If this kind of thing works for Elon Musk, perhaps he should also set up Red Real Estate and start selling rights to land on Mars.
The NASDAQ chart above explains why the cognitive bias we’ve developed has been allowed to progress so far. It’s a feedback loop from confidence, to investment, to expanding revenues, to stock prices, to headlines, to confidence.
And Bitcoin?! … Actually, no. Let’s not even talk about Bitcoin.
(Non-financial evidence that technology really is changing the world, in the shape of temperature records and CO2 concentrations, doesn’t seem quite so influential on the mass mood. I leave it to the reader to ponder why.)
Eventually, the technology cycle of misplaced confidence and out-sized valuations will find it has reached the highest possible equilibrium and begin to tack backward.
It is likely to do that even absent a macroeconomic reason, but one is coming anyway.
Interest rates are rising in the United States and inflation is lifting. The anti-Keynesian Trump stimulus – adding fire to a booming economy – looks set to intensify those trends. The Fed is now slowly soaking back up loose money. This represents a clear and present danger to any asset whose value is not based on making real money right now.
If the market values of all those tech stocks fall, the stories they told about the future will suddenly appear thin. A pin will prick the bubble of credulity and the stories of inevitable autonomy, existential AI risk and imminent interplanetary expansion will fade from our front pages. The distance between the possible and the probable will lengthen again.
So I’d like to place a stake in the ground and say we will look back on this era – with a TV show called Silicon Valley; a plan for Elon Musk to become the richest man in the world; non-stop headlines about drone delivery; and a relentless faith that driverless cars were just a few months away – with a kind of nostalgia for a simpler and more optimistic time.
This post asks if we are making a mistake in the way we anticipate the future of robots and intelligent machines. It is all based on my perceptions and understanding of how far our digital assistants/nemeses have got so far. Please comment below if you know of progress I appear not to be aware of!
I’ve been reading a lot about robots, artificial intelligence and machine learning. I am trying to weigh up what it all means. Will jobs disappear? Whose jobs? Who stays in work, and what do they do? Will we even need to work in future?
One machine I am definitely excited about is the new best player at chess, AlphaZero. It dominates because we demanded that it teach itself: within a few hours of self-play it beat one of the top chess engines in the world. That is exciting and also terrifying.
And yet. Some robots are still utter rubbish.
The Jetsons’ robot maid is nowhere to be seen in my life. There is little evidence of robots coming to dominate in many of the domains people insisted they would.
Voice recognition, for example, remains underdeveloped despite years of focus. And yet the machines can turn around and defeat us at Go, the one game where we thought we had a few more years’ edge.
It seems to me we are bad judges of what intelligent machines will be good at.
Often, the machines are better at things we consider hard than things we consider easy. One of the first things machines came to dominate at was chess (a game for the human intellectual elite). They remain truly appalling at soccer (a game for everybody).
We assume things children can do will be easy for robots, and we scream with laughter when they find them hard. Later, we are amazed when machines easily outstrip us at things only the smartest adults can do. This paradox (roboticists know it as Moravec’s paradox) needs resolving.
Why are they smartest at hard things and dumbest at easy things?
Are we benchmarking things wrong? Perhaps we over-emphasise how smart the adult human is; how capable of operating effectively in the abstract world. And underemphasise how physically capable the average adult human is in the material world.
Maybe what we see as hard is just abstract; and what we see as easy involves manipulating the infinite variability of the real world.
From where I work I can watch two turtle doves improving their nest. One flies out, finds a stick or bit of grass, and brings it back. The other takes it and works it into the existing structure with a wiggle of its head. I doubt we could program two drones to do that, even with a decade and a multi million dollar budget.
How different are we from the animals? Is it possible the animal parts of our brain are actually far more advanced than the human parts? Our software has had aeons to work on things like navigating 3D space, recognising and manipulating never-before-seen objects, and hearing and identifying sounds. But it has had only a few dozen millennia to work on the higher human plane of logic and abstraction.
Computers operate in that abstract world and are – mostly – killing us at it. Arithmetic is their bread and butter. Accounting, logic and other kinds of rule following that defined human intelligence until quite recently are firmly within their grasp.
Yet machines’ attempts to navigate the physical world are mostly poor. If you consider how refined those animal circuits are, is it any wonder that machines still can’t do these animal things? If what we do easily is actually very hard, it is less surprising that our first iterations of self-driving cars smash into giant objects right in front of them. And we might approach the task of training robots to interact with dynamic real-world space with more humility.
ROBOTS TAKING OUR JOBS
If we have misconstrued the extent of human skill in various domains, could that lead to confusion about what tasks can easily be automated? Everyone seems to think truck driving is due for immediate automation. What if that is because of a sense that truckies aren’t smart?
Many people assume a chess-playing computer must also be able to do everything a person of everyday intelligence can do. Here’s Tesla CEO Elon Musk, speaking at the company’s annual earnings call on 7 February 2018.
“I am pretty excited about how much progress we are making on the neural net front… It is also one of those things where it is kind of exponential. … It doesn’t seem much progress, doesn’t seem much progress and then suddenly: Wow!
“That has been my observation generally with AI stuff. And if you look at what Google’s DeepMind did with AlphaGo. It went from not being able to beat even a pretty good Go player to suddenly it could beat the European Champion. Then it could beat the world champion. Then it could thrash the world champion. Then it could thrash everyone simultaneously.
“Then they had AlphaZero which could thrash AlphaGo! And just learning by itself was better than all the human experts.
“It is going to kind of be like that for self-driving. It will seem like this is a lame driver, this is a lame driver, this is a pretty good driver … [then] holy cow this driver is good!”
It seems to follow logically, but it might not.
We value abstract cognition because it is rare in humans. But we don’t value what is profoundly and abundantly available to us – skill in moving through the real world. That’s why the stock analyst gets paid more than the taxi driver.
Yet traders are already being replaced with algorithms. Taxi drivers – not yet. That could be a warning signal, and our model of intelligence could be impeding us from seeing it.
“Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology. For those of us who have been working on the technology for a long time, we’re going to tell you the issue is still really hard, as the systems are as complex as ever.”
“I wonder how many of the people making predictions about the future of truck drivers have ever ridden with one to see what they do?
One of the big failings of high-level analyses of future trends is that in general they either ignore or seriously underestimate the complexity of the job at a detailed level. Lots of jobs look simple or rote from a think tank or government office, but turn out to be quite complex when you dive into the details.
For example, truck drivers don’t just drive trucks. They also secure loads, including determining what to load first and last and how to tie it all down securely. They act as agents for the trucking company. They verify that what they are picking up is what is on the manifest. They are the early warning system for vehicle maintenance. They deal with the government and others at weighing stations. When sleeping in the cab, they act as security for the load. If the vehicle breaks down, they set up road flares and contact authorities. If the vehicle doesn’t handle correctly, the driver has to stop and analyze what’s wrong – blown tire, shifting load, whatever.
I’ve been working in automation for 20 years. When you see how hard it is to simply digitize a paper process inside a single plant (often a multi-year project), you start to roll your eyes at ivory tower claims of entire industries being totally transformed by automation in a few years.
Perhaps this argument is upside down. Perhaps we chose not to make computers good at the material world. Perhaps we trained computers to do abstract things because only a few people can do them. To get the benefit of training a computer we must set it on tasks where human skill is rare. It is not that they couldn’t do what we can do, just that we haven’t put in the effort.
FINDING THE PATTERNS TO RECOGNISE
I suspect the problem is not so much in asking computers to process the data produced by manual tasks as in getting them to identify it as data.
In an abstract world, data is always in the right place and fully visible. In a spreadsheet, the data you need will be exactly where it should be – and if it isn’t, nobody expects the spreadsheet to figure that out and fix it. In the physical world, information is harder to find. Where’s the label on this box? Where’s the face on this human? Where’s the road under this snow?
We already know how you can get robots to take on jobs in the material world. You need to standardise the inputs. Robots do a wonderful job welding things that come down a production line. They do a great job driving trains in wholly separated systems. They do a perfect job of driving lifts up and down lift-wells, etc. In these cases we give the material world the standardised appearance of an abstract one. Take away the production line, the protected rails and the lift-well, and those systems are all at sea.
Neural nets will of course be much smarter than the computers that drive lifts. They will be able to parse information from the material world. Self-driving cars can use cameras, radar, lidar and 360 degree vision to get advantages over us in sensing. These systems should be able to learn fast.
But I am not yet convinced we can apply the lessons from an abstract world which has only 64 different locations, to a real world which is infinitely more complex. Assuming those lessons will cross over is the exact kind of intellectual trap a cognitively limited species would fall into.
Was the problem a shortage of cool plans? I didn’t realise the problem was a shortage of cool plans.
Yesterday, Tesla announced two new vehicles – a semi-trailer and a roadster. The launch was awesome.
Musk does theatre like a natural. Adding to the happy vibe was that he spent no time covering Tesla’s big problem, which is delivering on existing plans. Instead, he added more plans.
Here’s the problem with plans. Not everything works out. The more plans you have the more chances for something to go wrong.
When your systems are independent, adding another one increases the chance that something, somewhere, fails – but each failure stays contained.
However, when the systems are interlinked (by, say, being in the same corporate structure, or worse, by being an input to another system), the rising chance of one failure increases the chance of multiple failures.
For example, the gigafactory battery plant is an input to the Model 3 production line. Failures at the gigafactory are holding up production, imperilling the Model 3 and the whole company’s cashflow, and therefore its survival.
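The difference between independent and interlinked plans can be made concrete with a toy probability model. This is my own sketch, and the 5 per cent failure rate per system is an arbitrary assumption:

```python
# Toy model: five systems, each with an assumed 5% chance of failing.
p_fail = 0.05

def p_any_failure(n, p=p_fail):
    """Chance that at least one of n independent systems fails."""
    return 1 - (1 - p) ** n

def p_chain_survives(n, p=p_fail):
    """A chain of n interlinked systems works only if every link works."""
    return (1 - p) ** n

# Independent bets: ~23% chance something, somewhere, fails,
# but each failure is contained to its own system.
print(round(p_any_failure(5), 2))     # 0.23

# Interlinked systems (battery plant -> production line -> cashflow):
# the whole chain survives only ~77% of the time, and one broken
# link stops everything downstream of it.
print(round(p_chain_survives(5), 2))  # 0.77
```

The same ~23 per cent chance of a failure is tolerable when failures are isolated, and existential when every system feeds the next.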
When you have interlinked systems, risk management is, in the long run, almost everything.
If you google “Elon Musk” and risk, you find a lot about him worrying about the risk from artificial intelligence to human survival. But I could find nothing about him discussing his approach to risk management in business.
Elon Musk has a longstanding pattern of managing risk by insourcing. When something’s not going right he tries to solve it by doing it in house, or even personally.
Nobody wants their supplier to go broke because they forced the supplier to take on too much risk. If it happens you’re short on inputs. But if you bring the problem under your own roof and find it can’t be solved then you’re short on inputs and in a financial hole.
A list of things Tesla is doing in house that a regular carmaker doesn’t is… eye-opening.
Energy generation and storage systems (solar panels and batteries).
Developing autonomous driving systems.
All of these are tricky. They may cost more to do than expected.
Just because this looks like a car company doesn’t mean it has the risk profile of a car company. Building cars is not the only prerequisite for success.
Solving production hell will take management effort and money. But the two new vehicles will divert effort and money. A juggling analogy may be apt. When you add extra balls, the juggler trying to control them drops the lot, not just the new ones.
The Roadster has some serious technical questions to answer, but – if it can be built – of course they can sell a lot of copies. It’s the world’s fastest car from the world’s coolest brand.
The truck, however, is not certain to sell. While consumers buy on brand and image, the logistics industry is relentlessly optimised around cost. The range Tesla’s truck offers (500 miles) demands an enormous battery, which will make the truck expensive to buy and increase its weight by an estimated 12 tons. That weight matters for at least three reasons:
1. The total weight to payload ratio changes, offsetting some fuel advantages.
2. Road damage rises steeply with axle weight – roughly with the fourth power of axle load – and governments are increasingly keen to make the logistics industry pay road user charges based on weight.
3. Trucks are sometimes empty and carrying a battery at those times raises the cost.
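On point 2, the standard engineering rule of thumb (the “fourth-power law”, dating back to the AASHO road tests) is that pavement wear scales with roughly the fourth power of axle load. A quick sketch – the axle weights here are hypothetical examples, not Tesla figures:

```python
def relative_damage(axle_load_tonnes, reference_tonnes=8.0):
    """Pavement damage relative to a reference axle, ~ (load/reference)^4."""
    return (axle_load_tonnes / reference_tonnes) ** 4

# Doubling the load on an axle multiplies road wear roughly 16-fold:
print(relative_damage(16.0))            # 16.0

# Even a couple of extra tonnes of battery per axle adds up fast:
print(round(relative_damage(10.0), 2))  # 2.44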
The truck also means Tesla had to invent a whole new charger to make sure their trucks could be charged in a reasonable time (30 minutes for 400 miles). It is unclear to what extent this new Megacharger has been invented as opposed to just envisioned. It is further unclear how much they might cost to install, or if they are compatible with existing electricity distribution infrastructure.
Incidentally, the time it takes to charge a vehicle means Tesla may need to install a high ratio of chargers to vehicles on the highways. We’ve all pulled into a petrol station to find all the pumps are in use. You wait three or four minutes and they become free. If the person in front of you is going to take 30 minutes to charge, and then you’re going to take another 30 minutes, you’ve got an enforced one hour stop. God forbid it’s busy and there’s more than one person in line in front of you.
The way for Tesla to combat this inconvenience is by installing *a lot* of chargers at places where people are taking long trips. (This problem should not apply at home, where people can charge their vehicle overnight, but it would apply if you’re doing distance travel, and especially to semi trailer trucks.)
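A back-of-envelope throughput comparison, using the numbers already in this post (a roughly four-minute petrol fill-up versus Tesla’s quoted 30 minutes for 400 miles), shows why “a lot” is the operative phrase:

```python
pump_minutes = 4      # typical petrol fill-up
charger_minutes = 30  # Tesla's quoted time for a 400-mile charge

vehicles_per_pump_hour = 60 / pump_minutes        # 15 vehicles/hour
vehicles_per_charger_hour = 60 / charger_minutes  # 2 vehicles/hour

# Chargers needed to match the throughput of a single petrol pump:
print(vehicles_per_pump_hour / vehicles_per_charger_hour)  # 7.5
```

In other words, a highway stop would need seven or eight chargers to move as many vehicles per hour as one pump, before you even consider peak-time queues.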
The Tesla semi-trailer and the roadster are, however, not just extra risk. They can help Tesla raise capital it sorely needs. Pre-orders of the first 1000 roadsters are available by putting down $250,000. If Tesla can find 1000 people willing to put $250k on ice for a few years, that will put $250,000,000 into its pockets. Its most recent cash burn was $1.4 billion in a quarter, so $250 million would buy it a bit over two weeks.
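Checking that runway arithmetic (assuming roughly 13 weeks in a quarter), $250 million against a $1.4 billion quarterly burn is closer to two weeks than three:

```python
quarterly_burn = 1.4e9     # most recent quarterly cash burn
deposits = 1000 * 250_000  # 1000 roadster pre-orders at $250k each
weeks_per_quarter = 13

weekly_burn = quarterly_burn / weeks_per_quarter
runway_weeks = deposits / weekly_burn
print(round(runway_weeks, 1))  # 2.3
```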
Congestion charging is back on my list of good ideas
For a while there, I was influenced by the equity arguments against it. The lack of substitutes for travel, and the unique role of commuting to work in a person’s well-being, tipped me against congestion charging. Good economic reform doesn’t throw out equity every time it can get an efficiency dividend, and I decided the equity problems made congestion charging unworkable.
But I’m swinging back to support for a simpler price signal. What has captured my attention is the following graph from a new Grattan Institute report. It shows the extent of congestion in Sydney. Amazingly, most people experience almost no congestion. Their commutes are swift.
What this tells me is that the impact of a congestion charge is not likely to be widespread. Serious congestion, of more than ten minutes in a trip, is confined to a small subset of commuters.
That subset is likely to be going into the CBD, where congestion is real.
Remember that despite the importance of CBDs, most jobs are still in the suburbs. If we know one thing about CBD jobs – especially nine to five CBD jobs – it is that they tend to be the good kind.
City centres are where the business services jobs are. The specialised jobs that pay big coin, as opposed to the population-serving jobs (pharmacies, florists, bakers, doctors, schools) that are found disproportionately in the suburbs.
It looks like driving into the city in peak hour is an elite problem. No wonder it gets so much attention. The Grattan analysis makes it clear that congestion charging would really only have to be applied in a narrow area.
This fact also counters the argument that congestion charging can’t be introduced until better public transport happens. Melbourne and Sydney have radial public transport systems that provide terrific CBD access.
Traffic is bad. The absence of price signals on the use of existing infrastructure causes crowding and delays. You end up listening to way too much FM radio. But that might not be the most costly effect. The big downside is probably the pressure to build yet more infrastructure.
Daniel Andrews has green-lit the West Gate tunnel – a big freeway that will not only soak up $5.5 billion but also lock Victorians into a regimented tolling regime (not a congestion charge system) for decades.
Big freeway projects have a lot of side effects.
One is making the places they travel through less pleasant. Place-making is a big theme in urban planning now, and a lot of money is spent on making areas seem nice. This “tunnel”, which is actually an elevated road for a good section of its length, is kind of the opposite of place-making.
A second side effect is city-shaping. You can cut travel times to the city, but that encourages yet more sprawl and inefficient urban form. (Thanks, Marchetti’s constant.)
If you want a policy that is likely to be equitable, can potentially conserve scarce government funds for more valuable projects, and prevent the paving over of the inner city, then congestion charging is your horse.
To finish with, here’s some data to make you go “huh!” – rain, apparently, has no effect on traffic:
Sometimes you can see a policy change coming a mile off. For about the last two decades, drug legalisation looked like such a case.
The positive results of decriminalisation in Portugal, and the examples of marijuana legalisation in Uruguay and various states of the US were becoming more widely known. The Penington report in 1996 argued for decriminalisation of marijuana and when Victorian Premier Jeff Kennett ignored its recommendations it was seen as a stance justified only by retail politics.
It seemed only a matter of time before expert recommendations on decriminalisation and legalisation were taken on board by Australia and nations across the world. An armistice was about to be announced in the increasingly stupid war on drugs. So it seemed.
Then the opiates crisis began. America is in the grip of a shocking wave of premature mortality, caused by addiction to opiates. The scale of it is awful – at 32,000 deaths a year, roughly equal to the number killed by firearms in that country.
(If you’d like your faith in journalism to be restored utterly while your heart is smashed into a million irrevocable pieces, I recommend this piece, Seven Days Of Heroin, from the Cincinnati Enquirer.)
The US opiates crisis has forced some hard thinking on the merits of legalisation (mostly for drugs beyond marijuana).
Opiates are not only a gateway to heroin abuse but a problem in themselves. Legal opiates accounted for 20,101 overdose deaths in the USA in 2015 compared to 12,990 related to heroin. If a legal drug, tightly controlled by law and available only under prescription, can be abused in a way that spirals way out of control, what does that say about the prospects of ending prohibition of drugs?
TOO MUCH TRUST
Even with legalisation, no drug is going to end up as available as flour at the supermarket. There will always be controls – regulation, licensing, etc. Choosing them is critical. But there is one shortcut we tend to take.
We love to rely on doctors as one of those controls. “Only available via prescription” sounds like a big barrier to drug availability. We have a lot of trust in doctors at a personal care level and that transfers over to a policy level.
Meanwhile, even the best-intentioned doctors are at the mercy of a pharmaceutical system that is itself far from perfect.
(If you’d like your faith in journalism to be further cemented while your faith in capitalism is smashed into a million irrevocable pieces, I recommend this piece, ‘You want a description of hell?’ OxyContin’s 12-hour problem, from the LA Times. It describes how a big pharmaceutical company lied about its products, got loads of people hooked on opiates and evaded a whole lot of systems designed to stop exactly that from happening.)
To some extent this is like yesterday’s story on Elon Musk. It bothers me when too much trust is vested in an entity, person or institution that doesn’t deserve it. And nobody deserves as much trust as we invest in doctors without a panopticon of ombudsmen, review committees and inspectors.
I think we can move towards legalisation of drugs. But what is crucial in regulating anything are the fine details of how it is controlled.
I wrote about this in The Right Amount of Smoking. Finding the exact sweet spot for control and legalisation is hard. You can fiddle with public and private ownership of suppliers, taxation, occupational licensing, sales licensing and controls on consumption.
At this stage, we probably don’t have enough controls for gambling, and we have too many of the wrong kinds for most drugs.
Finding the right kinds of control is hard and requires ongoing adjustment of the policy settings. Trying to outsource the difficulty we have in solving that to doctors is an attractive shortcut, but not the answer.
Elon Musk gets on my nerves. Whenever I see him in a headline my teeth start grinding.
But why? I agree with all his goals. I love the idea of clean energy. I want better batteries. I’m excited by colonising the universe and digging cheaper tunnels. So why does his every pronouncement get me upset?
I’ve been dwelling on this recently, and can only conclude it’s because of the lack of public skepticism he encounters.
Fly to most places on Earth in under 30 mins and anywhere in under 60. Cost per seat should be… https://t.co/dGYDdGttYd
Whenever I think about the future, I like to consider it in probabilistic terms. So when I hear Elon Musk talk about using rockets to travel from New York to Sydney in an hour, I naturally try to imagine what the likelihood of this happening is. I generally come up with numbers awfully close to zero.
Apparently other people’s thinking goes off in different directions, wondering about comfort during take off:
I don’t find myself thinking about g-forces. I’m too busy puzzling over why he should be able to make a roof including solar panels for less than the price of a roof. What does he think roof manufacturers have been doing for all this time?
Musk is not short of ambition or afraid to make his life more complex. For example, the original Tesla plan had nothing in it about automation or self-driving. He just bolted that onto the plan, presumably expecting it would be doable if the engineers just tried hard enough.
I remain skeptical.
When people think about the progress of science, they have an awful tendency to be swayed by survivor bias. They think especially about progress in personal electronics – because that’s where the progress is. They infer that technology can utterly transform itself within a decade or two.
But when you take a broader sample, you see something different. For every iPhone that did get invented, a flying car failed to be invented. While we beat back AIDS, cures for dementia and multiple sclerosis languished – and not for want of effort. You can’t tell in advance which fields will yield to effort.
I was a big fan of an old website called Paleo Future, which goes back and looks at old predictions of the future. They’re mostly silly.
In fifty years, most Musk plans will seem as silly. But they’re being repeated across all forms of media. That credulity, and the adulation that goes with it, really rustles my jimmies.
The other relevant cognitive bias is the base rate fallacy. People ignore the fact that in a given domain (colonising space, say) background probabilities of success are very low. They prefer instead to focus on some other seemingly salient factor, like whether the person making the plan to do so is a genius. (And I’m perfectly willing to admit Musk is.)
Now, the charm of having so many cognitive biases running in your favour is that you can attract a lot of capital and hire a lot of good employees. You get to make a lot of bets at once. Take one 5 per cent chance, you’re set to fail. But take ten and you have a 40 per cent chance of one of them coming good.
So I’d be surprised if everything Musk tries from hereon turns to poop. He can probably go down in history as a genius inventor. But at the moment he’s getting way ahead of himself.
Musk’s strategic thinking has worked well so far for Tesla, but past performance is no guarantee of future performance. You only need to look back on his Tesla “Master plan Part Deux” from 2016 to get a sense of how iffy it can be. It contained a very peculiar section on taking the aisles out of buses to make room for more seats. Ignore for a moment that aisles are important to buses – the point is that that kind of fine detail has no place in a strategic plan. Shortly afterward, he walked back the whole section on buses anyway. The whole thing made me wonder if his success came because of or despite his strategic vision.
It is possible that long before he has a chance to be proven wrong on intercontinental travel, Mr Musk will have a reversal of fortune.
Tesla’s plan to ramp up production of Model 3s in a new facility looked risky to me from the start. Manufacturing is hard and Tesla is new to doing it at scale. Today we learned initial production of the smaller more affordable car has fallen short.
I understand why they’re rushing. There are two reasons Tesla must sprint to survive.
First, Tesla has taken on so much debt that it needs a lot of sales to pay the interest (with the share price so high, I don’t understand why they wouldn’t just issue shares, which don’t need to be periodically refinanced).
Second, Tesla so far burns cash just to stay running. Having big debt and negative cashflow is not sustainable. There are not many times corporate finance is heart-in-your-mouth terrifying, but Tesla is making it like watching one of those guys in a wingsuit.
I think the Tesla corporate structure needs careful steering to not end up on the rocks. The technology and brand could well be for sale within five years, and gleefully bought up by someone like General Motors or Google. That’d be awful for Tesla investors and employees but mostly fine for society, as the losses incurred in creating all this technological progress would be internalised by all the investors who’ve done their dough.
SHOULD I BE SO MAD?
So am I justified in being so cross at Elon Musk and all the people who believe in him?
One argument is that I am not. To the extent that he is making great progress, I should shut up; and to the extent he is selling risky bets, his main victims are private investors, who are welcome to include a few risky bets in their portfolios.
While money will be wasted, technology will also be created. If it has value, that technology will presumably be up for grabs if Tesla (or SpaceX, or Hyperloop) ever needs to make their creditors whole, and society will still be able to benefit from them. From this perspective, the cult of Elon Musk is just a big scheme to get private investors to take the risks of moving science forward. And it’d be awfully pig-headed to be mad at that.
From another perspective, investor money is finite, and we should be careful to steer it toward those schemes with the highest chance of success.
So tell me, dear reader. Am I being too much of a grouch toward Mr Musk?