Last Friday the world looked up and nodded as the Woolworth’s of the mobile phone industry released 3D print designs for the body of a current mobile phone, the Lumia 820. Interesting, certainly. But hardly earth-shattering.
Not, at least, until Monday. When we all caught up with the implications.
Nokia says it wants to move away from global to local production. The phone casing is the easiest component to start this journey. However, the rest of the mobile can follow.
And once the economics work out (speculate away as to when that’ll be) whole phones will be built more cheaply in each individual market. And there’ll be a chance to customise for each location.
We know this is the way the world is going. The advent of 3D printing almost makes it a foregone conclusion. So with Nokia apparently out of the blocks first – and with Apple or Samsung rather unlikely to follow any time soon – will this be the idea that saves our favourite ’90s phone-maker?
Well, it’s certainly innovative. And Nokia needs to be that – in spades – to escape its death-spiral.
But – and this is a big but – is this the right kind of innovation from Finland?
Firstly, we don’t know when the economics of local printing will work. And it remains possible that, to a certain extent, they never will. Today, a printed hamburger would set you back $300,000.
Secondly, Nokia’s core issues are not related to its supply chain. What it lacks (or at least did until recently) is a decent smartphone people want to buy.
So it’s quite possible Nokia won’t be around when the local supply chain revolution kicks in.
But why then does the squirmy, excited feeling remain in the pit of our stomachs? Why is this a significant announcement?
Most importantly it shows that 3D printing is being taken seriously by those with enough capital to make a proper impact on the shape of the world. Nokia may be the first to announce such plans. But others – if not the other grown-ups of its own industry – will follow. Perhaps rather shortly?
More emotionally, this is Nokia showing that it can still think different. And have the courage to back a conviction. To mix a couple of clichéd metaphors – its back’s against the wall and that’s put fire in its belly.
And that’s a big deal. Many of us still have fond memories of phones that worked. That crashed so little you didn’t think of them as computers. All the while being utterly intuitive to use.
So what else does it have up its sleeve? Is there an advance in the pipeline that’ll steal market share? This felt impossible. Now? We’re not so sure.
And what does this say about the process of innovation? We all know the answer. But it’s still a surprise every time it’s illustrated so vividly:
Invention is born of necessity.
When things are working, you comfortably float on, not fixing it. Because ‘it’ ain’t broke.
Consider Apple. Can’t you just hear them in the boardroom? Those loyal to Jobs are, currently quite politely, reminding the assembled company that they haven’t invented a major new paradigm for three years. And that their lifeblood is inventing major new paradigms. So they’d better get that TV out into the marketplace. Quickly. And in ship-shape fashion.
But the others are feigning attention. Looking at the sales figures and wondering why anyone would be stupid enough to take a risk. Rome burns. They fiddle.
And all of us are guilty of that sometimes. To change our behaviour we need to change our context.
And Nokia’s context was changed. Substantially. Almost overnight. The only question – still mostly unanswered given the rope it has to hang itself – is how it reacts. Like Kodak or Yahoo!? Or like Apple before the iPod? For Apple, let us remind ourselves, was three-quarters dead. With one foot and a half in the grave.
Because if innovation is again taking the lead at Nokia, rumours of the Finns’ death might very well have been exaggerated.
They know better than us that the real local manufacturing revolution is probably a decade away. But they’re thinking long-term. And that has to give you confidence. The terminally ill don’t plan much.
Which, if you want one, is a Reason to Believe again.
But whatever you decide, one thing’s for sure. This soap opera of innovation and technology is highly entertaining. And it’ll keep on rolling.
A couple of days ago Life on the Edge was challenged. The Golden Utopia of THE FUTURE may never happen, we were informed. Economics might not work like that.
Having rationalised our way out of that one (phew, close call), it must be bad luck that this then happens: It has been persuasively suggested that our Utopia might actually be a tad oppressive. Not to mention downright scary. Utterly hostile to human existence, in fact.
Writing in Newsweek’s first digital-only magazine Tom Wolfe (author of The Bonfire of the Vanities) pens this disturbing description of visiting a data centre:
“…any human being who entered was engulfed, oppressed, unnerved, spooked out by an overwhelming droning sound and an X-ray-blue fluorescent light that made your skin look posthumous. The droning seemed to create a pressure upon your skull. Sometimes the drone would rise slightly, then lower…and rise…and lower. It made you think this enormous robo-monster was breathing… If you were knowledgeable enough even to be allowed to enter one of these huge server rooms, you knew that most of the droning came from air-conditioning units high as a wall… that ran constantly to keep this concentration of machines from auto-melting because of their own ungodly heat.
“You could know all that, but the robo-monster would ride your head so hard, you would turn anthropomorphic in spite of your superior brain…The robo-monster—it’s breathing…it’s starting to move…it’s got me by the head…it’s thinking with its CPU (Central Processing Unit) mind, thinking in algorithms, sequences of programmed decisions along the lines of “If A261, then G1432, and therefore B5556 or QQ42—” spotting discrepancies, making buy-sell decisions, even deceptive looks-like-a-buy feints to trick competing robo-brains into making foolish calculations. The monster’s human… No, he’s not human…No human brain could possibly think or act as fast, as accurately, as cunningly as a robo-brain.”
Is this what man-machine collaboration is to be like? Are we to be comprehensively bested by the technology?
If you believe, as we do, that Artificial Intelligence will – in the not-too-distant future – be of a higher order than that of the human brain, then this is a worrying piece of prose.
Forget the Terminator / Matrix scenarios. Perhaps our physical oppression is irrelevant. Are we to be made irrelevant by the machines’ superior abilities? Is this how we become their pets?
Of course, the superior intelligence is likely to cloak itself in a manner friendly to humans. But underneath the veneer – that we will know to be false – how will we feel? Knowing we are second-class citizens?
Alienated and inadequate are two words that come to mind.
So, a note – nay a plea – to future generations. When devising AI please ensure there is a direct interface into the human brain. Use AI to enhance our own consciousness. Provide an API so that we can merge and thereby experience what and how the machines think.
Or else develop the technology so that we dispense with our meat form, download our conscious selves and merge with the AIs.
For we have decided we don’t want to live in a world where machines are more intelligent than us. And where we can’t participate. We just can’t stand the inadequacy.
Of course, such thoughts cannot direct technology. The developments will evolve in the direction of their own logic. Can our collective will make it otherwise?
Well, perhaps there is reason for optimism. Perhaps we can feel sure that we won’t be second class.
There’s going to be massive demand from humans who wish to be absorbed into the AI. Not to be left behind. To think and feel as they do. To be artificially intelligent, operating at the same level. On equal terms. With our own – far extended – wetware as a base.
And if this is indeed a human desire then, once AI arrives, perhaps there will be sufficient forward motion for the next generation of technology to allow humans to cross the divide.
Some will presumably choose not to. But, we suspect, the delicate egos of many will mean they will choose to leave the pure human behind. And become one with AI.
Which we think is a truly mind-blowing idea.
See? Like in any other belief system, ours allows all simple crises of confidence to be rationalised away as well.
So we re-state the position:
The rate of change is increasing. Exponentially. And – on the whole – it’s good.
You either get to over-turn your outmoded paradigm. An exciting event all-round. Or you get to reaffirm what you believe to be true.
Either way, that’s a big win.
Today’s challenge comes from Robert J. Gordon at the National Bureau of Economic Research (NBER). In a properly academic paper he asks: ‘Is U.S. economic growth over?’ And he contends that, to all intents and purposes, it probably is.
<Pause to let you inspect the impact crater that claim may have made in your mind.>
Obviously that conflicts somewhat with LotE’s current view. As we opined just a few days before Christmas, our narrative would have you believe that, with the onset of ever-wider Artificial Intelligence, we’ll move into the fastest period of economic growth ever seen. And that this will continue to accelerate beyond any visible technological event horizon.
On the contrary, says Gordon. We had precious little growth before 1750. There can be no assumption that the rapidity of Twentieth Century development will continue. In fact, all evidence points to a slowdown over the last eight years. And given the challenges the US economy faces – demography; education; inequality; globalisation; energy/environment; and the overhang of debt – the slowdown will continue. With no discernible prospect of change.
So what’s a group of semi-sentient apes to think? Well, first things first. We’re talking about the future here. And so we don’t have the data to prove anything. In fact, there is by definition a complete absence of fact. It is a metaphysical, impossible discussion. Necessarily theoretical.
However, we can examine the logic of both arguments.
Gordon’s analysis assumes that the computing revolution has effectively run its course – at least in terms of its ability to make us more efficient and increase our output. It kicked off around 1960 and gave us a growth spurt between 1996 and 2004.
Exact empirical validation aside, we’d have little quibble with this.
What we would challenge is Gordon’s apparent view of the future.
Borrowing heavily from the thoughts of Robin Hanson’s Big History analysis, we would argue that the initial computing revolution is merely the fag-end of the manufacturing revolution.
Up until recently we only had narrow-AI machines that were capable of following narrow sets of rules to create narrow sets of outcomes within highly constrained environments.
Even super-computers could be described in these terms.
The move to wider-AI is coming as computers begin to solve much more complex problems. Machines are now capable of following complex sets of rules to create broad sets of outcomes within less constrained environments.
Self-driving cars, for instance.
Start networking wider-AI devices together and collectively they could start taking all sorts of decisions. Add in Big Data with some analysis tools and they might even get creative.
How much more productive will that be? How much more labour (will that still be the right word?) will it inject into the economy? The same amount, as Gordon rightly points out, that was introduced through women joining the workforce in the 20th century? Or considerably more?
So if we accept that wide-AI is on its way, it seems reasonable to expect this to have a fundamental effect on growth rates globally.
Gordon’s analysis is based on this not being the case. On the computing revolution being spent.
And therein lies your choice. Is the rate of change accelerating exponentially?
We think that the very existence of this blog implies that it is. We exist therefore it is, if you will.
Taken from today’s daily news, here’s a list of some things humanity knows how to do now that it didn’t know only a short while before:
1. Using helium instead of air in hard drives could make them significantly more efficient;
2. You can now take a bath with your phone and expect it to survive;
3. It’s possible, with existing technologies, to launch a PC-on-a-HDMI-stick and make computing even more portable than ever;
4. Budding Han Solos will be pleased that laser weapons are a reality (we’re aware of the dubious morality of that statement, btw – no more emails please); and
5. Scientists think that slimy, ocean-dwelling bacteria and their use of quantum physics will help develop more efficient solar power.
But most compelling of all, Google is spending heavily on a piece of wider-AI: an agent that will help search out everything you’re interested in and deliver updates as-and-when they are available, in the manner most convenient to you.
That line of enquiry sounds growth-generating to us. Imagine what one could achieve if new information were available the moment it was released without the need to look for it. What if other machines could also do something useful with it?
So on balance, our worldview appears to be reaffirmed. But what it does remind us is that any view of the future is only that – a view. And there may be other reasons why we’ve got it all horribly, desperately wrong.
Because today we were also reminded that those far mightier and much cleverer than us are sometimes spectacularly misguided.
Today Sir James Dyson, inventor of the bagless vacuum cleaner, denounced the government’s obsession with ‘Silicon Roundabout’, accusing it of valuing ‘the glamour of web fads’ over ‘more tangible technology’ as a way to boost export revenues.
We’ll leave you to draw your own detailed conclusions. But was it telling that Sir James chose to talk to The Radio Times?
We were depressed that such a great mind appears to have disappeared up its own suction pipe. So if you need cheering up as another of your heroes bites the proverbial, just remember that one day your descendants may breathe in smart dust and exhale pure data.
Goodness knows what Dyson’s and Gordon’s children’s children’s children will be up to though.
Which is more kinds of wrong? That Americans are signing a petition for Obama to start building their very own Death Star? Or that, as The Register reported, BHP Billiton – the world’s largest mining company – is having to modify a coal terminal because of rising sea levels?
Given America’s coining of the term ‘Axis of Evil’, both stories could be termed ‘ironic’. In fact, as The Register noted, they kinda skip past delicious, pay a passing nod to schadenfreude and land somewhere between “you’re kidding” and “too good to be true”.
But in a test of how hardy life on earth truly is, a group of scientists will be looking for life 3km below the surface of the Antarctic. Chances are any living thing won’t have encountered outside species for millions of years.
But if the guilt hasn’t gotten to you and you’re still driving a fossil fuel-burner, you can at least console yourself that you’ll save petrol with an app that finds a parking space.
Which might be handy when you pop down the road to pick up that 3D-printed item Staples might be selling you, from your own design, next year.
And you can also smile yourself to sleep knowing that we’re about to get real web TV – via the BBC merging iPlayer with the Red Button.
But then again, we’ve learned your TV might start watching you as you’re watching it. Will that keep you awake? Some already sound a bit queasy.
Well, there’s a social network perfect for them. While other networks were confirmed as the most popular thing to do on the web, Patients Like Me is helping the sick pool their experiences. Sounds like Hypochondriacs Anonymous to us. Or at least the only social network no one wants to join.
But then again, we’ve been wrong before.