Tagged: growth

The bagless inventor’s descendants won’t be breathing smart dust.

So today brings a fundamental challenge to Life on the Edge’s deeply held worldview. And you’ve probably already guessed how much we genuinely appreciate one of those.

You either get to overturn your outmoded paradigm – an exciting event all round – or you get to reaffirm what you believe to be true.

Either way, that’s a big win.

Today’s challenge comes from Robert J. Gordon at the National Bureau of Economic Research (NBER). In a properly academic paper he asks: ‘Is U.S. economic growth over?’ And he contends that, to all intents and purposes, it probably is.

<Pause to let you inspect the impact crater that may have formed in your mind.>

Obviously that conflicts somewhat with LotE’s current view. As we opined only a few days before Christmas, our narrative would have you believe that, with the onset of ever-wider Artificial Intelligence, we’ll move into the fastest period of economic growth ever seen. And that this will continue to accelerate beyond any visible technological event horizon.

On the contrary, says Gordon. We had precious little growth before 1750. There can be no assumption that the rapidity of twentieth-century development will continue. In fact, all the evidence points to a slowdown over the last eight years. And given the challenges the US economy faces – demography; education; inequality; globalisation; energy/environment; and the overhang of debt – the slowdown will continue. With no discernible prospect of change.

So what’s a group of semi-sentient apes to think? Well, first things first: we’re talking about the future here, so we don’t have the data to prove anything. In fact, there is by definition a complete absence of fact. It is a metaphysical discussion – impossible to settle, necessarily theoretical.

However, we can examine the logic of both arguments.

Gordon’s analysis assumes that the computing revolution has effectively run its course – at least in terms of its ability to make us more efficient and increase our output. It kicked off around 1960 and gave us a growth spurt between 1996 and 2004.

Exact empirical validation aside, we’d have little quibble with this.

What we would challenge is Gordon’s apparent view of the future.

Borrowing heavily from Robin Hanson’s Big History analysis, we would argue that the initial computing revolution is merely the fag-end of the manufacturing revolution.

Up until recently we only had narrow-AI machines that were capable of following narrow sets of rules to create narrow sets of outcomes within highly constrained environments.

Even super-computers could be described in these terms.

The move to wider-AI is coming as computers begin to solve much more complex problems. Machines are now capable of following complex sets of rules to create broad sets of outcomes within less constrained environments.

Self-driving cars, for instance.

Start networking wider-AI devices together and collectively they could take all sorts of decisions. Add in Big Data with some analysis tools and they might even get creative.

How much more productive will that be? How much more labour (will that still be the right word?) will it inject into the economy? The same amount, as Gordon rightly points out, that was introduced through women joining the workforce in the 20th century? Or considerably more?

So if we accept that wider-AI is on its way, it seems reasonable to expect this to have a fundamental effect on growth rates globally.

Gordon’s analysis is based on this not being the case. On the computing revolution being spent.

And therein lies your choice. Is the rate of change accelerating exponentially?

We think that the very existence of this blog implies that it is. We exist, therefore it is, if you will.

Taken from today’s daily news, here’s a list of some things humanity knows how to do now that it didn’t know only a short while ago:

1. Using helium instead of air in hard drives could make them significantly more efficient;
2. You can now take a bath with your phone and expect it to survive;
3. It’s possible, with existing technologies, to launch a PC-on-an-HDMI-stick and make computing more portable than ever;
4. Budding Han Solos will be pleased that laser weapons are a reality (we’re aware of the dubious morality of that statement, btw – no more emails please); and
5. Scientists think that slimy, ocean-dwelling bacteria and their use of quantum physics will help develop more efficient solar power.

But most compelling of all, Google is spending heavily on a piece of wider-AI: an agent that will help search out everything you’re interested in and deliver updates as and when they are available, in the manner most convenient to you.

That line of enquiry sounds growth-generating to us. Imagine what one could achieve if new information were available the moment it was released without the need to look for it. What if other machines could also do something useful with it?

So on balance, our worldview appears to be reaffirmed. But it does remind us that any view of the future is only that – a view. And there may be other reasons why we’ve got it all horribly, desperately wrong.

Because today we were also reminded that those far mightier and much cleverer than us are sometimes spectacularly misguided.

Today Sir James Dyson, inventor of the bagless vacuum cleaner, denounced the government for its obsession with ‘Silicon Roundabout’ and for valuing ‘the glamour of web fads’ over ‘more tangible technology’ to boost export revenues.

We’ll leave you to draw your own detailed conclusions. But was it telling that Sir James chose to talk to The Radio Times?

We were depressed that such a great mind appears to have disappeared up its own suction pipe. So if you need cheering up as another of your heroes bites the proverbial, just remember that one day your descendants may breathe in smart dust and exhale pure data.

Goodness knows what Dyson’s and Gordon’s children’s children’s children will be up to though.