
The bagless inventor’s descendants won’t be breathing smart dust.

So today brings a fundamental challenge to Life on the Edge’s deeply held worldview. And you’ve probably already guessed how much we genuinely appreciate one of those.

You either get to overturn your outmoded paradigm – an exciting event all round – or you get to reaffirm what you believe to be true.

Either way, that’s a big win.

Today’s challenge comes from Robert J. Gordon at the National Bureau of Economic Research (NBER). In a properly academic paper he asks: ‘Is U.S. economic growth over?’ And he contends that, to all intents and purposes, it probably is.

<Pause to let you inspect the impact crater that may have formed in your mind.>

Obviously that conflicts somewhat with LotE’s current view. As we opined just a few days before Christmas, our narrative would have you believe that, with the onset of ever-wider Artificial Intelligence, we’ll move into the fastest period of economic growth ever seen. And that this will continue to accelerate beyond any visible technological event horizon.

On the contrary, says Gordon. We had precious little growth before 1750. There can be no assumption that the rapidity of Twentieth Century development will continue. In fact, all evidence points to a slowdown over the last eight years. And given the challenges the US economy faces – demography; education; inequality; globalisation; energy/environment; and the overhang of debt – the slowdown will continue. With no discernible prospect of change.

So what’s a group of semi-sentient apes to think? Well, first things first. We’re talking about the future here. And so we don’t have the data to prove anything. In fact, there is by definition a complete absence of fact. It is a metaphysical, impossible discussion. Necessarily theoretical.

However, we can examine the logic of both arguments.

Gordon’s analysis assumes that the computing revolution has effectively run its course – at least in terms of its ability to make us more efficient and increase our output. It kicked off around 1960 and gave us a growth spurt between 1996 and 2004.

Exact empirical validation aside, we’d have little quibble with this.

What we would challenge is Gordon’s apparent view of the future.

Borrowing heavily from Robin Hanson’s Big History analysis, we would argue that the initial computing revolution is merely the fag-end of the manufacturing revolution.

Up until recently we only had narrow-AI machines that were capable of following narrow sets of rules to create narrow sets of outcomes within highly constrained environments.

Even super-computers could be described in these terms.

The move to wider-AI is coming as computers begin to solve much more complex problems. Machines are now capable of following complex sets of rules to create broad sets of outcomes within less constrained environments.

Self-driving cars, for instance.

Start networking wider-AI devices together and collectively they could start taking all sorts of decisions. Add in Big Data with some analysis tools and they might even get creative.

How much more productive will that be? How much more labour (will that still be the right word?) will it inject into the economy? The same amount, as Gordon rightly points out, that was introduced through women joining the workforce in the 20th century? Or considerably more?

So if we accept that wide-AI is on its way, it seems reasonable to expect this to have a fundamental effect on growth rates globally.

Gordon’s analysis is based on this not being the case. On the computing revolution being spent.

And therein lies your choice. Is the rate of change accelerating exponentially?

We think that the very existence of this blog implies that it is. We exist therefore it is, if you will.

Taken from today’s daily news, here’s a list of some things humanity knows how to do now that it didn’t know only a short while before:

1. Using helium instead of air in hard drives could make them significantly more efficient;
2. You can now take a bath with your phone and expect it to survive;
3. It’s possible, with existing technologies, to launch a PC-on-an-HDMI-stick and make computing more portable than ever;
4. Budding Han Solos will be pleased that laser weapons are a reality (we’re aware of the dubious morality of that statement, btw – no more emails please); and
5. Scientists think that slimy, ocean-dwelling bacteria and their use of quantum physics will help develop more efficient solar power.

But most compelling of all, Google is spending heavily on a piece of wider-AI: an agent that will help search out everything you’re interested in and deliver updates as and when they become available, in the manner most convenient to you.

That line of enquiry sounds growth-generating to us. Imagine what one could achieve if new information were available the moment it was released without the need to look for it. What if other machines could also do something useful with it?

So on balance, our worldview appears to be reaffirmed. But it does remind us that any view of the future is only that – a view. And there may be other reasons why we’ve got it all horribly, desperately wrong.

Because today we were also reminded that those far mightier and much cleverer than us are sometimes spectacularly misguided.

Today Sir James Dyson, inventor of the bagless vacuum cleaner, denounced the government’s obsession with ‘Silicon Roundabout’, accusing it of valuing ‘the glamour of web fads’ over ‘more tangible technology’ to boost export revenues.

We’ll leave you to draw your own detailed conclusions. But was it telling that Sir James chose to talk to The Radio Times?

We were depressed that such a great mind appears to have disappeared up its own suction pipe. So if you need cheering up as another of your heroes bites the proverbial, just remember that one day your descendants may breathe in smart dust and exhale pure data.

Goodness knows what Dyson’s and Gordon’s children’s children’s children will be up to though.

Homemade silicon, man-made tornadoes and a future made by Google.

‘Man-made’ and ‘homemade’ were definitely the buzzwords of the day.

We’ll be watching the man-made moon collisions, as two gravity-mapping satellites make their crash landings at 10pm UK time.

On a more positive note, tornadoes of human creation are being touted as the latest clean power source. Which seems far more palatable – and effective – than the wee-power of a couple of weeks ago.

And if you want to make a microchip in the comfort of your own home, that now seems feasible too. Although no-one’s yet suggesting that you can create God Particles at home.

Not least because there seems to be some confusion at CERN over exactly how many types of those pesky Higgs bosons they’ve found. Would one of those touchy-feely-smelly computers predicted by IBM help the world’s brightest physicists read the data? Or perhaps we should set the artificial intelligences building video games of their own devising on the problem?

But all of this ignores the really big news of the day, at least from an Edge Tech perspective. Redoubtable futurologist Ray Kurzweil has just been given the top engineering job at Google. This seems to suggest the search behemoth fully intends to keep inventing the future.

That’s exactly the statement of intent we like to see but let’s just hope this holy alliance remembers the Prime Directive of ‘Don’t be evil’.

And if all these ideas have delayed your progress and you’re now running late, there’s an app for that too. It can’t teleport you to where you need to be instantly. But it will tell you when you’ll arrive – and help make your excuses to those left waiting.

Maybe you know someone who could use a copy for Christmas?


Never say we’re afraid of a big question.

Seth Godin posed it. We’re up for answering it. To paraphrase:

“Will technology steal one of humanity’s defining characteristics, agency – our ability to make a moral decision and take responsibility?”

He explained it thus:

“A soldier following orders is not a murderer, as he doesn’t have agency – society doesn’t generally want its soldiers questioning orders from our generals. But the industrial age has taken this absolution to ever-higher heights. Every worker in every job is given a pass, because he’s just doing his job. The cigarette marketer or the foreman in the low-wage sweatshop. As the industrial company sputters and fades, there’s a fork in the road. In one direction lies the opportunity to regain agency. In the other direction is the race to the bottom.”

So which way are we going?

For our money, technology will provide more opportunities for agency to disappear. If the Milgram Experiment taught us anything, it was that the combination of authority and technology will tend to lead us to inflict harm – with potentially lethal consequences.

So more technology will somewhat inevitably provide more opportunity to become desensitised to the consequences of our actions.

But this isn’t a clincher. Opportunity isn’t causality.

As information technologies continue to pervade our lives, the consequences of our actions are also better-documented. The news travels wider, at a much faster pace.

Look at the moral panics that surround us every day. The Daily Mail’s coverage of ‘killer robots’, for instance. These panics create new taboos in the popular mind. And immediate backlashes. The consumer reaction to Starbucks’ reluctance to pay tax in the UK was instant. Decisions made under these conditions may not be optimal, but they do tend to prevent the targeted abuse.

Perhaps today’s announcement that Google will use Big Data and drone aircraft to help save endangered species is an example of how better information makes us more – not less – responsible? Because there is more we can actually do to help.

So where does that leave us? As eternal optimists we think the correct(ish) fork will be taken. A lack of moral responsibility may occur more often. But when it does it will be shorter-lived. And therefore less widespread.

This is not an entirely positive view. Yet it is not the race to the bottom Godin may fear.

Because humans do not want to be held up in public and criticised. So goodness knows how the boy who sold his kidney to buy an iPhone feels today. But his experience is useful because his decision may now be helping to create a moral mood around the practice that may well dissuade others.

Perhaps such disapprobation will also put us off an app featured in today’s headlines that lets us have a fantasy relationship with celebrities. Although – like drugs and pornography – it might just mean it gets used behind closed doors.

So hopefully other developments documented today will make bigger impacts. LinkedIn CEO Jeff Weiner wants his company’s platform to: “digitally map the global economy, identifying the connections between people, jobs, skills, companies, and professional knowledge — and spot in real-time the trends pointing to economic opportunities.”

A sort of digital, economic cartography? But if his vision falls short, perhaps making brain cells from wee will compensate? We’re certainly looking forward to free WiFi in London’s Black Cabs. And we’re sure the nano-tech condom that protects and then disappears will make a positive contribution, while we’re very much looking forward to watching more geeks on TV.

OK! Fair cop! But we only said we weren’t afraid of the big questions. We didn’t say we specialised in them alone.

Until tomorrow…