Artificial intelligence – can we keep it in the box?

 

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Exploding intelligence?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”. He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so, what should we do about it? Unlike typical suspicious parcels, after all, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or a good bomb rather than a bad bomb, perhaps)?

AI as a low achiever

Optimists sometimes take comfort from the fact that the field of AI has a very chequered past. Periods of exuberance and hype have been mixed with so-called “AI winters” – times of reduced funding and interest, after promised capabilities fail to materialise. Some people point to this as evidence that machines are never likely to reach human levels of intelligence, let alone to exceed them. Others point out that the same could have been said about heavier-than-air flight.

 

Further information

For a thorough and thoughtful analysis of this topic, we recommend The Singularity: A Philosophical Analysis by the Australian philosopher David Chalmers. Jaan Tallinn’s recent public lecture The Intelligence Stairway is available as a podcast or on YouTube via Sydney Ideas.

The Centre for the Study of Existential Risk

The authors are the co-founders, together with the eminent British astrophysicist Lord Martin Rees, of a new project to establish a Centre for the Study of Existential Risk (CSER) at the University of Cambridge. The Centre will support research to identify and mitigate catastrophic risk from developments in human technology, including AI – further details at CSER.ORG.
