Just about two months ago, on a (typically) gorgeous spring day in May, I found myself back on the Stanford campus to attend a day's worth of talks by a distinguished panel of speakers at a symposium entitled, modestly enough, The Singularity Summit.
What on Earth is that about?
Here's what the Introduction to the day's program says:
The singularity scenario is a hypothesized "event horizon" in human technological development beyond which our models of the future cease to give reliable answers. The hypothesis rests on the creation of "superintelligence": any future intellect, possibly strong artificial intelligence, that tremendously eclipses the best human minds in practically every field, including scientific creativity, general wisdom, and social skills.
In the past few years, Ray Kurzweil, the lead speaker at the "summit", has been perhaps the most visible exponent (that's kind of a pun) of the singularity idea, through a series of books, beginning with (as editor) The Age of Intelligent Machines (1990), followed by (as author) The Age of Spiritual Machines (1999) and The Singularity Is Near (2005).
Kurzweil writes:
What, then, is the singularity? It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's own particular life.
But Kurzweil is hardly the originator of the idea. It was foreshadowed in the 1950s by John von Neumann, the mathematician and major contributor to early computer science, and in the 1960s by the statistician I. J. Good.
A more organized presentation of the singularity idea came from mathematician and (naturally) science fiction writer Vernor Vinge in the 1980s, culminating in his 1993 paper The Coming Technological Singularity, which he updated slightly in 2003. At the beginning of that paper Vinge wrote, "Within thirty years [by 2023], we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
Of course, there's plenty of skepticism that anything like this singularity will actually occur in the foreseeable future, let alone by 2023. Many people are skeptical (to say the least) not only of the possibility, but also of the wisdom of allowing such a thing to happen even if it is possible. I won't go into that here. But one of the less hostile skeptics is Douglas Hofstadter, who took an interest in the idea around the time Kurzweil's Spiritual Machines book appeared in 1999. Shortly thereafter he came out with a paper discussing his philosophical reservations, "Moore's Law, Artificial Evolution, and the Fate of Humanity".
Hofstadter was the second speaker at the Summit, and his presentation was (in my opinion) as interesting as Kurzweil's. The third speaker whose presentation I found valuable (i.e., one that added much to the discussion) was Sebastian Thrun, current director of the fabled Stanford AI Lab. Thrun mostly described his laboratory's work on Stanley, the autonomous robotic vehicle that won the 2005 DARPA "Grand Challenge". He had little to say about the Singularity per se, and instead presented Stanley as representing, in his humble (and possibly correct) opinion, the current state of the art in artificial intelligence.
I attended the Summit very much with the idea of writing about it here. But after all was said and done, I couldn't think of much that was very useful to say. In a private discussion group I summarized my impressions thus:
Maybe Hofstadter had the right attitude. ... I think his point that this sort of thing needs much more rigorous discussion is very apt. Science-fictional blue-sky thinking is all well and good, but it will not be what enables this kind of thing to actually be implemented (and, one hopes, in a benign way).
Kurzweil's ideas are interesting, once you get beyond just extrapolation of exponential growth curves. And he may well be the most rigorous thinker (at least on the relevant topics) of all the speakers, excepting the SAIL guy (Thrun). (Drexler appeared very bored, and didn't even bother to come back after lunch.) But so far I can't see much "science" in these ideas...
Oh, yeah. I'd also like to have had someone talk rigorously and concretely about near-term prospects for "life extension". (Prospects more than, say, 50 years out won't help most of us much.) But even Kurzweil didn't get into this. It looks like most non-biologists don't care to explore the subject much beyond high-level generalities.
I guess the point to be made here is that this discussion is all about the possible future of artificial intelligence, not its current state of the art. And as Yogi Berra reputedly said, prediction is very difficult, especially about the future. It's easy to hold forth at great length about whether the Singularity that Kurzweil and others foresee is good or bad; that, after all, is the province of mere philosophers and op-ed writers. Trying to assess what science and technology are actually capable of doing in the near future is much harder, to say nothing of the farther future.
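(As an aside, the "exponential growth curve" argument itself is simple enough to state concretely: fit a straight line to the logarithm of some capability measure over time, then project it forward. Here's a minimal Python sketch of that arithmetic. The data points are rough, invented-for-illustration transistor counts, not Kurzweil's actual dataset, and the projection step is precisely the part skeptics object to.)

    # A toy illustration of the "exponential extrapolation" style of argument.
    # The figures below are rough, invented-for-illustration transistor counts,
    # not a real dataset; the point is the method, not the numbers.
    import math

    data = [(1971, 2.3e3), (1985, 2.7e5), (1999, 2.4e7), (2005, 3.0e8)]

    # Least-squares fit of log2(count) against year: the slope is
    # doublings per year.
    xs = [year for year, _ in data]
    ys = [math.log2(count) for _, count in data]
    n = len(data)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x

    print(f"doubling time: {1 / slope:.1f} years")

    # Project the fitted line to 2023. The fit itself says nothing about
    # whether the trend continues -- that is the contested assumption.
    projected = 2 ** (slope * 2023 + intercept)
    print(f"projected count in 2023: {projected:.2e}")

All the interesting disagreement at the Summit was about that last step, not about the curve fitting.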
But it's irresistibly interesting, and writing a lot more about this is on my to-do list.
------------------------------------
Additional references:
Singularity Summit Coverage - includes links to some press and blog articles, plus audio of all presentations and slides of most
Singularity Institute for Artificial Intelligence - one of the Summit's primary sponsors
Singularity Summit LIVE! - a series of live blog posts at the Responsible Nanotechnology site (this item is only the first)
Singularity Summit Opens - first of another series of blog posts (by Kurzweil's publicist)
The Singularity Summit - a Daily Kos diary by one Summit attendee
Technological Singularity - Wikipedia article
Stanford conference ponders a brave new world with machines more powerful than their creators - San Francisco Chronicle article (before the event)
The age of Ray Kurzweil - friendly bio in Kurzweil's hometown paper
KurzweilAI.net - huge collection of essays and news articles about AI, the Singularity, and related topics
The Singularity Is Near - promotional site for Kurzweil's book
Selected Annotated Bibliography of Douglas R. Hofstadter - in case you aren't familiar with his writing
Critical Discussion of Vinge's Singularity Concept - collection of 13 essays
The Singularity - brief list of references
------------------------------------
Tags: the singularity, artificial intelligence, Singularity Summit, Vernor Vinge, Raymond Kurzweil, Douglas Hofstadter