Singularity Summit, part two
Succinct Stanford Singularity Summit Summary
===================================
http://sss.stanford.edu
See also troyworks's entry for his summary of it.
Part one of this was friends-only because it's a little more personal. The Stanford Singularity Summit held yesterday was the largest conference in history on the topic of the possibly impending technological singularity (about 1200 people). Personally, I prefer Eliezer Yudkowsky's term "intelligence explosion" to the more popular "singularity", but so many people call it the singularity that I've pretty much resigned myself to using that term; it's too far ingrained in people's vocabularies to change now. It's important to realize that it has no relation whatsoever to the term as used in physics or math (aside from Vernor Vinge's original conflation of event horizons with singularities).
The following is a list of the people who gave talks yesterday, and what I thought of them.
Ray Kurzweil:
Ray kicked off the conference with quite a bang. His talk was nothing less than phenomenal. He's a very good speaker and (somewhat to my surprise) everything he said sounded pretty darn reasonable, and he had some stunning demos for us that completely blew me away. In the past, I've expressed doubts as to whether some of the claims he makes might be a bit unfounded or even downright "nutty". But I've said all along that I won't make any official judgment until I've read at least one of his books, and hearing his talk yesterday did a lot to quell my doubts about him. That's not to say I think everything he says must be right. It's just that I think he's going about this in a pretty reasonable way, that he's done quite a bit of research on it, and that, like anyone else with a theory that makes testable predictions, he might be right and he might be wrong... only time will tell for sure. He projects that machines will surpass human intelligence by approximately 2029 (this refers to "general" intelligence, as they've already surpassed human intelligence in some limited ways such as the ability to play chess), and that we will enter the singularity era around 2045. After that, according to him, the entire world will be transformed so much by continually self-improving AI that there's no way to predict anything. Whatever happens, I think it's a safe bet that posthumans will be the dominant species on the planet by then (if the rest of the timeframe is correct) and humans (who haven't been uploaded or merged into machines) will only play a minor role, if they survive at all. The first figure is surprisingly close to mine, even though I pulled mine out of a hat and have very little confidence in it.
The first demo Kurzweil gave was of a camera his company recently designed that takes pictures of documents, signs, etc., converts the image to text in real time, converts the text to speech, and reads it aloud in a computer voice that sounds very close to naturally human. This is the latest in a series of inventions he's made for blind people. Later he demoed a language translator: a person speaks in one language (say English) and the computer translates what they're saying into another language like French, German, or Spanish. The speaker had to talk very slowly and carefully, but it was still very impressive, and again the computer voice sounded close enough to human that it's hard to tell the difference.
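(For the curious, here's a minimal sketch of the kind of pipeline such a reading machine performs, using open-source OCR and text-to-speech libraries purely for illustration. This is my own toy version, not Kurzweil's actual design, and the input file name is hypothetical.)

```python
import pytesseract          # OCR wrapper around the Tesseract engine
import pyttsx3              # offline text-to-speech
from PIL import Image       # image loading

def read_aloud(image_path: str) -> None:
    """Photograph-to-speech: extract the text in an image and speak it."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()     # block until the speech finishes

read_aloud("sign.jpg")      # hypothetical input image
```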
My gut feeling on Kurzweil at this point is that he has made the best estimate anyone can make for when this stuff is going to happen. His problem, though, is that he's a bit too sure of himself. There are a lot of unknowns here, so there may be things nobody has thought of yet that neither he nor anyone else has taken into account. Related to this overconfidence, his personality is what I'd describe as a bit "arrogant". That said, it's very easy to argue he has a right to be: he has 12 honorary doctorates and has been honored by 3 US Presidents for his inventions. He's been described by some as the Thomas Edison of our age. He originally got into projecting future trends as a practical means of determining what he should start designing now so he could build it as soon as the enabling technology arrived. In doing this, he was naturally led to the conclusion that AI will surpass humans in another few decades, and that a couple of decades after that, the singularity will begin whether we're ready or not.
Well, I find myself again wanting a break even though I only got done describing the very first speaker... but this was the most I have to say on any of them, so the rest should go more quickly. I'll save the others for "part three".
no subject
Though not directly related, I heard rumors about SLAC doing an experiment on a rare archaeological item believed to contain some lost original formulae/theorems by Archimedes, some of which my sources have claimed were very similar to, or identical with, Newton's laws of mechanics! It was written with some sort of berry-based ink on animal skin and erased with lemon juice to make room for a prayer book.
no subject
Yeah, I think you are right in the sense that the event horizon is an entire 3-surface as opposed to a point on the space-time manifold.
That's not what I was referring to. You can of course have singular surfaces in manifolds, not just points, so that would not make any difference.
The technological singularity, as I've heard it explained, is a horizon in time that we can't see past. Nothing becomes infinite or singular there; we're simply shielded from seeing what's beyond. This is why I don't see any connection between the technological singularity and mathematical singularities.
no subject
This is really why Yudkowsky's phrase "intelligence explosion" is much superior to Vinge's, Kurzweil's, and others' term "singularity". I think even in Kurzweil's models, everything is expected to be smooth and infinitely differentiable through the singularity. In other words, we currently have a double exponential, e^(a·e^(b·x)), where the inner exponential has barely started to change, so the curve still looks like a single exponential. But by the time machines start self-improving, the second feedback loop becomes more relevant and you can no longer approximate it by a single exponential. That's the impression I get, although I haven't actually read his book so I could be misunderstanding his model.
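As a rough numerical sketch of that crossover (my own illustration with made-up constants, not anything from Kurzweil's book):

```python
import numpy as np

# Compare a double exponential f(x) = exp(a * exp(b*x)) to its
# single-exponential approximation exp(a*(1 + b*x)), obtained by
# linearizing the inner exponential while b*x is still small.
a, b = 0.05, 0.1                      # made-up rate constants
x = np.linspace(0, 50, 6)             # arbitrary "time" axis

double = np.exp(a * np.exp(b * x))    # full double exponential
single = np.exp(a * (1 + b * x))      # linearized (single-exponential) form

for xi, d, s in zip(x, double, single):
    print(f"t={xi:4.0f}   double: {d:10.2f}   single-exp approx: {s:6.2f}")
```

The two curves are indistinguishable at first and then diverge wildly, which is exactly the "looks like one exponential until the second feedback loop kicks in" behavior described above.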
no subject
would no longer be "smooth" and hence non-singular in a sense
meant to type "and hence singular in a sense"
no subject
BTW, thanks for the reports! I wish I could have caught this too.
no subject
I'm not too fond of the term either, but I wish you'd give Vinge some credit -- he's a subtle thinker, a lot more interesting than, for example, Kurzweil.
I guess I should apologize here for being so hard on Vinge. I have a tendency to give my opinions on things before I really know very much about them... it's something I need to work on :) I still don't understand why he would pick that term if that's what he meant, but then I haven't actually read anything from him directly (except vague statements like the one you give above, which don't tell me much). It's just that I find the term very misleading and I want to make sure people don't take it the wrong way. It could also be that I'm especially annoyed by it because I'm in physics, where I'm used to hearing the term nearly every day at work/school in a very different sense. Other people in my department have expressed similar opinions, but perhaps someone outside of physics wouldn't find it such a conflict... or maybe the way mathematicians look at it is a bit different.
Another thing is that I don't personally buy even the event horizon analogy... there may still be some things we can guess about what might happen afterwards, and even beforehand things are going to be pretty difficult to predict. Another term I wish they had chosen instead of singularity is "phase transition". I like that almost as much as "intelligence explosion", even though it's less descriptive.
no subject
Here's some guessing about what might happen afterwards. I like his original essay (updated) too; it's a nice counterpart to your criticisms of Kurzweil, in that it considers multiple ways it could happen, or might not happen -- neither Kurzweil's "this is your future, bitch" nor "gee, who knows?"
BTW "intelligence explosion" appears to have been coined by I.J. Good.
no subject
Of course, I should say I know nothing about the technical theories put forth with regard to the future evolution of technology, but I will also confess that I think it is all nonsense, if only because one cannot reasonably have access to the actual relative probabilities in the absence of a set of dynamical laws. In effect, my perception of the relative probabilities is completely tainted by my biased perception of human history, and consequently any predictions I make will be so flawed as to be completely useless, especially if the system exhibits chaos. In short, it is probably worse than trying to predict the weather. Of course, none of this is cause to abandon the attempts; after all, that is what science strives for.
I am much more used to making sharp distinctions between point and surface singularities, because in the real 4-space manifold the distinctions are vital: they lead to vastly different physical evolutions. If you have an entirely singular surface, then you are dealing with a separated surface in the topological sense. This is different, of course, from a singularity in the coordinates, which can be transformed away. In fact, a singularity is a real or physical singularity in the rigorous mathematical sense only if it cannot be transformed away. Even this viewpoint is neither necessary nor sufficient, though, because the 4-space, and the connection of its geometry to its matter via the Einstein equations, requires a careful and rigorous definition; this definition was given by Schmidt and involves the termination of geodesics. Nonetheless, a singular surface is also vastly different from a single point, because the two 4-spaces are topologically distinct, and physically distinct as well.
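To make the coordinate-versus-physical distinction concrete, the standard textbook example (my addition, not part of the comment above) is the Schwarzschild solution, which exhibits one singularity of each kind:

```latex
% Schwarzschild metric in geometric units (G = c = 1):
\[
  ds^2 = -\Bigl(1 - \tfrac{2M}{r}\Bigr)\,dt^2
         + \Bigl(1 - \tfrac{2M}{r}\Bigr)^{-1} dr^2
         + r^2\, d\Omega^2
\]
% The blow-up at r = 2M is a coordinate singularity: it disappears in,
% e.g., Eddington-Finkelstein coordinates. The one at r = 0 is physical,
% because the curvature invariant
\[
  K = R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} = \frac{48\,M^2}{r^6}
\]
% diverges there and cannot be removed by any change of coordinates.
```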
We seldom hear about these important differences in physics because, unless someone is working in a very specific theory of gravitation/cosmology, rarely does anyone bother working out the physical significance of topologically distinct space-times. As an example, MTW cite a case where the arising of a singularity is due to the initial conditions of the universe, in space-times known as Taub-NUT types.
Now, which of these greatly varied possibilities best describes the impending technological evolution is anyone's guess. I'm quite comfortable staying out of the whole business and saying that I simply don't know. But I will say this: from the biased manner in which I view history, I expect that nature is filled with surprises, and I am guessing that no one will come even close to predicting what actually unfolds.
no subject
in the sense that the event horizon is an entire 3-surface as opposed to a point on the space-time manifold.
I should also mention that we're dealing here with a function of one variable (progress versus time), so the distinction between surfaces and points is completely moot.