Singularity Summit, part two
Succinct Stanford Singularity Summit Summary
===================================
http://sss.stanford.edu
See also troyworks's entry for his summary of it.
Part one of this was friends-only because it's a little more personal. The Stanford Singularity Summit held yesterday was the largest conference ever held on the topic of the possibly impending technological singularity (about 1,200 people). Personally, I prefer Eliezer Yudkowsky's term "intelligence explosion" to the more popular "singularity," but so many people call it the singularity that I've pretty much resigned myself to using that term; it's too far ingrained in people's vocabularies to change now. It's important to realize that it has no relation whatsoever to the term as used in physics or math (aside from Vernor Vinge's original conflation of event horizons and singularities).
The following is a list of the people who gave talks yesterday, and what I thought of them.
Ray Kurzweil:
Ray kicked off the conference with quite a bang. His talk was nothing less than phenomenal. He's a very good speaker and (somewhat to my surprise) everything he said sounded pretty darn reasonable, and he had some stunning demos that completely blew me away. In the past, I've expressed doubts as to whether some of his claims might be a bit unfounded or even downright "nutty," but I've said all along that I won't make any official judgements until I've read at least one of his books, and hearing his talk yesterday did a lot to quell my doubts about him. That's not to say I think everything he says must be right. It's just that I think he's going about this in a pretty reasonable way, that he's done quite a bit of research on it, and that, like anyone else with a theory that makes testable predictions, he might be right and he might be wrong... only time will tell for sure.

He projects that machines will surpass human intelligence by approximately 2029 (meaning "general" intelligence; they've already passed human intelligence in some limited ways, such as the ability to play chess), and that we will enter the singularity era around 2045. After that, according to him, the entire world will be transformed so much by continually self-improving AI that there's no way to predict anything. Whatever happens, I think it's a safe bet that posthumans will be the dominant species on the planet by then (if the rest of the timeframe is correct) and that humans who haven't been uploaded or merged with machines will play only a minor role, if they survive at all. The first figure is surprisingly close to mine, even though I pulled mine out of a hat and have very little confidence in it.
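To make the flavor of this kind of projection concrete, here's a toy back-of-the-envelope version of the exponential extrapolation behind it. Every number in it is a made-up placeholder (the brain-equivalent target, the baseline, the doubling time), not Kurzweil's actual figures; the point is just the arithmetic of a fixed doubling time closing a large hardware gap.

```python
import math

# Toy extrapolation in the spirit of Kurzweil-style projections.
# Every number below is an illustrative assumption, NOT his actual figures.
BRAIN_OPS = 1e16       # assumed ops/sec for human-brain-equivalent compute
BASELINE_OPS = 1e11    # assumed ops/sec in an affordable machine, baseline year
BASELINE_YEAR = 2006
DOUBLING_TIME = 1.4    # assumed years per doubling of price-performance

# Solve BASELINE_OPS * 2**(t / DOUBLING_TIME) == BRAIN_OPS for t:
doublings = math.log2(BRAIN_OPS / BASELINE_OPS)
years = doublings * DOUBLING_TIME
print(f"Crossover around {BASELINE_YEAR + years:.0f} "
      f"({doublings:.1f} doublings needed)")
# -> with these made-up inputs, the crossover lands around 2029
```

Nudge the doubling time or the brain estimate even slightly and the date moves by years in either direction, which is exactly why I'd put wide error bars on any specific year.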
The first demo Kurzweil gave was a camera his company recently designed that photographs documents, signs, etc., converts the image to text in real time, and reads the text aloud in a computer voice that sounds very close to naturally human. This is the latest in a series of inventions he's made for blind people. Later he demoed a language translator: a person speaks in one language (say, English) and the computer translates what they're saying into another, like French, German, or Spanish. The speaker had to talk very slowly and carefully, but it was still very impressive, and again the computer voice sounded close enough to human that it's hard to tell the difference.
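Conceptually, the reading-machine demo is a three-stage pipeline: capture an image, recognize the text, synthesize speech. Here's a minimal sketch of that idea using the off-the-shelf open-source libraries pytesseract (OCR) and pyttsx3 (text-to-speech); it bears no relation to Kurzweil's actual implementation, and the file name is hypothetical.

```python
# Minimal image-to-speech pipeline sketch: capture -> recognize -> speak.
# Requires the Tesseract OCR engine installed on the system.
from PIL import Image
import pytesseract
import pyttsx3

def read_document_aloud(image_path: str) -> str:
    # Stage 1: load the captured photo of the document or sign.
    image = Image.open(image_path)

    # Stage 2: optical character recognition turns pixels into text.
    text = pytesseract.image_to_string(image)

    # Stage 3: a speech synthesizer reads the recognized text aloud.
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    return text

# Example (hypothetical file name):
# read_document_aloud("menu_photo.jpg")
```

The translation demo adds a machine-translation stage between recognition and synthesis, but the pipeline shape is the same.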
I think my gut feeling on Kurzweil at this point is that he has made the best estimate anyone can make for when this stuff is going to happen. But his problem is that he's a bit too sure of himself: there are a lot of unknowns here, and there may be factors nobody has thought of yet that no projection can take into account. Related to this overconfidence, his personality is what I'd describe as a bit "arrogant." That said, it's very easy to argue he has a right to be: he has 12 honorary doctorates, has been awarded honors by three US Presidents for his inventions, and has been described by some as the Thomas Edison of our age. He originally got into projecting future trends as a practical means of determining what he should start designing now so he can build it as soon as the enabling technology arrives. In doing this, he was naturally led to the conclusion that AI will surpass humans in another few decades, and that a couple of decades after that the singularity will begin, whether we're ready or not.
Well, I find myself again wanting a break even though I only got done describing the very first speaker... but this was the most I have to say on any of them, so the rest should go more quickly. I'll save the others for "part three".
The Singularity and Peak Oil
This can't be the case. We're also quickly approaching Peak Oil; some say we have already hit it. By most estimates, Peak Oil will arrive before any projected date for the singularity. If this is the case, we very simply will not have the power to drive the awesome technological revolution that the singularity promises.
Worse, we may be in for some very hard times as energy resources become scarce. The Industrial Revolution may come to a complete and total standstill, and future attempts to jumpstart it will always be doomed to failure, as there will no longer be any suitable fuels available to be consumed.
Perhaps you could bring up the concept of Peak Oil to someone at the Summit?
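For readers unfamiliar with it, the Peak Oil argument rests on Hubbert's depletion model: cumulative extraction follows a logistic curve, so annual production is bell-shaped and peaks when roughly half the recoverable oil is gone. A toy sketch, with all three parameters invented purely for illustration:

```python
import math

# Toy Hubbert-curve sketch of the "peak oil" model invoked above.
# All parameters are illustrative assumptions, not real estimates.
URR = 2.0e12      # assumed ultimately recoverable resource, barrels
K = 0.06          # assumed logistic growth rate per year
PEAK_YEAR = 2010  # assumed peak: the year half of URR has been extracted

def production(year: float) -> float:
    """Annual production (barrels/yr): derivative of the logistic curve."""
    x = math.exp(-K * (year - PEAK_YEAR))
    return URR * K * x / (1.0 + x) ** 2

for year in (1990, 2000, 2010, 2020, 2030):
    print(year, f"{production(year):.2e} barrels/yr")
```

At the assumed peak this works out to URR*K/4, about 3e10 barrels per year, which is at least the right order of magnitude for world production in the mid-2000s.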
no subject
Are you pretty confident that people will merge with machines? Or are you and the singularity people saying that humans may be made extinct by pure machines?
In the latter case, why would machines want mobility or some kind of body? Why would they want to occupy a lot of space? This is probably going to be phrased stupidly, but it seems like even people want to live without a body: on the internet, in their heads, in heaven as a floating soul.
I'd imagine an intelligent machine, if it knew it didn't need to worry about electricity and wasn't greedy or competitive (which might be necessary traits for evolution or intelligence, I don't know), would be content to sit in a closet and talk to other machines over the internet. Maybe I'm drawing too much from my idea of how a human genius like Albert Einstein would behave, though.
no subject
Earlier he had 2030 and 2040. Now he's increased the gap. That sounds so dumb to me. I can't think why the singularity wouldn't happen within days if not minutes of the emergence of machines surpassing human general intelligence. I believe that's Yudkowsky's opinion as well. But then again, I haven't read any of Kurzweil's books, just some essays on the web, so I'll shut up.
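The "days if not minutes" intuition can be made concrete with a toy model of recursive self-improvement: if each improvement cycle multiplies capability by a constant factor, and a smarter system finishes its next cycle proportionally faster, the total time to any capability level is a convergent geometric series. Both parameters below are invented purely for illustration:

```python
# Toy model of the "hard takeoff" intuition in this comment.
# Every parameter is an illustrative assumption.
GAIN = 1.5        # assumed capability multiplier per improvement cycle
FIRST_CYCLE = 30  # assumed days for the first self-improvement cycle

capability = 1.0  # human-level = 1.0 by definition
elapsed_days = 0.0
cycle_time = FIRST_CYCLE

for cycle in range(40):
    elapsed_days += cycle_time
    capability *= GAIN
    cycle_time /= GAIN  # a smarter system finishes the next cycle faster

print(f"{capability:.3g}x human level after {elapsed_days:.1f} days")
# Total time is bounded by FIRST_CYCLE * GAIN / (GAIN - 1) = 90 days,
# no matter how high capability climbs: the "days, not decades" intuition.
```

Under these assumptions the whole takeoff fits inside 90 days; a 16-year gap like Kurzweil's implicitly assumes the cycle time does not shrink this way.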
Was it More?
In the argument that has been going on here about the "Singularity," my understanding of the term comes from the combination of MEST and information densities. The combination of these two creates a singularity of sorts out of converging information upon the MEST substrate.
I.e., you get so much information converging upon a single point in Place (Matter/Space/Time) that it creates its own form of (E)nergy, which in turn creates the singularity that creates the event horizon beyond which we cannot see...
So it is not necessarily the technology itself that causes the singularity, but the convergence of so many different things upon a small area.
This is why they have created such things as the center for Interdisciplinary Studies at Stanford: they realized they need much more than just physics, computer science, and the like to begin to control the events swirling around the MEST that is creating the singularity. You also need to understand language and communication (two very different things), History, Sociology, Anthropology, Biology, Genetics, and so on...