[personal profile] spoonless
Succinct Stanford Singularity Summit Summary
============================================
http://sss.stanford.edu

See also [livejournal.com profile] troyworks's entry for his summary of it.

Part one of this was friends-only because it's a little more personal. The Stanford Singularity Summit held yesterday was the largest conference ever held on the topic of the possibly impending technological singularity (about 1200 people). Personally, I prefer Eliezer Yudkowsky's term "intelligence explosion" to the more popular "singularity", but so many people call it the singularity that I've pretty much resigned myself to using that term, since it's already too far ingrained in people's vocabularies to change. It's important to realize that it has no relation whatsoever to the term as used in physics or math (aside from Vernor Vinge's original confusion between event horizons and singularities).

The following is a list of the people who gave talks yesterday, and what I thought of them.

Ray Kurzweil:

Ray kicked off the conference with quite a bang. His talk was nothing less than phenomenal. He's a very good speaker, and (somewhat to my surprise) everything he said sounded pretty darn reasonable, and he had some stunning demos that completely blew me away. In the past, I've expressed doubts as to whether some of the claims he makes might be a bit unfounded or even downright "nutty". But I've said all along that I won't make any official judgements until I've read at least one of his books, and hearing his talk yesterday did a lot to quell my doubts about him. That's not to say I think everything he says must be right. It's just that I think he's going about this in a pretty reasonable way, that he's done quite a bit of research on it, and that, like anyone else with a theory that makes testable predictions, he might be right and he might be wrong... only time will tell for sure. He projects that machines will surpass human intelligence by approximately 2029 (this is referring to "general" intelligence, as they've already passed human intelligence in some limited ways such as the ability to play chess), and that we will enter the singularity era around 2045. After that, according to him, the entire world will be transformed so much by continually self-improving AI that there's no way to predict anything. Whatever happens, I think it's a safe bet that posthumans will be the dominant species on the planet by then (if the rest of the timeframe is correct) and humans (who haven't been uploaded or merged into machines) will only play a minor role, if they survive at all. The first figure is surprisingly near my own, even though I pulled mine out of a hat and have very little confidence in it.

The first demo Kurzweil gave was of a camera his company recently designed that takes pictures of documents, signs, etc., and then immediately, in real time, translates the image into text and the text into voice, reading it aloud in a computer voice that sounds very close to naturally human. This is the latest in a series of inventions he's made for blind people. Later he demoed a language translator, where a person speaks in one language (say English) and the computer translates what they're saying into another language like French, German, or Spanish. The person speaking had to speak very slowly and carefully, but it was still very impressive, and again, the computer voice sounded close enough to human that it's hard to tell the difference.
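Conceptually the pipeline is simple, even if doing it well in real time is not. Here's a minimal sketch of the same image-to-text-to-speech chain in Python; the library choices (pytesseract, pyttsx3) and the input filename are my illustrative stand-ins, not anything Kurzweil's device actually uses:

    # A minimal sketch of an image -> text -> speech pipeline, in the spirit
    # of the reading-machine demo. The libraries named here (pytesseract,
    # pyttsx3) are illustrative assumptions, not Kurzweil's actual system.
    from PIL import Image
    import pytesseract   # OCR via the Tesseract engine
    import pyttsx3       # offline text-to-speech

    def read_aloud(image_path):
        """OCR an image of a document or sign and speak the result."""
        text = pytesseract.image_to_string(Image.open(image_path))
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()   # blocks until the speech finishes
        return text

    print(read_aloud("sign.jpg"))   # hypothetical input image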

I think my gut feeling on Kurzweil at this point is that he has made the best estimate anyone can make for when this stuff is going to happen. But his problem is that he's a bit too sure of himself. There are a lot of unknowns here, so there might be things nobody has thought of yet that neither he nor anyone else has taken into account. Related to this over-confidence problem, his personality is what I'd describe as a bit "arrogant". That said, it's very easy to argue he has a right to be arrogant. He has 12 honorary doctorates, and has been awarded honors by 3 US Presidents for his inventions. He's been described by some as the Thomas Edison of our age. He originally got into projecting future trends as a practical means of determining what he should start designing now so he could build it as soon as the enabling technology arrives. In doing this, he was naturally led to the conclusion that AI would surpass humans in another few decades, and that after another couple, the singularity would begin whether we're ready or not.

Well, I find myself again wanting a break even though I only got done describing the very first speaker... but this was the most I have to say on any of them, so the rest should go more quickly. I'll save the others for "part three".

Date: 2006-05-15 06:21 am (UTC)
From: [identity profile] darius.livejournal.com
Hey, this is the second time I've seen you complain about Vinge's confusion here. I wonder where you saw that, because he explicitly disavowed any connection with black holes.

Date: 2006-05-15 07:26 am (UTC)
From: [identity profile] spoonless.livejournal.com
Well, if he was naming it after the purely mathematical term, then it seems to me he was just wrong entirely. The technological singularity is analogous to the event horizon of a black hole, but not at all analogous to any sort of singularity used in math or physics. The only explanation I can come up with for his mistake in naming it is that he was thinking "event horizon" and used the wrong word. But if he meant that it was going to be somehow like a mathematical singularity, that's certainly not right and so far I haven't encountered anyone who thinks that's what's going to happen. So either he's the only crackpot of the bunch, or he made a simple mistake not understanding black holes very well. The latter makes a lot more sense to me, and fits with what people have described online and in talks I've heard. If it was the former, then I don't know how it ever caught on... people should have realized he was crazy and used a completely different word.

Date: 2006-05-15 08:04 am (UTC)
From: [identity profile] spinemasher.livejournal.com
Yeah I think you are right in the sense that the event horizon is an entire 3-surface as opposed to a point on the space-time manifold. But one confusing issue that might justify interchanging the two words is the different choices of coordinate systems/transformations that exist inside and outside the event horizon. While there are coordinate systems that remain finite at the EH, there are many that do not; Schwarzschild and Boyer-Lindquist coordinates are singular there, to name just a couple. Kruskal-Szekeres coordinates, by contrast, are chosen deliberately for their non-singular behavior at the EH, which in effect joins two surfaces at a branch point.

Though not directly related, I heard rumors about SLAC doing an experiment on a rare archaeological item believed to contain some lost original formulae/theorems by Archimedes, some of which my sources have claimed were very similar to or identical with Newton's laws of mechanics! It was written with some sort of berry ink on animal skin and erased with lemon juice to make room for a prayer book.

Date: 2006-05-15 03:47 pm (UTC)
From: [identity profile] spoonless.livejournal.com

Yeah I think you are right in the sense that the event horizon is an entire 3-surface as opposed to a point on the space-time manifold.

That's not what I was referring to. You can of course have singular surfaces in manifolds, not just points, so that would not make any difference.

The technological singularity, as I've heard it explained, is a horizon in time beyond which we can't see. Nothing becomes infinite or singular there; it's just that we're shielded from seeing what's beyond. This is why I don't see any connection between the technological singularity and mathematical singularities.
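To make that concrete with the standard Schwarzschild example (in units where G = c = 1):

    ds^2 = -\left(1-\frac{2M}{r}\right)dt^2
           + \left(1-\frac{2M}{r}\right)^{-1}dr^2 + r^2\,d\Omega^2,
    \qquad R_{abcd}R^{abcd} = \frac{48M^2}{r^6}

The g_rr coefficient blows up at the horizon r = 2M, but the curvature invariant on the right stays perfectly finite there; it only diverges at r = 0, the genuine singularity. So the horizon is just a place where coordinates can go bad, not a place where anything physical blows up.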

Date: 2006-05-15 04:17 pm (UTC)
From: [identity profile] spoonless.livejournal.com
Actually, upon thinking about it I realized one way in which relating the technological singularity to mathematically singular coordinates might make sense... if the progress is continuous, but there is a discontinuity in the derivative near where AI starts self-improving, then the coordinates would no longer be "smooth", and hence would be singular in a sense, even though they aren't "blowing up" or anything. However, I think there is disagreement on whether there will be any discontinuity even in the derivative.

This is really why Yudkowsky's phrase "intelligence explosion" is much superior to the "singularity" term of Vinge, Kurzweil, and others. I think even in Kurzweil's models, everything is expected to be smooth and infinitely differentiable through the singularity. In other words, we currently have a double exponential e^(a*e^(b*t)), where the outer exponent is barely starting to change. But by the time machines start self-improving, the second feedback loop becomes more relevant and you can no longer approximate it by a single exponential. That's the impression I get, although I haven't actually read his book, so I could be misunderstanding his model.
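To illustrate with a toy sketch (the constants a and b here are arbitrary picks, not fitted to anyone's data): a double exponential agrees with a single exponential in value and slope at t = 0, but eventually runs away from it, all while staying perfectly smooth.

    import numpy as np

    # Toy comparison of a double exponential e^(a*e^(b*t)) with the single
    # exponential matching its value and slope at t = 0. Constants are
    # arbitrary choices for illustration only.
    a, b = 1.0, 0.05
    t = np.linspace(0, 100, 11)
    double_exp = np.exp(a * np.exp(b * t))
    # d/dt log(double_exp) at t=0 is a*b, so the matching single
    # exponential is e^a * e^(a*b*t):
    single_exp = np.exp(a) * np.exp(a * b * t)
    for ti, d, s in zip(t, double_exp, single_exp):
        print(f"t={ti:5.1f}  double={d:12.4g}  single={s:12.4g}")
    # Both curves are infinitely differentiable everywhere; the "takeoff"
    # is just the inner feedback loop becoming dominant.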

Date: 2006-05-16 07:18 am (UTC)
From: [identity profile] darius.livejournal.com
Yeah, that's closer to how Vinge (and von Neumann) meant it. "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." It's a cute analogy, not a model, like when you say "I'm done, modulo testing". I'm not too fond of the term either, but I wish you'd give Vinge some credit -- he's a subtle thinker, a lot more interesting than, for example, Kurzweil.

BTW, thanks for the reports! I wish I could have caught this too.

Date: 2006-05-16 04:16 pm (UTC)
From: [identity profile] spoonless.livejournal.com

I'm not too fond of the term either, but I wish you'd give Vinge some credit -- he's a subtle thinker, a lot more interesting than, for example, Kurzweil.

I guess I should apologize here for being so hard on Vinge. I have a tendency to give my opinions on things before I really know very much about them... it's something I need to work on not doing so much :) I still don't understand why he would pick that term if that's what he meant, but I haven't actually read anything from him directly (except vague statements like the one you give above, which don't tell me much). It's just that I see the term as very misleading and I want to make sure people don't take it the wrong way. It could also be that I'm especially annoyed by it because I'm in physics and I'm used to hearing the word nearly every day at work/school in a very different sense. Other people in my department have expressed similar opinions, but perhaps someone outside of physics wouldn't find it such a conflict... or maybe the way mathematicians look at it is a bit different.

Another thing is that I don't personally believe in even the event horizon analogy... since there may still be some things we can guess about what might happen afterwards, and even beforehand things are going to be pretty difficult to predict. Another term I wish they had chosen instead of "singularity" is "phase transition". I like that almost as much as "intelligence explosion", even though it's less descriptive.

Date: 2006-05-17 06:06 am (UTC)
From: [identity profile] darius.livejournal.com
I agree that the term is both confusing and a hype magnet.

Here's some guessing about what might happen afterwards. I like his original essay (updated) too; it's a nice counterpart to your criticisms of Kurzweil: considering multiple ways it could happen, or might not happen -- neither Kurzweil's "this is your future, bitch" nor "gee, who knows?"

BTW "intelligence explosion" appears to have been coined by I.J. Good.

Date: 2006-05-18 01:37 am (UTC)
From: [identity profile] spinemasher.livejournal.com
I wouldn't be surprised if there were a discontinuity in the derivative. After all, far more physical quantities change sharply, as opposed to smoothly, than most physicists would like to admit.

Of course, I should say I know nothing about the technical theories put forth with regard to the future evolution of technology, but I will also confess that I think it is all nonsense, only in that one cannot reasonably have access to the actual relative probabilities in the absence of a set of dynamical laws. In effect, my perception of the relative probabilities is completely tainted by my biased perception of human history, and consequently any predictions I make will be so flawed as to be completely useless, especially if the system exhibits chaos. In short, it is probably worse than trying to predict the weather. Of course, none of this is cause to abandon the attempts; after all, that is what science strives for.

I am much more used to making sharp distinctions between point and surface singularities because in the real 4-space manifold the distinctions are vital, as they lead to vastly different physical evolutions. If you have an entirely singular surface, then you are dealing with a separated surface in the topological sense. This is different, of course, from a singularity in coordinates, which can be transformed away. In fact, a singularity is a real or physical singularity in the rigorous mathematical sense if it cannot be transformed away. Though this criterion is neither necessary nor sufficient, because the 4-space and the connection of its geometry to its matter via the Einstein equations require a careful and rigorous definition. This definition was given by Schmidt and involves the termination of geodesics. Nonetheless, a surface being singular is also vastly different from a single point being singular, because the two 4-spaces are topologically distinct, and physically distinct as well.

We seldom hear about the important differences in physics because, unless someone is working in a very specific theory of gravitation/cosmology, rarely does anyone bother working out the physical significance of topologically distinct space-times. As an example, MTW cite a case where the arising of a singularity is due to the initial conditions of the universe, in space-times known as Taub-NUT types.

Now which one of these greatly varied possibilities best describes the impending technological evolution is anyone's guess. I'm quite comfortable staying out of the whole business and saying that I simply don't know. But I will say this: from the biased manner in which I view history, I expect that nature is filled with surprises, and I am guessing that no one will come even close to predicting what actually unfolds.

Date: 2006-05-15 04:26 pm (UTC)
From: [identity profile] spoonless.livejournal.com

in the sense that the event horizon is an entire 3-surface as opposed to a point on the space-time manifold.

I should also mention that we're dealing with a one-dimensional function here (progress versus time), so the distinction between surfaces and points is completely moot.

The Singularity and Peak Oil

Date: 2006-05-15 05:29 pm (UTC)
From: [identity profile] eurisko97.isa-geek.net (from livejournal.com)
Something that bothers me about Singularity/Transhuman enthusiasm (not that I don't think the idea is neat myself) is that it obviously takes for granted that the Industrial Revolution will continue as scheduled, and that the current level of energy output from fossil fuels will still be there, or will increase to the necessary levels.

This can't be the case. We're quickly approaching Peak Oil; some say we have already hit it. By all estimates, it seems that Peak Oil will arrive before the projected date of the Singularity. If so, we very simply will not have the power to drive the awesome technological revolution that the Singularity promises.

Worse, we may be in for some very hard times as energy resources become scarce. The Industrial Revolution may come to a complete and total standstill, and future attempts to jumpstart it will be doomed to failure, as there will no longer be any proper fuels available to be consumed.

Perhaps you could bring up the concept of Peak Oil to someone at the Summit?

Re: The Singularity and Peak Oil

Date: 2006-05-15 11:23 pm (UTC)
From: [identity profile] spoonless.livejournal.com
Nobody brought that up at the Summit, but I hosted a meeting on alternative energy and peak oil a few months ago for our Santa Cruz Future Salon.

The speaker we invited for that gave a pretty strong case that oil has either already peaked or, if it hasn't, that it will very soon. This is why people should be getting off their asses and switching over to better renewable energy sources today, rather than sitting around and waiting. Either way, I don't think it will affect the timescale for the singularity, but perhaps it's worth debating.

There are several issues I'd mention here... first, oil is a pretty inefficient energy source, as well as being dirty and bad for the environment. So I hope we'll switch to better ones (nuclear, solar, wind, fusion, hydrogen, etc.) sooner rather than later. Even if we just used solar, the sun will burn for billions more years, which is many millions of times longer than we need in order to reach the singularity. Another important issue here is to separate technological progress from consumption and population growth. Information technology has always grown exponentially, whereas things like population and GDP do not. So while there may be some relation between the two, they're quite different things.

Let's assume that people are stupid and stubborn, and don't switch away from oil during the rest of this decade. Then oil prices will rise, putting much higher selection pressure on them to switch during the next decade. Because of the scarcity, the profits to be made from developing renewable energy will be even greater, so even more money will be put into developing new technology, which could have spillover benefits for information technology that drive the exponential even faster. So any temporary lull due to a slowdown will be compensated in the long run by increased funding.

So, getting back to the distinction between the growth of knowledge/technology and the growth of population and consumption: let's say that for some bizarre reason something goes wrong, and even after oil prices skyrocket, people still refuse to switch to cleaner renewable energy sources. In this case, much of the population will start dying off. However, there will still be a core segment of the rich population which can afford the incredibly high oil prices. While some of their resources may go to aiding the people who are dying off, surely a lot will still go into developing new technology. So here the situation is a little debatable, but I think technological progress would still continue exponentially. The exponent could be slowed down in this case, but it's a pretty artificial case to begin with, since it's highly unlikely people would want to stick with oil to the point of killing themselves off.

One thing I'd also mention, just as a side comment, is how far ahead of the US France is. While the US gets most of its energy from coal and oil, the majority of homes in France are powered by nuclear. In the US, because of one incident at Three Mile Island, there is a lot of irrational fear regarding nuclear technology. I think the situation is exactly analogous to September 11th. Americans, for some reason, react out of fear to isolated random incidents and are willing to give up huge amounts of comfort (whether it be human rights or efficient energy) in order to feel like they're safe against another catastrophe, even if such a catastrophe is extremely unlikely.

Date: 2006-05-15 06:51 pm (UTC)
From: [identity profile] mpnolan.livejournal.com
I found TAOSM (The Age of Spiritual Machines) to be very easy reading, with a lot of food for thought per page. It didn't require much more of an attention span than reading LiveJournal, so it's good for wedge-reading between spurts of activity.

Date: 2006-05-15 10:59 pm (UTC)
From: [identity profile] spoonless.livejournal.com
Yeah, I'll probably do that pretty soon. I'm trying to decide between The Age of Spiritual Machines, The Law of Accelerating Returns, and The Singularity Is Near. I'm leaning towards just reading the last one, since it's the latest and, from what I hear, contains much of the content of the other two, updated with new research data. I just wish the title didn't sound so apocalyptic. But I guess the more sensational your title, the more people read it.

Date: 2006-05-16 03:54 am (UTC)
From: [identity profile] sid-icarus.livejournal.com
posthumans will be the dominant species on the planet by then ... and humans (who haven't been uploaded or merged into machines) will only play a minor role, if they survive at all

Are you pretty confident that people will merge with machines? Or are you and the singularity people saying that humans may be made extinct by pure machines?

In the latter case - why would machines want to have mobility or some kind of body? Why would they want to occupy a lot of space? This is probably gonna be phrased stupidly but it seems like (even) people want to live without a body - on the internet, in their head, in heaven as this floating soul.

I'd imagine an intelligent machine, if it knew it didn't need to worry about electricity and wasn't greedy or competitive (which might be necessary traits for evolution or intelligence, I dunno), would be content to sit in a closet and talk to other machines over the internet. Maybe I'm drawing too much from my idea of how a human genius like Albert Einstein would behave though.

Date: 2006-05-16 06:21 am (UTC)
From: [identity profile] spoonless.livejournal.com

Are you pretty confident that people will merge with machines?

I'm confident that I will. I can't really speak for anyone else, but if I had to guess I would say, yes, I expect the majority will do it eventually but there will still be groups, similar to the Amish today, that shun the use of technology in such a personal way.

Or are you and the singularity people saying that humans may be made extinct by pure machines?

Different people would say different things. I don't see extinction as a very likely possibility, but it is one possibility yes. I just added that as a qualification to indicate that I don't take it as guaranteed that they'll survive.

By dominant I just mean that humans will have little to no role in directing the course of future events on the planet and elsewhere in the solar system. Cockroaches currently outnumber humans, but I wouldn't call them the dominant species since humans are the ones who shape most of what goes on on the face of the earth. Similarly, humans may outnumber posthumans but they will be irrelevant in determining important events... as to whether they'll end up as pets, food, friends, or just ignored... I'd say most likely ignored but it's anybody's guess.

Date: 2006-05-16 07:22 pm (UTC)
From: [identity profile] sid-icarus.livejournal.com

Are you pretty confident that people will merge with machines?

I'm confident that I will.


Yet another thing to discuss with a potential life partner. "I don't want to have kids. My credit score is low. I'm going to eventually be 50% post-consumer metal. I plan to live to 225 years old and need someone who can keep up with me."

Date: 2006-05-16 03:18 pm (UTC)
From: [identity profile] arvindn.livejournal.com
"He projects that machines will surpass human intelligence by approximately 2029 (this is referring to "general" intelligence, as they've already passed human intelligence in some limited ways such as ability to play chess), and we will enter the singularity era around 2045."

Earlier he had 2030 and 2040. Now he's increased the gap. That sounds so dumb to me. I can't think why the singularity wouldn't happen within days if not minutes of the emergence of machines surpassing human general intelligence. I believe that's Yudkowsky's opinion as well. But then again, I haven't read any of Kurzweil's books, just some essays on the web, so I'll shut up.

Date: 2006-05-16 04:26 pm (UTC)
From: [identity profile] spoonless.livejournal.com

I can't think why the singularity wouldn't happen within days if not minutes of the emergence of machines surpassing human general intelligence. I believe that's Yudkowsky's opinion as well.

You might be right about Yudkowsky. I asked him at the conference if he thought Kurzweil's predictions might be a bit optimistic, and he said "actually, if anything I'm worried they're pessimistic... I'm worried it will happen a lot sooner and we won't be ready for it." The title of Kurzweil's talk was "a hard or a soft takeoff?" and he was advocating a soft takeoff.

I think I'd have to agree with Kurzweil here, but like you I haven't read much, so it's mostly a knee-jerk opinion. The reason I think it might take a while after they surpass human intelligence is that humans aren't smart enough to understand how we think... in other words, even if we could self-improve our own code, we wouldn't do a good job, because we just don't know enough about how we think. I'd expect it would be the same for the machines for a while, although they would slowly improve over time. Once they really figure out how their own brains work, things could take off very quickly. All this is of course wild speculation on my part, but since I've been doing plenty of that lately anyway... what's a bit more :)
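One crude way to put the hard/soft distinction into an equation (my own toy model, not anything from Kurzweil or Yudkowsky): suppose intelligence I improves at a rate that grows with the current level, dI/dt = kI^p. Solving:

    \frac{dI}{dt} = kI^{p}
    \;\Longrightarrow\;
    I(t) =
    \begin{cases}
    I_0\,e^{kt}, & p = 1 \quad \text{(soft takeoff: plain exponential)}\\[4pt]
    I_0\left[1-(p-1)\,k\,I_0^{\,p-1}\,t\right]^{-1/(p-1)}, & p > 1
    \quad \text{(hard takeoff: diverges at } t^{*} = \tfrac{I_0^{\,1-p}}{(p-1)k}\text{)}
    \end{cases}

If self-improvement merely compounds, you get a fast but finite exponential; if it compounds super-linearly, the model runs away in finite time. Which exponent actually describes self-improving AI is exactly the thing nobody knows.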

Hard or Soft...

Date: 2006-05-18 12:13 am (UTC)
From: (Anonymous)
Yudkowsky, Kurzweil, and others have posited that the Machine intelligences will be able to begin re-coding themselves because part of the abilities that they will need to reach human level IQ in the first place (IQ is really the wrong word for it, but please, just go with it for now) is the ability to re-code themselves (Self-examination)...

In other words: the Machine intelligences will begin with this ability, and with it, surpass our intelligence.

Self Examination

Date: 2006-05-18 04:33 am (UTC)
From: [identity profile] spoonless.livejournal.com

because part of the abilities that they will need to reach human level IQ in the first place (IQ is really the wrong word for it, but please, just go with it for now) is the ability to re-code themselves (Self-examination)...

But humans have self-examination, and we don't have much of a clue how our own brains work... at least, not enough to design a better version of them. I tend to think that if we get human-level AI working, it will have to be mostly by trial and error, with only a little bit of design put in that we fully understand.

Was it More?

Date: 2006-05-18 12:09 am (UTC)
From: (Anonymous)
As in Max, who had the theory about "MEST" (Mass, Energy, Space, Time) to use as a model for information systems...

In the argument that has been going on here about the "Singularity", my understanding of the term comes from the combination of MEST and information densities. The combination of the two creates a singularity of sorts out of converging information upon the MEST substrate.

i.e., you get so much information converging upon a single point in place (matter/space/time) that it creates its own form of (e)nergy, which in turn creates the singularity that creates the event horizon beyond which we cannot see...

So, it is not necessarily the technology itself that causes the singularity, but the convergence of so many different things upon a small area.

This is why they have created such things as the center for Interdisciplinary Studies at Stanford: they realized that they need much more than just physics, computer science, and such to begin to control the events swirling around the MEST that is creating the singularity. You also need to understand language and communication (two very different things), history, sociology, anthropology, biology, genetics, and so on...

Re: Was it More?

Date: 2006-05-18 09:44 am (UTC)
From: [identity profile] spoonless.livejournal.com
It was John Smart. I must confess that I have little clue what most of the acronyms Smart throws around mean to him. He seems to use a lot of buzzwords, whether made up by him or by other people I'm not sure. It could just be that I'm not used to his terminology, but it sounds a lot like corporate-speak to me... and it doesn't inspire much confidence in me when people talk like that. He also put up a slide referencing a book by James Gardner called Biocosm, which I'm pretty certain is total crap... which also kind of turns me away.

That's not to say that what he's saying isn't right. It could be. It's just that, at least right now, I don't "get" what he is saying at all.

I found this paragraph on his website just now through searching:

It appears that MEST compression creates intelligence as it "runs down pre-existing potential hills" in the universe, ending in trillions of local black holes, each containing the highest local evolutionary developmental intelligence possible given their time-to-formation and local environmental constraints.


There are several things that baffle me about this, which makes the whole thing sound a bit like gobbledygook to me. How can black holes be "intelligent"? There's a lot more to intelligence than compressing matter into a tiny amount of space. While it's now fairly widely accepted that black holes don't destroy the information that goes into them, they at least mangle it so badly there's no way to recover it, kind of like tossing a photo into a fire. So I don't see how you could store anything useful in them, let alone intelligence. I'm also not sure what a "substrate" means in this context. It sounds almost as vague as "MEST" does here.

And I think it's highly unlikely that we would be able to access Planck-scale physics within the next century... even with an aggressive exponential decrease in size every decade. Actually, I should do this calculation... I have a rough feeling for how fast our advances are here, but I don't have the data in front of me. I'll make a post about this if I find the actual numbers. Off the top of my head, I'd say it would have to be at least 200 years of exponential growth at the current rate, unless we hit the singularity (in the super-intelligent-machines sense) first, in which case all bets are off.
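Here's the back-of-the-envelope version; the starting feature size and the halving time are my own assumptions, not numbers from the talk:

    import math

    # Rough time to shrink from 2006-era chip features down to the Planck
    # length, assuming feature size halves at a fixed rate. Both numbers
    # below are assumptions for illustration.
    feature_size = 65e-9        # meters, roughly a 2006 process node
    planck_length = 1.6e-35     # meters
    years_per_halving = 2.0     # Moore's-law-style guess

    halvings = math.log2(feature_size / planck_length)
    years = halvings * years_per_halving
    print(f"{halvings:.0f} halvings -> about {years:.0f} years")
    # ~92 halvings -> about 183 years; a 3-year halving time gives ~275.
    # Either way, the "at least 200 years" guess is the right ballpark.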
