Singularity Summit 2012

This year the annual transhumanist/futurist/AI/lesswrong conference was expanded to two full days.  In terms of logistics, execution, and turnout this was probably the best iteration of the summit I’ve been to, but the price has increased roughly in proportion.  The Masonic Center in Nob Hill has a single main auditorium, but it is a most excellent room and location.

I missed some of the early morning talks, but here are some highlights in no particular order:

Robin Hanson

Hanson’s talk gave a rather detailed exposition of his ‘em’ (upload) futurist scenario.  I’ve only ever read bits and pieces of his em vision from his blog, Overcoming Bias, so this was new at least in the details.  He covered the implications of subjective time dilation, an interesting subject I have previously written about several times.

One of the more entertaining parts was a set of slides sketching possible mind-branching patterns for various types of ems.

He used 1,000x and 1,000,000x subjective temporal speedups and compared latency considerations to derive likely physical community sizes (bounded by real-time communication constraints), much like in my articles above.  He also estimated relative body sizes for humanoid robots, the idea being that faster-thinking minds will want to inhabit smaller bodies (so the world appears to move at the same relative speed).  That particular point seems dubious – what’s the point of the physical world for an em?
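To make the latency argument concrete, here is a rough back-of-the-envelope sketch (my numbers and tolerance assumption, not Hanson’s): a conversation feels real-time only if the round-trip delay stays under some tolerable amount of subjective lag, which directly bounds how far apart fast ems can physically live.

```python
# Back-of-the-envelope bound on em community size from conversational latency.
# Assumption (mine, not Hanson's): a chat feels "real-time" if round-trip delay
# stays under ~100 ms of *subjective* time; signals travel at most at light speed.

SPEED_OF_LIGHT_M_PER_S = 3.0e8      # upper bound; real networks are slower
TOLERABLE_SUBJECTIVE_LAG_S = 0.1    # ~100 ms subjective round-trip budget

def max_community_radius_m(speedup: float) -> float:
    """Largest physical separation that still feels real-time at a given speedup."""
    real_time_budget_s = TOLERABLE_SUBJECTIVE_LAG_S / speedup  # wall-clock budget
    one_way_budget_s = real_time_budget_s / 2.0                # round trip -> one way
    return SPEED_OF_LIGHT_M_PER_S * one_way_budget_s

for speedup in (1e3, 1e6):
    r = max_community_radius_m(speedup)
    print(f"{speedup:>11,.0f}x speedup -> community radius ~ {r:,.0f} m")

# 1,000x     -> ~15,000 m  (roughly a single metro area)
# 1,000,000x -> ~15 m      (a single building)
```

The exact numbers depend entirely on the assumed lag tolerance, but the scaling is the point: every factor of a thousand in subjective speedup shrinks the real-time social radius by the same factor.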

Steven Pinker

This talk was basically a summary of his book “The Better Angels of Our Nature” (or at least so I am guessing; I just looked up the book for the first time).  The main point: we are becoming less violent over time.  The trend is strong and fairly smooth.  The only big blips are the two world wars, and in the grand scheme they aren’t that big.  The potential explanations are just as fascinating as the data itself – namely, that it is all driven by technological change.  The main points of this talk fit in well with the systems theory mindset (the world is getting better in many ways simultaneously).

Jaan Tallinn

Jaan’s talk was illustrated like a cartoon, which I found distracting at first.  It got much more interesting when it dove into anthropic reasoning and simulationism, something I’ve been meaning to write more about (again).

Ray Kurzweil

Ray gave almost the same talk he always gives: the exponential talk with charts.  He had what seemed to be a huge number of interesting, well-illustrated, and information-rich slides, and then somehow formed a talk from a random sampling of those slides biased against interestingness.  Some of the slides were about his forthcoming book, “How to Create a Mind”, and perhaps he didn’t want to leak too many details.  The talk was perhaps 80% exponential and 20% brain stuff related to his book.

The brain related part of his talk immediately reminded me of Jeff Hawkins and On Intelligence.  In fact, one or two of Kurzweil’s sentences describing the neocortex as a thin sheet about the size of a tablecloth pattern-matched as an exact repeat of something Hawkins either wrote or said in a talk somewhere.

The one novel slide that stood out was about some new research identifying a very regular grid pattern as an underlying connective structure in cortical wiring.  Infuriatingly, his slide didn’t mention the actual article name, but after a little searching I’m betting he is referring to “The Geometric Structure of the Brain Fiber Pathways”.  Interestingly, this research is already being contested.

Peter Norvig

Norvig’s talk was perhaps the most interesting, because he basically gave a rather detailed overview of recent progress towards AGI, focusing in particular on some mainstream AI research at Google that he sees as likely to be relevant in the future.  If you have already been following this literature (visual cortex models, deep belief nets, convolutional nets) it wasn’t entirely new, but it was enlightening to see how Google can brute-force some things and make progress in ways that are simply not possible for most researchers.
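For readers who haven’t followed that literature, here is a minimal sketch (my own toy code, not anything from the talk) of the core operation behind the convolutional nets Norvig mentioned: a small filter is slid across an image, a nonlinearity is applied, and the responses are pooled into a smaller, more abstract feature map – a crude caricature of early visual cortex.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a small filter over an image ('valid' convolution, no padding)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Downsample by taking the max over non-overlapping size x size blocks."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

# Toy example: a hand-built vertical-edge detector applied to a random "image".
rng = np.random.default_rng(0)
image = rng.random((8, 8))
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])
features = np.maximum(conv2d_valid(image, edge_filter), 0)  # ReLU-style nonlinearity
print(max_pool(features).shape)  # (3, 3): a smaller, more abstract feature map
```

The “brute force” part is simply stacking many such layers, learning the filters instead of hand-coding them, and throwing far more data and machines at the training than most labs can afford.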

He also referenced his 2007 talk, where he outlined about six research areas important for AGI; of those, he no longer views one (probabilistic logic) as important, and he has seen steady progress in all the rest.  I didn’t find much of anything to disagree with.

On that note, I had already come to the conclusion that logic is actually part of the problem (at least for natural language understanding).  Natural languages are ambiguous, which causes headaches, so it seems sensible that NL should be parsed into something like first-order logic (or whatever new logic flavor floats your boat).  The problem is that the ambiguity of NL is entirely entangled with its statistical expressive power.  Moreover, for systems that employ the kind of hierarchical statistical approximate generative modeling that appears to be key to intelligence (human or AI), natural language ambiguity is just not a problem; it’s a non-issue.  So if your AI design is built on some sort of regular formal logic because that is all it can handle, it is probably doomed from the start.
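As a toy illustration of what I mean (my example, and purely illustrative numbers), take the classic attachment ambiguity in “I saw the man with the telescope.”  A pipeline that must commit to a single first-order-logic form has to pick one reading up front, while a probabilistic model just keeps a distribution over readings and lets context shift it.

```python
# Toy illustration (mine): ambiguity as a distribution over readings rather
# than a single committed logical form.  All probabilities are made up.

# Two readings of "I saw the man with the telescope":
#   "instrument"       - the telescope is the instrument of seeing
#   "attached_to_man"  - the man has the telescope
PRIOR = {"instrument": 0.6, "attached_to_man": 0.4}  # assumed prior over readings

# Assumed likelihoods of observing a context clue under each reading.
LIKELIHOOD = {
    "speaker mentions stargazing": {"instrument": 0.8, "attached_to_man": 0.2},
    "the man is a street astronomer": {"instrument": 0.3, "attached_to_man": 0.7},
}

def posterior(prior, clue):
    """Bayesian update of the reading distribution given one context clue."""
    unnorm = {r: prior[r] * LIKELIHOOD[clue][r] for r in prior}
    z = sum(unnorm.values())
    return {r: p / z for r, p in unnorm.items()}

print(posterior(PRIOR, "speaker mentions stargazing"))
print(posterior(PRIOR, "the man is a street astronomer"))
# A logic-first pipeline would have had to commit to one parse before any of
# this context arrived; the probabilistic view never has to.
```

The ambiguity never needs to be “resolved” into a brittle symbolic form; it just gets carried along and sharpened by evidence, which is exactly the behavior a hierarchical generative model gives you for free.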
