Who’s Afraid of Smart Machines?

On top of the discussion about robo-ethicists wanting to revisit Asimov’s Three Laws of Robotics, I saw several pointers to a New York Times article called Scientists Worry Machines May Outsmart Man. Will we have thinking machines that are self-aware and smarter than humans? One of the famous papers on this is The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge. Vinge postulates that by the year 2030 we will see “the imminent creation by technology of entities with greater than human intelligence.” OK, now to me the term “post-human era” is a little scary. But personally I don’t see it happening.

Now the topic of creating machines that really think, that are creative and inventive, and that are self-aware has been around forever. In fact I’m sure it long pre-dates computers. But I do know that it was a topic of discussion when I was an undergraduate student over 35 years ago. People were predicting this “singularity” within 30 years even then. That prediction seems to have been wrong though. At first people thought we’d come up with machines that think the way people do. That turned out to be difficult, at least in part because we still don’t understand how people think. I’m not sure we’ll figure that out in the next 20+ years either. As fast as technology moves, we don’t seem to be figuring out people that quickly.

And then there is the question: what is thinking? What is creativity? What is self-awareness? Yes, friends, we are delving into the arena of philosophy! Ha, you thought you were done with that, or that you’d never need it, didn’t you? I think you are wrong there, at least if you want to talk about smart machines. We have to have some way of knowing when we get there even if we don’t know how we are going to get there. Or if we are going to get there.

Interestingly enough, I have known several computer scientists who started in philosophy. Philosophy majors often make great programmers, you know. Some claim it is because they are good logical thinkers. My theory is that it is because they have an easier time grasping abstractions. Dealing with abstraction is key to modern computer science. I wish I’d paid more attention in philosophy classes, but the older I get and the more I get into computer science, the more grateful I am for the courses I did have.

Anyway. I am skeptical of the notion that people who can’t understand how they think can create computers that think better than humans. I do not believe in the “and here a miracle happens” school of science and engineering either. We have computers that do special-purpose things, play chess for example, and beat humans. Are those programs thinking? Are they creative? I don’t think so. I think they are good at dealing with rules and calculating lots and lots of steps. Humans get good results with less work though. One can teach a child to play chess in half an hour. They will (or can) learn on their own how to get better and better. While we have been programming heuristics into software for decades, we have not made the sort of progress that was talked about 30-40 years ago. I think it unlikely that will change soon. I’m not sure if that makes me an optimist or a pessimist. Your call.

Perhaps there is something not easily reachable about what thinking and creativity are all about. Or perhaps it is something simple, just out of view, that will be discovered any day now. I think it may be the former. I also think that philosophy is going to be more helpful than many expect in getting us to true thinking machines, or at least in proving whether they are possible at all.

So what do you think? What do your students think? Is this a topic that comes up in computer science courses? Does it come up in faculty lounges or places where students congregate? I think perhaps it should.