Intelligent Agent Nirvana

At the dinner last week, we got into an interesting discussion about intelligent agents.

The discussion was about whether intelligent agents will be able to automatically gather the information that we want to see, or whether human intervention is necessary.

I should probably note here that I’m simplifying the discussion, to keep things short and to make me look better.

My assertion is that intelligent agents not only aren’t there yet, but are unlikely to be there (wherever “there” may be) in the foreseeable future.  Now, in making that pronouncement, I am aware that the track record of people saying that things are impossible – such as the crazy notion of “heavier than air” flight – isn’t exactly stellar.

So, why don’t I think that intelligent agents are going to work – at least for me? Well, a few reasons.

The first is my skepticism around anything that requires AI. Way back in the early 80s, there was lots of press around AI systems, with a ton of money being spent both by DARPA and the Japanese, and no real results. I think that’s a good demonstration that “AI is hard”, and I don’t expect any breakthroughs in that area.

Another challenge is that I really don’t know what I want to know. I continue to find offbeat information in blogs that I didn’t know that I wanted to know, so I don’t see how I can expect an agent to filter in that manner.

Finally – and related to the last point – I think that coming up with a categorization system that works well is likely to be very, very difficult. If you’ve ever fought with Google trying to find the one page in 100,000 on a specific topic that you looked at a few weeks ago, then you understand what I’m talking about.

I expect that a human filter will remain a necessity to get good information for quite some time.

So, what do you think? Will agents make human editors obsolete?

FYI, here’s a brief history of AI, and a Wikipedia article

Comments (12)

  1. haacked says:

    At some point I think an AI will reach "there" in gathering information, but we’ll have to change our approach a bit. A lot of AI filters use a form of collaborative filtering, which works pretty well to a degree: if people similar to you like something, you might like it too. The problem is that it’s very hard to "profile" an individual. Just as you find offbeat information in blogs (which chances are feature people similar to you, or you wouldn’t find it), collaborative filters don’t do so well in that regard. I think it will require advances in psychology, producing better profiles of how people make choices and form preferences, as well as adding some more randomness into filtering algorithms. "Hi, I’m your AI filter today. I have no reason to believe you’d like this, but I’ll go out on a limb and give it a shot." Heck, I don’t think human editors do all that great a job either.
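The "people similar to you like it, you might like it" idea haacked describes can be sketched in a few lines of user-based collaborative filtering. The users, blogs, and ratings below are all invented for illustration:

```python
from math import sqrt

# Toy ratings: user -> {item: score}. All names and scores are made up.
ratings = {
    "alice": {"blogA": 5, "blogB": 3, "blogC": 4},
    "bob":   {"blogA": 4, "blogB": 3, "blogD": 5},
    "carol": {"blogB": 2, "blogC": 5, "blogD": 1},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm_u = sqrt(sum(ratings[u][i] ** 2 for i in common))
    norm_v = sqrt(sum(ratings[v][i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Score unseen items, weighted by how similar each neighbour is."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        weight = similarity(user, other)
        for item, score in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + weight * score
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['blogD']
```

The weakness haacked points out is visible here: the agent can only surface items that similar users already rated, so genuinely offbeat finds need some injected randomness.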

  2. Tim says:

    I am also a skeptic on the subject. I don’t think humans have enough understanding of what intelligence is to make a useful artificial one. And I too find myself wandering offbeat paths on the Web, just because something caught my attention. How would you tell an AI agent to do something like that?

    I do think that an AI agent could be made to find that one page on a specific topic you read a few weeks ago (Where was that page on Trebuchets??) Or perhaps to collect and summarize thousands of pages on a subject, and provide references? Or provide suggestions on weeding out irrelevant hits (a search on the subject of regular expression parsing has hits for Perl, PHP, grep, & C#. Can I provide details for any one of these?)

    The funny thing about categories is that everyone has a different idea of what fits where. I hate going to a grocery store I have never been to before looking for a particular item. Parmesan cheese, which I associate with spaghetti, in my mind should be in the same place as spaghetti and pasta. Sometimes it is, other times not.

    Or maybe I am just strange.

  3. Wesner Moise says:

    I disagree with your assertion that AI won’t happen…

    1) There were successful AI projects in the 80’s, but what often happens with AI… is that it ceases to be called AI when the problem is solved.

    2) AI often tends to be memory- and CPU-intensive, and that barrier has been coming down at an exponential rate since the 80’s. Maintaining an ontology in memory just 10 years ago was prohibitive because machines only had 8MB of memory, much of which was occupied by the OS; an ontology that needed that much RAM by itself couldn’t be practical on such a machine. The ease of developing AI applications has dramatically improved over the past decade with corpora and resources from the LDC and WordNet. There’s also the advent of the Web, which by itself is a huge linguistic resource.

    One problem is that there is a lot of plumbing necessary, and Microsoft and the other platform companies have yet to come through with a set of APIs.

    Just imagine how much easier it would be, for example, if we actually had an ontology built into the OS. Ontologies are hard to build; plus there is the issue of how to build one across multiple human languages. Other missing pieces include a faithful natural language system.

    These things should be built into an OS, just like a rich text system, document indexing, or HTML-browsing controls. Few companies would embark on building such systems, but there are numerous applications that could be built with such technologies.

    Now, Microsoft is coming up with a System.NaturalLanguage API in Longhorn, but it will be limited in function, providing only spell checking and tagging support, with no parsing capability. As a result, developers will still have to build their own natural language systems, and any advances that occur in NLP in the next few years will have to happen without Microsoft.

  4. Wesner Moise says:

    I also believe that, although AI appears hard right now, the solutions will actually turn out to be simple.

    I think that it’s simply a matter of introducing human concepts like words, senses, relations between words (hypernymy, meronymy, synonymy, etc) into the OS.

    The concept of an ontology may seem weird, but really all our basic types (ints, strings and so on) form an ontology. Computers don’t really understand numbers; we standardized a representation for numbers and created a set of instructions that manipulate sets of bits to behave as numbers do. All our instructions, such as addition and negation, are really relations between numbers or pairs of numbers.
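Wesner’s point that relations like hypernymy can be ordinary data structures is easy to illustrate with a toy is-a dictionary. The vocabulary below is invented for illustration, not WordNet’s actual data:

```python
# A tiny ontology: each word maps to its hypernym (is-a parent).
# Made-up entries; a real resource like WordNet holds ~150k of them.
hypernym = {
    "spaniel": "dog",
    "dog": "canine",
    "canine": "mammal",
    "mammal": "animal",
}

def ancestors(word):
    """Walk the is-a chain from a word up to the root."""
    chain = []
    while word in hypernym:
        word = hypernym[word]
        chain.append(word)
    return chain

def is_a(word, category):
    """True if `category` appears anywhere up the hypernym chain."""
    return category in ancestors(word)

print(ancestors("spaniel"))       # ['dog', 'canine', 'mammal', 'animal']
print(is_a("spaniel", "animal"))  # True
```

Meronymy (part-of) and synonymy would just be further tables of the same shape, which is the sense in which an ontology is no stranger than the type system an OS already ships.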

    Also, adding an accurate natural language parser and text generator would help too.

    In Longhorn, the OS is adding new concepts like Contacts; it’s system support for such standard concepts that makes difficult technology widely available.

    Before gradients appeared in GDI and GDI+, very few applications employed gradients despite the fact that they are not hard to fake. If Windows didn’t support playing movies, no application would show video; similarly, for ink.

    In Longhorn, we will see lots of animations and 3D effects, where currently they are non-existent in today’s applications. It’s not because these features are hard; it’s because there is no platform support yet.

    Before humans conceived of numbers, it was hard to perform transactions. Now, how hard is it to write natural language applications when WinFX doesn’t even have the concept?

  5. andrew says:

    i think we should worry about two things before trying to build powerful agents

    1. soft computing. we need to get past "if" statements & von neumann logic. my wife manages to use very little logic and seems to get by just fine 🙂 the 60’s style game/decision tree stuff just doesn’t work on open-ended problems, and we have neither the compilers nor the massively (in the millions) multithreaded hardware needed to really do soft stuff.

    2. in nature we had life before intelligence. alife must exist before ai.

  6. Hi Scott,

    Airplanes are designed by people, built by people, and flown by people.

    My point? All generative knowledge comes by people, is shared by people, and learned by people.

    The question is not if a pile of bits knows all of your tastes, the question is can the bits find out who does know your tastes and get the information from them.

    The trick is an aggregator/browser that can match your "pattern" to those of others and use these as a generative source re your tastes. The network really is the computer.

    Talk to Lilly at MSR there in Redmond, Dare from the SQL team, and Scoble. Throw some ideas at the graph theory geeks – buy Neil Roseman from Amazon an expensive lunch. Come up with a manifesto, send it to Bill, profit.

  7. Drew says:

    Could you simplify the discussion a little more, please? What do you mean by "intelligent agents"? Do spam filters qualify? Does Google News? Maybe the definition of "intelligent agent" isn’t what I thought it was. Then again, "intelligence" itself seems to be redefined whenever we discover some other species that qualifies (use tools? language? &c.).

    Maybe we need a different, less personal, or less human-describing term for intelligence. Or intelligent agents. Or whatever. I think the root of the problem is twofold:

    1. Humans have too much ego to ascribe intelligence to anything except themselves.

    2. Hey – why isn’t this just like living in Star Trek? Star Trek is cool. I want to be a Vulcan. If my computer isn’t sentient it’s not intelligent.

  8. CesarGon says:

    Perhaps you would like to check my post on agents in software engineering:!1pGomXRO1m3e0JP8HmKF05Ag!194.entry



  9. Drew says:

    On further consideration I believe there is also factor #3 (implied by Christopher):

    "Hi Scott"

    Meaning maybe we’ve met intelligent agents but failed to recognize them. Or maybe they’ve failed to recognize *us*.

    I think Christopher is a bot, which might explain the name mixup. Ha! You are defeated by Turing yet again, bot. (insert sarcastic grin here, but please – not an emoticon)

  10. Oli D. says:


    I think that when you are surfing more or less randomly on the internet for entertainment, you wouldn’t want a bot/agent to help you.

    As I remember, quite some time ago I saw a good application for intelligent agents. It was a service called Firefly, where you would enter your tastes in music, and it gave you some hints about which bands/CDs you would also like. And it worked a lot better than the hints from Amazon.

    And no, I don’t think we desperately need support from the OS. Intelligent agents don’t have to be able to understand natural language. Sure, it would help, but it is only feasible for English, and perhaps other European languages – what about one of the Chinese languages? What’s missing is a "killer" application which shows the general value of the concept to the masses.


    Oli D.

  11. Darren Oakey says:

    Ha – I disagree,

    pay me to develop for a year and I’ll prove it 🙂

    I believe intelligent agents are very, very close – close enough that I think I could develop one that you would consider fulfilled your above specifications in under a year. In fact, I believe this strongly enough that I’m unwilling to say how 🙂

    We interact with "intelligent" things every day: computer game algorithms, voice recognition, Google, MS search categorisation, and all sorts of other things. There are reasonable algorithms for classification, summarisation and learning – look at the spam tools, etc.
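The spam tools Darren points to mostly rest on naive Bayes classification, which really is a "reasonable algorithm" in a few dozen lines. The training phrases below are made up for illustration:

```python
from collections import Counter
from math import log

# Toy training data, invented for illustration.
spam_docs = ["buy cheap pills now", "cheap loans click now"]
ham_docs = ["meeting agenda for tomorrow", "lunch tomorrow with the team"]

def train(docs):
    """Count word occurrences across a class's documents."""
    counts = Counter(word for doc in docs for word in doc.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def score(text, counts, total):
    """Log-likelihood of the text under one class, with add-one smoothing."""
    return sum(
        log((counts[w] + 1) / (total + len(vocab)))
        for w in text.split()
    )

def classify(text):
    spam_score = score(text, spam_counts, spam_total)
    ham_score = score(text, ham_counts, ham_total)
    return "spam" if spam_score > ham_score else "ham"

print(classify("cheap pills"))       # spam
print(classify("agenda for lunch"))  # ham
```

The same machinery – count features per category, score new items by likelihood – is what makes classification one of the capabilities Darren says just needs tying together.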

    However, there are still three elements missing for intelligent agent to be what you want:

    a) a good way of tying all the capabilities out there together into a cohesive unit

    b) a good way of interacting with it

    c) an easy way of adding to its capabilities (ideally automatically)

    I have an answer to all three of those, and believe that it would be quite straightforward to build an "intelligent" agent that was immediately useful, and became moreso as time went on.

  12. andychu says: google


