A.I. parity with the West in 2020

Someone just sent me a link to an editorial by Ken Church, in the journal Natural Language Engineering (who knew that journal was still going? I’d have thought open access would’ve killed it). The abstract of Church’s column says of China,

There is a bold government plan for AI with specific milestones for parity with the West in 2020, major breakthroughs by 2025 and the envy of the world by 2030.

Something about that plan sounded familiar. Then I remembered the Japanese Fifth Generation project. Here’s Ehud Shapiro, writing a trip report for the ACM 35 years ago:

As part of Japan’s effort to become a leader in the computer industry, the Institute for New Generation Computer Technology has launched a revolutionary ten-year plan for the development of large computer systems which will be applicable to knowledge information processing systems. These Fifth Generation computers will be built around the concepts of logic programming. In order to refute the accusation that Japan exploits knowledge from abroad without contributing any of its own, this project will stimulate original research and will make its results available to the international research community.

My Ph.D. thesis, circa 1989, was partly on logic programming, as was my first book in 1992 (this post isn’t by Andrew, just in case you hadn’t noticed). Unfortunately, by the time my book came out, the field was pretty much dead, not that it had ever really been alive in the United States. As an example of how poorly it was regarded in the U.S., my first grant proposal to the U.S. National Science Foundation, circa 1990, was rejected with a review that literally said it was “too European.”

7 Comments

  1. Adam says:

    Bob,

    What killed logic programming? I’ve never seen anything on the history of this, and given that it died around the time I was born, I wasn’t there first-hand.

    • What “killed” it was the hype, in the sense that it was riding too high, with too much promise. That, and the fact that it takes a certain kind of thought process to wrap your head around it, which appeals to theorists but not so much to the everyday programmer who is the bread and butter of the industry.

      Of course, it wasn’t really killed. For example, Erlang is a modern language used in high-reliability contexts, and it’s based on Prolog.

      • Emmanuel Charpentier says:

        More precisely, what “killed” the field (and, more generally, symbolic logic-based approaches to difficult problems) was the combination of:

        * Excessive hype, leading to disappointment.

        * Smarter hype from competitors, whose stated goals were not as assessable as logic programming’s, thus deflecting the disappointment that the modesty (to stay polite) of the achievements would have caused.

        * The hyper-competitive environment set up by the “management” of research: the “desire for efficiency”, a dubious definition of said “efficiency”, and the short-term setting of goals led to killing anything that did not rapidly produce “deliverables”.

        One can note that the current hype around neural-network-based approaches is reaching the same critical level. Since this hype is more cautious than it was for the logic-based approach, we probably won’t see a “nuclear winter” for them, but I still think there will be some pullback.

      • “…it takes a certain kind of thought process to wrap your head around it…”

        I remember in the mid-’80s, someone hearing me say “…programming in it is like turning off 90% of your brain and just thinking with a tiny piece right at the back…” and guessing I was talking about Prolog.

      • Curious says:

        It does appear to be used in some areas:

        https://www.erlang.org/about

    • What killed Prolog is that it’s a terribly inefficient and hard-to-program language. It could do a few things based on pattern matching and recursion very elegantly, but most algorithms were a huge pain (see O’Keefe’s book The Craft of Prolog for examples). Now languages like ML, Haskell, and even C++ templates use similar pattern-matching dispatch techniques, so all that practice with Prolog finally paid off when we built Stan in 2011—and the new parameter pack functionality in C++11 leads to entirely Prolog-like programs in C++, which always makes me smile.
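      To make the parameter-pack point concrete, here is a minimal sketch (not from the comment itself) of the Prolog-like style: a base-case “clause” for the empty pack and a recursive clause that splits the pack into head and tail, much like matching [] and [H|T] in Prolog.

      ```cpp
      #include <cstddef>
      #include <iostream>

      // Base clause: len([]) = 0.
      constexpr std::size_t len() { return 0; }

      // Recursive clause: len([H|T]) = 1 + len(T).
      // Overload resolution plays the role of Prolog's clause selection.
      template <typename Head, typename... Tail>
      constexpr std::size_t len(Head, Tail... tail) {
        return 1 + len(tail...);
      }

      int main() {
        std::cout << len(1, 2.5, 'c') << "\n";  // counts three arguments
      }
      ```

      The “match on head and tail, recurse on the tail” structure is exactly the shape of a Prolog list predicate, just resolved at compile time.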

      The real problem was that it billed itself as a logic programming language, whereas in fact it was depth-first search with pattern-matching based branching. That is a very hard paradigm in which to code efficient algorithms. No arrays! The best you could hope for was hashing for roughly constant-time access, but with very high overhead. So other than toy examples, pretty much everything is better done in other languages, even programming theorem provers.

      I don’t think hype kills things. Hype may bring more people into a field than it can support, like say web development circa 2000, many practitioners of which were unemployed by 2002 after the bubble burst. But it hardly killed web development. Prolog never had the bandwagon problem—most people couldn’t make heads or tails of it or its purpose.

      I was using (variants of) Prolog to express natural language grammars and to solve constraint problems (like the Zebra Puzzle [the Swede lives two houses from the Norwegian and doesn’t smoke Pall Malls, …, Who owns the zebra?]). It was a decent application that resulted in my first open-source project, the Attribute Logic Engine. I built it because a lot of people didn’t believe the theory in my book, which was largely about a linear type inference system for logic programming languages with inheritance (and proper extensional circularity syntax, which was the big proof that nobody cared about). It let me play with a lot of fun domain theory mathematics, which is a very elegant model of computation and information using topological methods.

  2. Jake says:

    The bit that struck me from the quote is “the accusation that Japan exploits knowledge from abroad without contributing any of its own”, which is a good reminder that the kind of just-so reckons that get dropped about China (up until the last couple years or so, happily I haven’t heard them recently) and China’s ability to “innovate” are indistinguishable from the ones people were saying about Japan back when.
