The PRL project aims to naturally express the reasoning of computer scientists and mathematicians when they are justifying programs or claims. It represents mathematical knowledge and makes it accessible in problem solving. These goals are similar to goals in AI, and our project has strong ties to AI. We want to examine these ties in this section. To describe them we first present a slice of AI using our terminology.

We see AI as having a definite birth event and date: the Dartmouth Conference in the summer of 1956, organized by John McCarthy and Marvin Minsky. From the beginning it dealt with reasoning, in both an inspirational and a controversial way. Because of its sweeping and starry vision, linking machine intelligence to human intelligence and speaking of endowing machines with reasoning abilities, some people consider AI the "soul" of computer science. Others have interpreted its claims and goals as hyperbole and prefer to take a more down-to-earth view. We have tried to be very "down-to-earth" while keeping our eyes on the stars for guidance.

Many AI researchers maintain that their field has set the agenda for a large part of Computer Science, in the sense that AI successes seeded separate research areas of CS, some no longer even associated with AI. For example, the interactive functional programming language Lisp, the early program verifiers, and the early natural language processors each gave rise to a research area of its own.

It is clear that our project has benefitted substantially from AI research, and this will continue. There is also no doubt that AI is one branch of Computer Science that has caught the public imagination. It is an area like space exploration where science is wrapped up in science fiction. For AI it is the fiction of robots (R2D2, C3PO), of intelligent machines (HAL, Proteus), of group minds, and of the digital reproduction of human minds (Gibson's Neuromancer). It is the one area of Computer Science that will not run out of problems and dreams as it aspires to move from intelligence to super-intelligence and beyond.

For us, the science fiction is a world in which the progeny of Nuprl are helping computer scientists crack the P = NP problem or helping mathematicians settle the Riemann hypothesis. It is a world in which many new articles in mathematics are checked by systems like Nuprl before they are published and in which people build directly on each other's formal proofs worldwide. It is a world in which significant algorithms and systems are developed by extraction from proofs and in which new algorithms are discovered by interaction with Nuprl. In this world the input to the provers will be in spoken natural languages supplemented by graphics, and the output will come in an even greater variety of forms accessible to people with various handicaps or blessed with special ways of sensing.

One way to describe AI is to see it as an attempt to explain aspects of human intelligence computationally. On the other hand, it can be seen as an attempt to find out what a machine is. By all accounts, problem solving using symbolic representation of knowledge is a central part of AI; this is the part that is relevant to PRL. Some of the topics typically considered part of AI circa 1999 are these:

- Knowledge Representation (KR)
- Natural Language Processing (NLP)
- Search
- Planning
- Expert Systems
- Machine Learning (ML)

In our project we see applications and new approaches to these topics as follows:

Knowledge representation is typically approached by showing how to encode knowledge in the predicate calculus and in "semantic nets"; these nets can be seen as the predicate calculus augmented with pointers. Much of the published work is done in first-order logic (partly because John McCarthy insisted that this was the right calculus both for the task and for automated deduction). In our project we encode mathematical knowledge using a higher-order type theory rather than a first-order theory. We include pointers as part of the library, so knowledge is stored as a semantic net.

Nuprl includes first-order logic, and using the pointers in a theory structure we can express anything a semantic net can.
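As a toy illustration of the facts-plus-pointers view (ordinary Python rather than Nuprl's library structures, with a made-up taxonomy and relation name), a semantic net is a set of predicate-calculus facts whose labelled edges can be followed transitively:

```python
# A minimal sketch, not Nuprl code: a semantic net as facts augmented
# with pointers.  Nodes are named concepts; labelled edges play the
# role of the library's pointers.  All names here are illustrative.

edges = [
    ("group", "is-a", "monoid"),        # toy mathematical taxonomy
    ("monoid", "is-a", "semigroup"),
]

def neighbors(label, src):
    """Follow every edge with the given label out of a node."""
    return [t for (s, l, t) in edges if s == src and l == label]

def ancestors(src):
    """Transitive lookup along "is-a" pointers, as a semantic net supports."""
    parents = neighbors("is-a", src)
    return parents + [a for p in parents for a in ancestors(p)]
```

Each fact is an ordinary first-order atom, yet the pointer structure lets a lookup walk the net rather than search the whole theory.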

The Nuprl type theory is natural for formalizing mathematical domain knowledge. Its libraries *are* mathematical knowledge. It could express theories like classical mechanics or quantum mechanics. Martin-Löf type theory has been used to formalize accounts of grammar and discourse. It can express knowledge about itself. (Citation.)

We have also exploited the built-in computing primitives to express computational information in our knowledge base. This is a distinctive feature of our work, discussed in Formalizing Decidability Theorems about Automata. (Citation.)

The extensive domain knowledge from mathematics that we have coded into the system provides a rich environment for experimenting with expert systems. The ML language already provides many of the support functions needed to build expert systems, as Allen's ML Proof Expert demonstrates.

Ranta has shown that type theory is a good language for expressing the semantics of natural language. See also Glynn Morrill, Type-Logical Grammar (1994), Pereira and Warren, "Parsing as Deduction" (1983), and Pereira, "Semantic Interpretation as Higher-Order Deduction" (1991). We have not yet exploited this in our work. Instead, Amanda Holland-Minkley focuses on using the FUF language generation tool to translate formal proofs into natural language.

All theorem provers rely on search to help find proofs. Many of the important early results in automated deduction, such as unification, resolution, hyper-resolution, and so forth, were concerned with improving search through the Herbrand universe underlying a theory. In the Nuprl prover, search is guided by heuristic procedures called **tactics** built from search combinators called **tacticals**. For more detail, see our notes on Automated Deduction. Also see the work by Caldwell, Gent and Underwood on synthesizing an interesting search algorithm.

The language of tactics and tacticals seems like a very general way to express search, but we have not attempted to apply them outside of theorem proving.
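The flavor of tactics and tacticals can be sketched in a few lines. This is an illustrative model in Python, not the actual ML sources; the string encoding of goals and the two toy tactics are invented for the example:

```python
# A minimal sketch of tactics and tacticals.  A tactic maps a goal to a
# list of subgoals, raising ValueError when it does not apply; tacticals
# are combinators that assemble larger search procedures from smaller ones.

def then_(t1, t2):
    """THEN: run t1, then run t2 on every resulting subgoal."""
    return lambda goal: [sub for g in t1(goal) for sub in t2(g)]

def orelse(t1, t2):
    """ORELSE: try t1; fall back to t2 if t1 fails."""
    def tac(goal):
        try:
            return t1(goal)
        except ValueError:
            return t2(goal)
    return tac

def repeat(t):
    """REPEAT: apply t until it fails, collecting the surviving subgoals."""
    def tac(goal):
        try:
            subgoals = t(goal)
        except ValueError:
            return [goal]
        return [s for g in subgoals for s in tac(g)]
    return tac

# Two toy tactics over string "goals".
def split_conj(goal):
    if "&" not in goal:
        raise ValueError("no conjunction to split")
    left, right = goal.split("&", 1)
    return [left, right]

def trivial(goal):
    if goal != "T":
        raise ValueError("goal is not T")
    return []   # no subgoals remain: the goal is proved
```

For instance, `then_(repeat(split_conj), trivial)` splits a conjunction all the way down and then discharges each trivial conjunct, returning an empty subgoal list when the whole goal is proved.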

The Oyster coding of Nuprl in Prolog has been used extensively by Alan Bundy's group at Edinburgh to study proof planning. They have focused on plans to automatically guide rewriting in the course of proofs by induction. They invented techniques for trying to match the shape of a goal to the shape of the induction hypothesis. They call the method "rippling." Brigitte Pientka and Christoph Kreitz have created tactics to express the rippling strategy in Nuprl.
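The skeleton-preservation idea at the heart of rippling can be illustrated with a small sketch. This is our toy model, not the Edinburgh or Nuprl implementation, and the tuple encoding of annotated terms is invented for the example:

```python
# A minimal sketch of the idea behind rippling.  Wave fronts annotate the
# difference between the induction conclusion and the hypothesis; a rewrite
# step is admissible only if erasing the wave fronts still recovers the
# hypothesis ("skeleton preservation").
#
# Terms: ("var", name) | ("app", f, args) | ("wave", f, extra_args, hole)
# A wave front marks the symbol f and its extra arguments; `hole` is the
# part that belongs to the skeleton.

def skeleton(term):
    """Erase wave-front annotations, keeping only the wave holes."""
    tag = term[0]
    if tag == "var":
        return term
    if tag == "app":
        _, f, args = term
        return ("app", f, [skeleton(a) for a in args])
    _, _f, _extra, hole = term          # tag == "wave"
    return skeleton(hole)

# Goal len(cons(h, t)) with a wave front around cons(h, _); hypothesis len(t).
hyp = ("app", "len", [("var", "t")])
goal = ("app", "len", [("wave", "cons", [("var", "h")], ("var", "t"))])
```

Here `skeleton(goal)` equals `hyp`, so the wave front around `cons` may be rippled outward; a rewrite that destroyed this match would be rejected.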

The analogy tactics and the similarity tactics also use the structure of existing proofs to create plans for new ones.

Stuart Allen's expert interface to the Nuprl prover encodes knowledge of the tactic collection and is able to display to the user a context-sensitive menu of which tactics are applicable at a goal. This interface is a small expert system. It is the basis for further work on expert tactic selection and expert advice about tactics. This kind of work is facilitated by the functional nature of the prover and the organization of the collection of over 500 key tactics.
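The idea behind such a context-sensitive menu can be sketched as follows. This is illustrative Python, not Allen's interface; the catalogue, tactic names, and failure convention are invented for the example:

```python
# A minimal sketch of a context-sensitive tactic menu: given a goal and a
# catalogue of named tactics, list exactly those tactics that apply, i.e.
# those that run on the goal without raising an "inapplicable" error.

def menu(catalogue, goal):
    """Return the names of the catalogue tactics that apply to the goal."""
    applicable = []
    for name, tactic in catalogue:
        try:
            tactic(goal)              # dry run of the tactic on the goal
        except ValueError:
            continue                  # inapplicable: leave it off the menu
        applicable.append(name)
    return applicable

# A toy catalogue over string "goals".
def trivial(goal):
    if goal != "T":
        raise ValueError("goal is not T")
    return []

def split_conj(goal):
    if "&" not in goal:
        raise ValueError("no conjunction to split")
    return goal.split("&", 1)

catalogue = [("Trivial", trivial), ("SplitConj", split_conj)]
```

Because tactics are ordinary functions that fail cleanly, building the menu is just a filter over the catalogue, which is what makes the functional organization of the prover so convenient for this kind of expert interface.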

Most of our work in AI is on the subject of Automated Deduction which is described separately.

- Aitken, William and Robert L. Constable. Reflecting on Nuprl Lessons 1-4, Cornell University, Computer Science Dept., Ithaca, NY, 1992.
- Aitken, William and Robert L. Constable. An Inductive Definition of Evaluation in Nuprl, CS 611 Notes, Cornell University, 1993.
- Allen, Stuart F., Robert L. Constable, Douglas J. Howe, and William Aitken. The Semantics of Reflected Proof, Proceedings of the Fifth Symposium on Logic in Computer Science, IEEE, June 1990, pages 95-105.
- U. Berger and H. Schwichtenberg, eds. Formalizing Decidability Theorems about Automata, Series F: Computer and Systems Sciences, NATO Advanced Study Institute, International Summer School held in Marktoberdorf, Germany, July 29-August 6, 1997, Springer, 1998.
- need Davis thesis
- Glynn Morrill, Type-Logical Grammar, 1994
- Pereira and Warren, "Parsing as Deduction," 1983.
- Pereira, "Semantic Interpretation as Higher-Order Deduction," 1991.
- Ranta book
- Wang, H., Toward Mechanical Mathematics, IBM Journal of Research and Development, 1960, pages 2-22.