Meeting 26: Learning Logical Knowledge

Reading: AIMA 19.1-19.5.2 (you can skim 19.3 and 19.4, and skip 19.5.3 onward)

Learning means creating or updating a knowledge base based on examples, so in principle it applies to any knowledge-base representation. In practice, however, many machine learning methods do not apply to logical representations such as first-order logic (FOL). Most ML courses do not consider hypothesis spaces described in FOL at all, instead preferring empirical or parametric functions (black boxes) to define the set of candidate hypotheses. This section describes the challenges of learning in logic and introduces inductive logic programming. Despite these challenges, inductive logic programming offers an important advantage over black-box methods: explanatory power.
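One method from the reading, version space learning, can be sketched concretely: maintain the set of all hypotheses consistent with the examples seen so far, eliminating any hypothesis that disagrees with a new example. The toy domain below (two attributes, each with two values plus a `?` wildcard) is an assumption for illustration, a small propositional stand-in for a FOL hypothesis space:

```python
from itertools import product

# Tiny toy domain (assumed for illustration): each example has two attributes.
ATTRS = [('red', 'blue'), ('small', 'large')]

def matches(h, x):
    """True if conjunctive hypothesis h covers example x ('?' matches anything)."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

# The full hypothesis space: every slot is a value or the wildcard '?'.
hypotheses = set(product(*[vals + ('?',) for vals in ATTRS]))  # 9 hypotheses here

def eliminate(hs, example, label):
    """Keep only hypotheses that classify the labeled example correctly."""
    return {h for h in hs if matches(h, example) == label}

examples = [
    (('red', 'small'), True),    # positive example
    (('blue', 'small'), False),  # negative example
]
vs = hypotheses
for x, y in examples:
    vs = eliminate(vs, x, y)
print(sorted(vs))  # [('red', '?'), ('red', 'small')]

# Noise is lethal: one mislabeled example contradicts the survivors
# and empties the version space entirely.
noisy = eliminate(vs, ('red', 'small'), False)
print(len(noisy))  # 0
```

Enumerating the whole space is only feasible in toy domains; the real algorithm tracks just the most-specific and most-general boundary sets, but the collapse under noise is the same.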

Questions you should be able to answer after reading are:

  1. What is the size of the hypothesis space in FOL?
  2. How does version space learning work in principle?
  3. Why is noise so lethal to version space learning?
  4. Does explanation-based learning really learn anything other than shortcuts?
  5. How is explanation-based learning similar to memoization of functions?
  6. What is the essential difference between explanation-based and relevance-based learning?
  7. What is the essential difference between relevance-based and knowledge-based inductive learning?
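The memoization analogy in question 5 can be made concrete. A memoized function caches a result so the expensive computation never runs again for the same input, much as explanation-based learning caches a rule extracted from a proof so the derivation need not be repeated. The `derive` function and its goal string below are hypothetical stand-ins for an expensive inference step:

```python
import functools

calls = []  # records how many times the expensive derivation actually runs

@functools.lru_cache(maxsize=None)
def derive(goal):
    """Hypothetical stand-in for an expensive proof/derivation."""
    calls.append(goal)
    return f"proof of {goal}"

derive('ancestor(alice, carol)')
derive('ancestor(alice, carol)')  # answered from the cache; no re-derivation
print(len(calls))  # 1
```

The analogy is not exact: memoization only speeds up repeats of the identical input, whereas explanation-based learning generalizes the cached proof into a rule that applies to new instances of the same pattern.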