Professor of Computer Science
Affiliate of the Electrical and Computer Engineering Department
Beckman Institute
Coordinated Science Laboratory
University of Illinois

Email: mrebl uiuc edu with AT and DOT in their intuitive positions

Current course: CS/ECE 548

Who is Mr. EBL?

1995-present Professor, Department of Computer Science & Department of Electrical and Computer Engineering, University of Illinois
1985-1995 Associate Professor, Department of Computer Science & Department of Electrical and Computer Engineering, University of Illinois
1981-1985 Assistant Professor, Department of Electrical and Computer Engineering, University of Illinois
1980 Instructor, Computer Science Department, Yale University
1979 Ph.D., Computer Science Department, Yale University
1974 B.S., Physics, University of South Dakota

What is EBL?

Explanation-Based Learning (EBL) is an approach to supervised machine learning that relies on prior domain knowledge. We now know that Lockean tabula rasa learning is mathematically self-inconsistent: bias (any preference over outcomes not due to the training data) is an unavoidable part of every learning algorithm. EBL can be seen as trying to maximize the bias of a learner, a philosophy that sets it apart from much of today's statistical machine learning. Importantly, EBL learns so as to be as consistent as possible with both the expert-supplied prior domain knowledge and the training examples; the prior knowledge is treated as a kind of analytic evidence that augments the empirical evidence of the training examples. These two evidence sources interact nonlinearly, so together they can guide the learner far more strongly than the sum of the information in the training set and the information in the prior knowledge.

In EBL, training examples are explained using the domain theory. An explanation is a justification for why a subset of training examples merit their assigned training labels. Other training examples are used to evaluate these conjectured explanations. The informational effect of a confirmed explanation is equivalent to that of all the examples the explanation covers. Since this set can be significantly larger than the training set, the learner can draw much stronger statistical conclusions than the training data alone would warrant. From one perspective, EBL uses prior knowledge to magnify the information content of the original training set. From another, EBL uses the training data to unlock or interpret the domain theory, limiting its "deductive" conclusions to those likely to be robust and useful in the real world.
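The explain-then-generalize step above can be sketched in code on the classic "cup" domain often used to illustrate EBL. This is a minimal, illustrative sketch: the domain theory, feature names, and function names are assumptions for the example, not anything from this page or the papers below. The domain theory proves the label for one training example, and the leaves of that proof (the observable features it actually used) become a general rule that ignores irrelevant attributes.

```python
# A hedged sketch of explanation-based generalization on the classic
# "cup" example. All rule and feature names here are illustrative.

# Domain theory: each target predicate holds when all of its body
# predicates hold; predicates not listed here are observable features.
DOMAIN_THEORY = {
    "cup": ["liftable", "stable", "open_vessel"],
    "liftable": ["light", "has_handle"],
    "stable": ["flat_bottom"],
    "open_vessel": ["has_concavity", "points_up"],
}

def explain(goal, example):
    """Return the set of observable features proving `goal` for
    `example`, or None if the domain theory cannot explain it."""
    if goal not in DOMAIN_THEORY:              # observable leaf
        return {goal} if example.get(goal) else None
    leaves = set()
    for sub in DOMAIN_THEORY[goal]:
        sub_leaves = explain(sub, example)
        if sub_leaves is None:
            return None                        # no explanation exists
        leaves |= sub_leaves
    return leaves

def ebl_rule(goal, example):
    """Generalize one explained example into a rule: the conjunction
    of just the proof's leaves, dropping irrelevant features."""
    leaves = explain(goal, example)
    if leaves is None:
        return None
    return lambda obj: all(obj.get(f) for f in leaves)

# One training example, including irrelevant attributes (red, ceramic).
train = {"light": True, "has_handle": True, "flat_bottom": True,
         "has_concavity": True, "points_up": True,
         "red": True, "ceramic": True}
rule = ebl_rule("cup", train)

# The learned rule covers a differently colored object...
blue_cup = {"light": True, "has_handle": True, "flat_bottom": True,
            "has_concavity": True, "points_up": True, "blue": True}
# ...but rejects an object missing a proof-relevant feature.
no_handle = dict(blue_cup, has_handle=False)
print(rule(blue_cup), rule(no_handle))  # True False
```

A single explained example thus yields a rule covering every object that shares the proof-relevant features, which is the sense in which one confirmed explanation carries the informational weight of many examples.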

This modern version of EBL works well even with noisy training data and imperfect approximate domain knowledge. Sadly, current textbooks, if they cover EBL at all, present a banal version dating from ten to twenty years ago.

Where can I read more about EBL?

Explanation-based feature construction. BIBTEX Shiau Hong Lim, Li-Lun Wang, and Gerald DeJong. In IJCAI07, the Twentieth International Joint Conference on Artificial Intelligence, pages 931-936, 2007.
Toward robust real-world inference: A new perspective on explanation-based learning. BIBTEX Gerald DeJong. In ECML06, the Seventeenth European Conference on Machine Learning, pages 102-113, 2006.
Explanation-based learning for image understanding. BIBTEX Qiang Sun, Li-Lun Wang, and Gerald DeJong. In AAAI06, the Twenty-First National Conference on Artificial Intelligence, pages 1679-1682, 2006.
Explanation-based acquisition of planning operators. BIBTEX Geoffrey Levine and Gerald DeJong. In ICAPS06, the Sixteenth International Conference on Automated Planning and Scheduling, pages 152-161, 2006.
Qualitative reinforcement learning. BIBTEX Arkady Epshteyn and Gerald DeJong. In ICML06, the Twenty-Third International Conference on Machine Learning, pages 305-312, 2006.
Generative prior knowledge for discriminative classification. BIBTEX Arkady Epshteyn and Gerald DeJong. Journal of Artificial Intelligence Research, Vol. 27, pp. 25-53, 2006.
Feature kernel functions: Improving SVMs using high-level knowledge. BIBTEX Qiang Sun and Gerald DeJong. In CVPR05, the IEEE Conference on Computer Vision and Pattern Recognition, pages 177-183, 2005.
Explanation-augmented SVM: An approach to incorporating domain knowledge into SVM learning. BIBTEX Qiang Sun and Gerald DeJong. In ICML05, the Twenty-Second International Conference on Machine Learning, pages 864-871, 2005.
Towards finite-sample convergence of direct reinforcement learning. BIBTEX Shiau Hong Lim and Gerald DeJong. In ECML05, the Sixteenth European Conference on Machine Learning, pages 230-241, 2005.
Rotational prior knowledge for SVMs. BIBTEX Arkady Epshteyn and Gerald DeJong. In ECML05, the Sixteenth European Conference on Machine Learning, pages 108-119, 2005.
Explanation-based learning. BIBTEX Gerald DeJong. In A. Tucker, editor, Computer Science Handbook. CRC 2nd edition, 2004.
The influence of reward on the speed of reinforcement learning: An analysis of shaping. BIBTEX Adam Laud and Gerald DeJong. In ICML03, the Twentieth International Conference on Machine Learning, pages 440-447, 2003.
AI can rival control theory for goal achievement in a challenging dynamical system. BIBTEX Gerald DeJong. Computational Intelligence, Vol. 15, No. 4, pp. 333-366, 1999.
Explanation-based learning: An alternative view. BIBTEX Gerald DeJong and Raymond J. Mooney. Machine Learning, Vol. 1, No. 2, pp. 145-176, 1986.
Generalizations based on explanations. BIBTEX Gerald DeJong. In IJCAI81, the Seventh International Joint Conference on Artificial Intelligence, pages 67-69, 1981.