Inductive Reasoning

While Sherlock’s over there blogging The Science of Deduction, I’d argue most of human reasoning is inductive. We see lots of examples (e.g., 10 million white swans) and then try to explain them (all swans are white). Checks out, right?

Primary Readings

Everyone should read these and be prepared to discuss:

Goodman, N. (1955).
The new riddle of induction. Chapter 3 of Fact, fiction, and forecast (pp. 59–83). Cambridge, MA: Harvard University Press. (On Learn) This is a classic. Nelson Goodman was an influential philosopher of science (not to be confused with the living, Stanford-based scientist Noah Goodman below). His thought experiment about “grue”, in particular, has been a persistent challenge for theories of induction.
Wason, P. C. (1960).
On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129–140. This is another classic. Peter Wason, whom we met in the Representation topic, has had at least two lasting impacts on cognitive science: one with his card selection task and another with this inductive reasoning task.
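In Wason’s task, participants must discover a rule governing triples of numbers by proposing their own test triples; most propose only triples they expect to fit their hypothesis, and so never eliminate it. A minimal sketch of that logic (the specific hypotheses here are illustrative choices, though the true rule matches the one Wason used):

```python
# Experimenter's true rule in Wason (1960): any strictly increasing triple.
def true_rule(triple):
    a, b, c = triple
    return a < b < c

# A participant's overly specific hypothesis: "numbers increasing by 2".
def hypothesis(triple):
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Confirmatory tests: triples the participant expects to fit their hypothesis.
confirmatory = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]

# Every confirmatory triple satisfies BOTH rules, so no such test can
# falsify the hypothesis -- this is the "failure to eliminate".
assert all(true_rule(t) and hypothesis(t) for t in confirmatory)

# Only a triple the hypothesis predicts should FAIL can discriminate:
probe = (1, 2, 3)
print(hypothesis(probe), true_rule(probe))  # False True -> hypothesis eliminated
```

The point the sketch makes concrete: confirming instances are uninformative whenever the true rule is more general than the hypothesis being tested.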

Secondary Readings

The presenter should read and incorporate these:

Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011).
How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279-1285.

Here’s another relatively modern paper as a comparison. Again, it is interesting to compare the writing styles. It would seem that hierarchical Bayesian inference can help make sense of how our inductive reasoning works, but does it really solve the riddle of induction? Note that this paper is also quite relevant to the Rationality topic.

The article asks how the mind builds rich models of the world from sparse, noisy, and ambiguous data, highlighting, for example, children’s ability to learn new words and concepts from just a few examples. Its answer combines statistics, structure, and abstraction: Bayesian inference over structured representations is applied to specific cognitive capacities, while abstract knowledge constrains which hypotheses learners entertain. Hierarchical Bayesian models show how that abstract knowledge can itself be acquired, in various representational forms. The article concludes by highlighting the potential of Bayesian approaches for understanding cognition and its origins, with implications for fields such as artificial intelligence.
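A toy version of the kind of Bayesian concept learning the article describes, using the "size principle" (more specific hypotheses are favored as consistent examples accumulate). The hypothesis space here is my own illustrative choice, not taken from the paper:

```python
from fractions import Fraction

# Illustrative hypothesis space over number concepts (assumed for this sketch).
hypotheses = {
    "even numbers":     {n for n in range(1, 101) if n % 2 == 0},
    "multiples of ten": {n for n in range(1, 101) if n % 10 == 0},
    "powers of two":    {1, 2, 4, 8, 16, 32, 64},
}
prior = {h: Fraction(1, len(hypotheses)) for h in hypotheses}

def posterior(data):
    # Size principle: likelihood of n examples drawn from concept h is
    # (1/|h|)^n, so smaller (more specific) hypotheses win with more data.
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in data):
            scores[h] = prior[h] * Fraction(1, len(extension)) ** len(data)
        else:
            scores[h] = Fraction(0)
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# After seeing 16, 8, 2, the specific "powers of two" dominates,
# even though "even numbers" is also consistent with every example.
post = posterior([16, 8, 2])
best = max(post, key=post.get)
print(best)  # powers of two
```

This captures the article’s core puzzle in miniature: a few positive examples, never explicitly contradicting the broader concept, still license a confident, narrow generalization.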

Goyal, A., & Bengio, Y. (2022).
Inductive biases for deep learning of higher-level cognition. Proceedings of the Royal Society A, 478, 20210068.

Here’s a recent paper by some of the big names in machine learning. It highlights issues and solutions around inductive generalization, and the role of priors or inductive biases, that closely parallel those in the human cognition literature.

The paper examines the hypothesis that human and animal intelligence can be explained by a few key principles. Aiming to close the gap between current deep learning and human cognitive abilities, it focuses on the inductive biases needed for higher-level, sequential, conscious processing, and argues that additional biases, particularly in how knowledge is represented, are required for flexibility, robustness, and adaptability. It also reviews the limitations of current machine learning systems in terms of performance, generalization, and robustness.
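To make "inductive bias" concrete, here is a minimal sketch (my own toy example, not from the paper): two learners see the same three points from y = 2x. Both fit the training data perfectly, but only the learner biased toward a small hypothesis class (linear functions) extrapolates; the unconstrained memorizer does not.

```python
# Three training points generated by the (unknown to the learners) rule y = 2x.
train = [(1, 2), (2, 4), (3, 6)]

# Learner A: strong inductive bias -- only considers y = w * x, estimates w.
w = sum(y / x for x, y in train) / len(train)
linear = lambda x: w * x

# Learner B: no bias -- memorizes the training pairs verbatim.
table = dict(train)
memorizer = lambda x: table.get(x, 0)  # arbitrary default off the training set

# Both agree on the training data...
assert all(linear(x) == y and memorizer(x) == y for x, y in train)

# ...but only the biased learner generalizes to a new input.
x_new = 10
print(linear(x_new), memorizer(x_new))  # 20.0 vs 0
```

The bias is what lets finite data pick out one of the infinitely many functions consistent with it, which is exactly the bridge the paper draws between machine learning and the philosophical problem of induction.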

Questions under discussion

  • Do humans rationally solve the problem of induction, and if so, how?
  • What are inductive biases and what are they good for?