This special issue offers a broad range of papers showcasing the way cognitive psychologists use formal models to guide their theoretical and experimental investigations of human reasoning. The timing could not be more appropriate, for the psychology of reasoning is at a point where formal models are at the forefront of contemporary research. The psychology of reasoning was long characterized by an extreme focus on deductive reasoning, to such an extent that it could almost be identified with the psychology of deduction (Evans 2006). Since the founding result of the field was that people did poorly on simple deductive tasks (Wason 1968; Wason and Brooks 1979), researchers were generally suspicious about the relevance of formal models to their work. More specifically, it seemed that classic deductive logic, which had long stood as the natural formalism on which to model human reasoning, could not do a proper job after all.
For about two decades (roughly, the 1980s and the 1990s), research on human reasoning was primarily concerned with laying bare the cognitive roots of deductive errors. Researchers made extensive use of conditional syllogisms and truth-table verification tasks, in order to identify which deductions were harder than others, and to offer a cognitive explanation for their difficulty. Major theories were developed during this period (Braine and O'Brien 1998; Johnson-Laird and Byrne 1991), which succeeded (in particular) in accounting for subtle changes in deductive difficulty. The Mental Model Theory (developed by Philip Johnson-Laird and collaborators) was among these major theories, and has continued fueling research to the present day. This longevity, though, came with so many adaptations and refinements that it became hard to speak of a single Model Theory. Indeed, the theory progressively underwent many independent changes aimed at accounting for specific phenomena, and the need increased for some unification of the mental model theories that co-existed in the field. The article in this special issue by Sangeet Khemlani and Philip Johnson-Laird provides such a unified computational implementation of the theory, called mReasoner.
During the late 1990s and the early twenty-first century, the psychology of reasoning slowly moved towards a broader range of problems than deduction, offering new challenges and new methods, which eventually coalesced into what some call the New Paradigm psychology of reasoning (Bonnefon 2011; Elqayam and Over 2012; Evans 2012). In essence, this new paradigm psychology turned the tables on deduction. It became increasingly common for psychologists to observe that people did very little deducing in everyday life, and that the study of deduction was accordingly unlikely to deliver general insights about how people reasoned.
Although it might be too early to precisely characterize this new paradigm psychology, most would agree that it rests on three ideas: that the mind can process information in two ways, and thus that there are two ways people can reason from a given problem (Evans 2008; Kahneman 2011); that preferences over states of the world matter when reasoning about these states of the world (Bonnefon 2009); and that people mostly process uncertain information, to reach uncertain conclusions (Oaksford and Chater 2007). These three building blocks of the new psychology of reasoning have direct consequences for the formal models psychologists consider in their work, and some of these consequences are explored in this special issue.
For instance, the idea that different reasoners might engage different cognitive processes when solving the same problem makes it hard to interpret responses at the group level, since the average response to a problem might only reflect a mixture of qualitatively different responses. This concern has led psychologists to reflect on which statistical model of their data might best reflect their theoretical models of cognitive processing. The contribution of Andrew Fugard and Keith Stenning offers a critical survey of how various statistical models of individual differences in reasoning have attempted to embed some notion of cognitive processing.
Other contributions in this special issue draw on the idea that everyday reasoning is primarily aimed at processing uncertain information. For example, the contribution of Ulrike Hahn, Adam Harris and Mike Oaksford demonstrates how Bayesian probability can provide a normative standard by which to evaluate the strength of everyday arguments, and unify research on reasoning and argumentation under a common theoretical and experimental framework. In line with this ambition, the article by David Lagnado, Norman Fenton and Martin Neil demonstrates how the qualitative component of graphical Bayesian networks can be applied to model evidential reasoning in a legal context, focusing on witness and alibi testimonies, as well as their interaction with other pieces of evidence.
Finally, two contributions to this special issue address the important problem of how to model human causal reasoning. York Hagmayer and Ralf Mayrhofer offer an introduction to the use of hierarchical Bayesian models for modeling human causal reasoning, review the empirical state of the art in favor of this strategy, and discuss its limitations. They argue in particular that while hierarchical Bayesian models can show what optimal inferences would be, cognitive process models are required to explain how these inferences can be realized given our cognitive limitations. The article by Philip Fernbach and Bob Rehder rises to this very challenge, by integrating a notion of cognitive effort with Bayesian causal networks. In three experiments, they show that human reasoners deviate from the normative conclusions of a Bayesian network because of the cognitive shortcuts they take in order to reduce processing effort.
Overall, the articles in this special issue offer an accurate snapshot of the nature and role of formal models in contemporary psychology of reasoning. Psychologists can develop their own formal models, suited to their specific needs; they may rely on statistical models for capturing individual differences in reasoning; and they often draw on Bayesian formalisms, qualitative or quantitative, in order to model everyday reasoning for causal, evidential, and argumentative purposes. While it is still uncommon for scholars of reasoning to seek inspiration in non-Bayesian formalisms developed in Artificial Intelligence (Pfeifer and Kleiter 2009; Rahwan et al. 2010), endeavors such as this special issue hold the promise of strengthening the dialogue between the two communities. The exciting task lies before us to join forces and fully integrate computational and experimental approaches to human reasoning.
Bonnefon, J. F. 2009. A Theory of Utility Conditionals: Paralogical Reasoning from Decision-Theoretic Leakage. Psychological Review, 116: 888–907. (doi:10.1037/a0017186)
Bonnefon, J. F. 2011. Le raisonneur et ses modèles, Grenoble, France: Presses Universitaires de Grenoble.
Braine, M. D. S. and O'Brien, D. P. 1998. Mental Logic, Mahwah, NJ: Erlbaum.
Elqayam, S. and Over, D. E. 2012. Probabilities, Beliefs, and Dual-processing: The Paradigm Shift in the Psychology of Reasoning. Mind and Society, 11: 27–40. (doi:10.1007/s11299-012-0102-4)
Evans, J. S. B. T. 2006. Logic and Human Reasoning: An Assessment of the Deduction Paradigm. Psychological Bulletin, 128: 978–996. (doi:10.1037/0033-2909.128.6.978)
Evans, J. S. B. T. 2008. Dual-processing Accounts of Reasoning. Annual Review of Psychology, 59: 255–278. (doi:10.1146/annurev.psych.59.103006.093629)
Evans, J. S. B. T. 2012. Questions and Challenges for the New Psychology of Reasoning. Thinking and Reasoning, 18: 5–31. (doi:10.1080/13546783.2011.637674)
Johnson-Laird, P. N. and Byrne, R. M. J. 1991. Deduction, Hillsdale, NJ: Lawrence Erlbaum Associates.
Kahneman, D. 2011. Thinking, Fast and Slow, New York: Farrar, Straus and Giroux.
Oaksford, M. and Chater, N. 2007. Bayesian Rationality: The Probabilistic Approach to Human Reasoning, Oxford: Oxford University Press.
Pfeifer, N. and Kleiter, G. D. 2009. Framing Human Inference by Coherence Based Probability Logic. Journal of Applied Logic, 7: 206–217. (doi:10.1016/j.jal.2007.11.005)
Rahwan, I., Madakkatel, M. I., Bonnefon, J. F., Awan, R. N. and Abdallah, S. 2010. Behavioral Experiments for Assessing the Abstract Argumentation Semantics of Reinstatement. Cognitive Science, 34: 1483–1502. (doi:10.1111/j.1551-6709.2010.01123.x)
Wason, P. C. 1968. Reasoning about a Rule. Quarterly Journal of Experimental Psychology, 20: 271–281. (doi:10.1080/14640746808400161)
Wason, P. C. and Brooks, P. G. 1979. THOG: The Anatomy of a Problem. Psychological Research, 41: 79–90. (doi:10.1007/BF00309425)