In this chapter, I outline a way of approaching rationality assessment which I call the context-sensitive consequentialist approach and provide an example of how it can be applied to empirical data obtained from experimental studies based on reasoning tasks. As is well known, in recent decades different kinds of contextualist approaches have been employed in various philosophical fields, particularly epistemology, ethics, and the philosophy of language. Up to now, however, there has been no systematic attempt to apply such an approach to questions about human reasoning and rationality. Rather, researchers who have tried to explain away particular cases of alleged errors in reasoning by appealing to contextual and conversational factors (e.g. Hilton 1995; Politzer 1986; 2004) have failed to provide a general normative framework for their analyses. My attempt to develop a context-sensitive consequentialist approach to rationality assessment is intended to fill this gap.
The chapter is divided into three main sections. In the first, I characterize the general idea underlying the context-sensitive consequentialist approach to rationality assessment. I first examine the conversational approach to reasoning performance and the cognitive conception of context which its supporters assume. Then, I argue that such an approach has to be refined and integrated in order to provide a suitable normative framework for rationality assessment. In particular, starting from some considerations regarding the situatedness of speech acts, inspired by the work of J.L. Austin, I oppose an objective notion of context to the cognitive one assumed by conversational pragmatists, and then explore the implications that the adoption of an objective notion of context would have for the development of an approach to rationality assessment. In the second section, I characterize the main assumptions and purposes of the context-sensitive consequentialist approach to rationality assessment and provide its building blocks. By focusing on the strong interdependence between the particular structure of a reasoning task and its understanding on the part of the subjects, I outline a two-step normative framework for situationally establishing the legitimacy of the subjects' interpretations of the task and the normative appropriateness of their responses. In the last section, I consider how the context-sensitive consequentialist approach can be applied to empirical data obtained from a well-known experimental reasoning task, namely the Wason selection task. After providing an overview of the main issues related to the studies on this task, I examine whether, in its standard version, the most common answer is indeed as irrational as many psychologists have claimed.
Applying the two-step normative framework, I maintain that although, relative to their ordinary goals, the subjects' most common response can be considered rational, their real error resides in their framing of the problem, which does not match the information as explicitly presented in the experimental context.
In this section, I focus on the role of context in questions about human reasoning and rationality. First, I explain why reasoning cannot be detached from the context wherein it occurs. Second, I examine the conversational pragmatists' approach to rationality assessment and criticize their cognitive conception of context. Third, I turn to John L. Austin's analysis of the situatedness of speech acts and to the related idea of objective context. Finally, I consider the implications that such a notion of context could have for rationality assessment.
In the previous chapter, I assumed that human reasoning invariably occurs within what I have called a situational context (see Chapter 4, Section 4). According to a strict cognitivist point of view, however, one could consider human reasoning as an independent and self-centred cognitive activity. For example, as Denis Hilton (1995: 248) points out, "[m]ost psychologists conceive of judgment and reasoning as cognitive processes, which go on 'in the head' and involve only intrapsychic information processing". Indeed, the processing by which a conclusion or decision is reached takes place entirely within people's cognitive systems. Accordingly, the proponents of such a view would hold that the cognitive processes underlying reasoning work in the absence of context, even if some contexts can adversely or positively affect some instances of reasoning. As Daniel Andler (1993: 291) observes, the idea that reasoning can occur in the absence of context calls for a clarification as to what "absence of context" means. On the one hand, if we take this expression literally, it seems at least incoherent, since any event takes place somewhere. On the other hand, according to a more "technical" interpretation, "absence of context" can be understood as "something analogous to the notion of a physical process occurring in a vacuum" (Andler 1993: 291). According to Andler, many scholars would identify the vacuum where reasoning occurs with formality, namely absence of content. But this is a misleading equation. When faced with a reasoning task (even an elementary exercise from an introductory logic textbook), people attempt to solve it within a certain kind of context, and formality itself is one aspect of that context. In Andler's view, it is clearly wrong to assume that reasoning may operate in a context-free mode at some times, while at others it is context-sensitive.
According to him, the confusion arises from thinking of 'context-free' as 'occurring in normal, unmarked circumstances'; so context-sensitivity in the mild sense would amount to the existence of marked, non-default contexts leading to an output differing somewhat from the output obtained in the default context. (Andler 1993: 291)
Contextualization is at the core of human activity: whatever kind of activity people are engaged in, they are doing it within a context. Indeed, there is no such thing as solving a problem, deciding what to do next and the like, outside of some context where the information, instructions and other similar cues are given. Accordingly, what is needed is a closer examination of the contribution of context to human activities such as inferring conclusions, making predictions, judging the likelihood of a particular event, making decisions, and testing hypotheses. As with other widely used notions that are commonly referred to in analyzing everyday activities, however, context is difficult to define and grasp in all its features and roles.
In the study of human reasoning and rationality, the role of context has been widely discussed and analyzed by the supporters of the conversational or pragmatic approach to the analysis of reasoning performances (e.g., Hilton 1995; Politzer 1986; 2004; Politzer & Macchi 2000; 2005; Schwarz 1996; for a survey, see Lee 2006). Their pragmatic analyses are, above all, a reaction to the approaches to reasoning performance interpretation and evaluation employed in earlier psychological research on human reasoning. For a long time, researchers maintained that subjects' interpretations of a task must always fit the experimenter's representation of it, and that normative standards must be appropriate to this representation, regardless of how subjects actually understand or interpret the task. It has been implicitly assumed that the task with which subjects are faced is a well-defined one, that is, that it explicitly provides all the information necessary to solve it according to the representation the experimenter has assumed to be the right one. As Evans and Feeney (2004: 78) note, "any influence of prior knowledge or belief about the problem content or context [has been taken] to be normatively irrelevant to the definition of a correct answer". Recently, however, these classical assumptions about the interpretation and evaluation of reasoning performances have been widely re-examined and criticized. Several researchers now claim that the evaluation of subjects' performances on a reasoning task should always be relativized to their interpretation of the task, and that the conclusions they draw must be evaluated by considering both their goals and the background assumptions they have selected as relevant to solving the reasoning problem (see, e.g., Evans & Feeney 2004; Girotto 2004).
Such an approach raises a serious worry: experimenters may become too permissive in evaluating subjects' reasoning performance; that is, they may explain away any normatively inappropriate response by assuming that subjects have interpreted the reasoning problem in ways that cohere with their responses. If they do so, experimenters deprive standards of rationality of all their normative force (see also Chapter 1, Section 3.5). To prevent this, the supporters of the conversational approach have proposed to explain reasoning performances by appealing to contextual and conversational factors. They consider reasoning as an activity which takes place in a context, be it linguistic, interpersonal, or both, and cannot be detached from it. On this view, the inseparability of reasoning and context is not only a practical concern, but also a theoretical stance.
Supporters of the conversational approach maintain that studying reasoning and rationality from the point of view of pragmatics allows us to discover new factors that are likely to determine subjects' reasoning performances (see Politzer & Macchi 2000; 2005). In their view, what received approaches have not sufficiently examined is the context which arises from the definition of a problem. Therefore, before any experiment on reasoning can be made, it is necessary to consider the possible ways in which subjects may understand and interpret the reasoning task they are faced with. As Politzer and Macchi (2005: 120) observe, if, once a reasoning problem has been analyzed from the point of view of pragmatics, it is discovered that subjects may have understood the problem in ways that differ from what the experimenter assumed, and have thereby approached a problem different in nature from that devised by the experimenter, this may have deep consequences for the assessment of their reasoning performances. In particular, according to Politzer and Macchi (2005: 120-121), experimental tasks for which a normatively correct response has been defined should be examined at two different levels:
One, carried out at a micro-structure level, consists of a linguistic analysis of the premises or of the problem statement in order to make sure that they convey the meaning intended by the experimenter. A typical outcome of such an analysis is the identification of different possible interpretations due to the generation of conversational implicatures [...]. The other examination, at a macro-structure level, consists of identifying the representation of the task that participants are likely to build: a typical outcome of this examination is the identification of the kind of skill, knowledge, or ability that participants think they must exhibit in order to satisfy the experimenter's request.
The second stage of analysis focuses on the relationship between experimenter and subject. Their relationship is taken to be asymmetrical because it is the subject who tries to understand what the experimenter's intentions are. However, the intentions of the experimenter are not always completely transparent to the subject, and thereby the latter may attribute to the experimenter intentions that are very different from the experimenter's expectations. If this occurs without being recognized by the experimenter, the subject's interpretation of the task may affect the experimenter's analysis of the results and her evaluation of the subject's reasoning performance. Such a pragmatic method is situational in the sense that it focuses on situational (experimental) constraints in order to judge the normative appropriateness of subjects' reasoning performances. The explanatory situation is constituted by the experimenter and her relationship with the subject. Opposing pessimism about human rationality, supporters of the conversational approach hold that, by appealing to contextual and conversational factors, subjects' responses may very often be said to be "conversationally rational". As Hilton (1995: 264) points out, "many of the experimental results that have been attributed to faulty reasoning may be reinterpreted as being due to rational interpretations of experimenter-given information". But what does it mean to be "conversationally rational"? What kind of context do conversationally inspired analyses of reasoning performances imply? At this point, I need to say something more about the nature of the context in question. Supporters of the conversational approach, as I understand their claims, define the context as the set of assumptions that the reasoner supposes herself to share with the experimenter. Indeed, as seen above, they regard the participant as trying to understand the experimenter's intentions.
These assumptions do not come from the situational context but rather are part of the subjects' cognitive context.[1] If context amounts to the assumptions that the reasoner takes to be held in common by herself and the experimenter, then it is an essentially internal and cognitive notion. On this view, a reasoning performance could be regarded as normatively inappropriate only in relation to the reasoner's reconstruction of the experimenter's intentions about the reasoning task she is faced with. In this way, errors in reasoning may always be explained away by appealing to the reasoner's cognitive assumptions, and what we usually consider irrationality may become a conversationally rational way of reconstructing the goal set by the experimenter in the reasoning task. While such accounts are typically optimistic about human rationality, I hold that a contextualist account should not prevent criticism of subjects' reasoning performances. Indeed, with regard to the same reasoning performance, subjects might be reasoning rationally in conversational terms and yet be open to criticism (for example, if the subject's representation of the task context does not match the contextual information as explicitly presented in the experimental context). It seems to me that, in order to provide an appropriate and complete evaluation of human reasoning, it is necessary to consider the appropriateness of subjects' task interpretations. But it is only with respect to something external to the reasoner and independent of her cognitive assumptions that it makes sense to assess, or attempt to assess, her interpretations. So, if we want to have any normative standards for the evaluation of reasoning performances, then we must also have the means to evaluate the legitimacy of subjects' task interpretations. To fulfil this need, the conversational account should be integrated into a more general normative framework.
Let us consider the first step that has to be made in order to develop such a general normative framework. According to supporters of the pragmatic approach, as seen above, the evaluation of the correctness of a response to a given reasoning task should always be relativized to the subjects' interpretation of the experimenters' intentions. As I understand their proposal, the reasoner's cognitive context plays a fundamental role because the context is regarded as the set of possible assumptions that the reasoner supposes herself to share with the experimenter. But, if the context of evaluation corresponds to the cognitive situation of the reasoner, the appropriateness of a given reasoning performance would demand too little of the reasoner: it would demand that she behave in accordance with her own interpretation of the situation, but not that her understanding of the task display a correct grasp of the situation. In my view, what is needed is something external to the reasoner's cognitive context and independent of her cognitive situation against which to assess the legitimacy of her problem reconstruction, and I identify this "something" with the situational context, conceived as "objective". The idea that the context of evaluation has to be regarded as objective has been held by only a few philosophers. Indeed, it is a controversial question whether we can delimit the objective context completely. According to Carlo Penco, for example, if we are really interested in using an objective notion of context, we should integrate it into a cognitive one. He holds that the objective context is, most of the time, the context we recognize as objective. We know both that there is some objective reality and that we might get it wrong. To describe an objective context as such, independent of a cognitive one, is therefore a risky enterprise.
Any attempt to define it in an absolute way is misleading, because it takes a description - given always inside some theory or cognitive context - as an objective unrevisable description. Objectivity is always a result of our interaction, not a datum. (Penco 1999: 280)
I admit that identifying the context of a conversational event in an absolutely objective way is a risky (or maybe an impossible) enterprise. However, objective context may be characterized without requiring such an absolute point of view. Most prominently, as pointed out by Marina Sbisà, the idea that the context of evaluation should be regarded as objective plays a fundamental role in the evaluation of speech acts, as originally characterized by John L. Austin (Austin 1975; Sbisà 2002). Without entering into the details of speech act theory, I focus on the role that Austin attributes to the situational context in the evaluation of an assertion as true or false.[2] According to Austin (1975: 143), although an assertion such as "France is hexagonal" is usually considered to be perfectly determinate, it cannot be said to be true or false until the interlocutors' goals are specified, which usually happens tacitly. As a consequence, in order to qualify an assertion as true or false, the context in which it has been made, as well as the interlocutors' goals, has to be taken into account. So, the assertion "France is hexagonal" may be judged true if made in a certain context with a certain goal (e.g., a general considering the sides from which his army could invade France), but may be judged false if made in another context with a different goal (e.g., a geographer describing the borders of France in detail) (Austin 1975: 142; Sbisà 2002: 426). Within this framework, it is assumed that the goals of the interlocutors determine the aspects of a situation against which the truth/falsity of a speech act concerning that situational context is to be evaluated. So, the fact that the truth or falsity of a sentence may vary from context to context shows that the situatedness of the assertion (like that of any other speech act) is strictly linked with the delimitation of its context.
As Marina Sbisà (2002: 427) points out, if a speech act is produced and understood in a context and is therefore a situated event, it seems reasonable to think that it should be evaluated with respect to that context. Consequently, in order to yield a definite evaluation of a speech act (in terms of felicity/infelicity, appropriateness/inappropriateness, truth/falsity), context must itself be delimited [...].
Three interesting lessons can be drawn from Austin's view of speech acts: (i) every speech act is produced and understood in a context and is therefore a situated event; (ii) its context of evaluation corresponds to neither the participants' cognitive contexts nor the contextual assumptions that they suppose to share with one another; rather, it depends on the context in which the interlocutors are situated; (iii) the delimitation of the context of evaluation is determined by the interlocutors' goals. In my view, the kind of normative framework Austin proposes for the evaluation of an assertion as true or false has interesting implications for the development of a context-sensitive approach to rationality assessment.
Consider now how Austin's lesson bears on the question of rationality assessment. First of all, I am aware that speech acts and reasoning performances are two different kinds of human behavior, albeit ones that sometimes overlap in practice. But they share two fundamental characteristics: they are both situated and goal-oriented. That has led me to wonder whether instances of reasoning, like speech acts, should be evaluated according to the context wherein they occur and the goals in the light of which they are performed. Consider the roles that goals and context play in rationality assessment. On the goal side, different purposes are served by different kinds of reasoning. In consideration of that, reasoning performances should not be assessed in abstraction from their different purposes: when assessing a reasoning performance, we should first ask what purpose the reasoner's answers are aimed at and then consider whether they are right or wrong, correct or incorrect, with respect to it. Indeed, without any prior reference to the goal pursued by the reasoner, assessing her reasoning performance seems pointless. On the context side, when an individual is about to engage in some activity, such as performing a task or solving a problem, there is an objective situation that determines the type and quality of information actually available. This situation should not be regarded as part of the task or problem at hand, but rather as what generates the task or problem from the subject's perspective. Without situational context, no reasoning task or problem can occur. However, a main worry about the situational context is that, while it has to be delimited, there are no clear criteria for its delimitation.
In order to account for the delimitation of context, the most natural candidate may be the fact that not all the situationally available information is actually relevant to the goal the reasoner is trying to achieve (a suggestion in this direction comes from Gauker 1998; 2003: 55-58). In particular, the reasoner's goal can be regarded as that which delimits the situational context, distinguishing the situationally available information which is relevant to reaching it from that which is not. Once the situational context is set by the reasoner's goal, the problem she has to tackle is in principle defined and its solution is determined (about which, however, the reasoner may be wrong in many ways). Thus, the interaction between the situational context and the reasoner's goal gives rise to a frame of reference which constrains the ways in which the reasoner's goal is to be attained. In other words, such a delimitation of the situational context leads to a normative frame, regardless of how the subject conceives of it.[3] One may object that if the goal the subject aims at contributes to the delimitation of the relevant normative frame, such a frame is in part subjective. In my view, however, the fact that the goal of the subject plays a fundamental role in the delimitation of context does not keep the resulting framing of the problem from being subject to normative evaluation. Starting from the same objective situation and goal, how the reasoner conceives of the frame within which she is trying to achieve the goal is one thing; the normative frame which dictates the ways in which her goal has to be attained, and about which she may be wrong in many ways, is another.
What I have presented here is only a rough account of how goal and context might come into the rationality assessment. I have assumed both that reasoners have goals and that there is a distinction between those instances of reasoning that fit with a context and those that do not fit. Of course, greater effort has to be put into providing a more detailed account of how such a contextualist approach can be applied to specific situations, if we want to make a serious attempt at making the consequentialist picture of human rationality fully situated.
In this section, I characterize the building blocks of the context-sensitive consequentialist approach to rationality assessment sketched out in the last part of the previous section and explain in particular how the resulting approach can be applied to data on human reasoning obtained in experimental settings. First, by focusing on the particular structure of a reasoning task and its understanding on the part of the subjects, I provide some preliminary considerations as to how people approach reasoning problems and then point out the consequences that these considerations have for an approach to rationality assessment. Second, I propose a two-step normative framework for situationally establishing the legitimacy of subjects' task interpretations and the normative appropriateness of their responses. Finally, I explain why such a normative framework can both account for the situatedness of human reasoning and provide a satisfactory normative background against which to assess it.
To begin with, it is clear that the subject's answer to a reasoning problem is not self-explanatory: the answer itself gives no explicit indication of how she has arrived at it. So, if we want to assess how people reason, we have to move beyond the output of their strategies in reasoning tasks. In simple terms, it is impossible to assess a piece of reasoning as rational or irrational without first understanding it. Addressing this issue requires that we dig below the output surface. The outputs of people's reasoning strategies are the tip of an iceberg. It is not my aim here to discover and explain what is below the waterline, that is, to develop a model of cognitive performance in reasoning tasks. However, in attempting to develop an approach to rationality assessment, the first task is to delimit and better define what counts as a reasoning performance. That is, we have to make explicit what is subject to assessment. Only when the object of evaluation has been identified can we start setting the appropriate normative standards for it.
When analyzing empirical data on human reasoning, most cognitive psychologists have assumed that every reasoning task is associated with a single normative model, which has been considered to be not only the benchmark of normatively appropriate performance in the reasoning task, but also the interpretive grid of that performance. In other words, normative models of rationality have been assumed in order both to articulate normative standards and to describe reasoning performances. However, explaining why subjects give the answers they do is one task; assessing their rationality is another. Accordingly, these two tasks have to be neatly distinguished. Otherwise, misunderstanding may arise about what subjects are really doing in a given task. To understand a reasoning performance, it is important to single out (i) the reasoner's framing of the reasoning problem (i.e., the reasoner's understanding of what the problem is about) and (ii) the particular reasoning strategy which she adopts to solve it. Stage (i) has usually been regarded as not belonging to reasoning performance. As is shown by proponents of the conversational approach, however, it is an undeniable fact that the reasoner's framing of the problem determines the decision about which reasoning strategy to adopt. As a result, it could be argued that the subjects' understanding of the problem determines to a certain extent the outcome of their reasoning. It is a matter of fact that problem structure and subjects' understanding of it have a bearing on the question of rationality assessment.
Let us consider more deeply the structure of experimental reasoning tasks, in which all problem information is available in the experimental context and the situation in which the problem is raised does not change over time. If we look at such tasks, we find that in the great majority of cases problem presentations comprise a set of information (the premises, the context surrounding the premises, the instructions, the examples...) and a question, albeit one that may not be fully explicit. When reasoning experiments are presented, subjects primarily deal with a task of discovery, that is, the task of discovering what the problem is about and what goal they are required to achieve. In other words, subjects first of all have to orient themselves. Questions they might ask are: what kind of task am I faced with? And then, what should I do with this particular case? To answer these questions, subjects fix a goal according to their interpretation of the question posed by the experiment and work towards it. Once the goal is determined, they usually frame the problem in a way that makes sense of part of the situationally available information: so, (i) subjects may ignore a piece of information or take it to mean something else, and (ii) they may also bring other information (retrieved from memory) into the problem.[4]
With regard to (i), various cases may occur. Firstly, as is pointed out by supporters of the conversational approach, subjects may interpret information obtained from the problem presentation and the experimental setting in ways that the experimenters may not have considered and that conflict with their interpretation of the problem. As a result, information which the experimenters consider irrelevant may be taken to be relevant by the experimental subjects. Such allegedly relevant information may change the form of the problem structure and thereby what the normatively appropriate response to the problem is (see, e.g., Sperber et al. 1995: 44). Secondly, in experimental contexts as well as in ordinary life, subjects do not necessarily take into consideration all the information available in the context. They may consider some pieces of information to be more relevant than others and, given the limitations of their cognitive capacities, may tend to spend their time and effort on information from which they expect to benefit (see van der Henst 2006).
As to (ii), subjects bring into the current situation their experience with similar problems, including their knowledge related to the question posed by the problem. Sometimes this will lead to the recombination of contextual information in a way that changes the framing of the problem from how it was intended by the experimenters. Finally, all of the information that has been identified as relevant must be represented in a format that can fit with a reasoning strategy, and this strategy will be applied to the represented information in order to reach the reasoner's goal. Even when a problem has been framed, there always remains the question of which reasoning strategy best applies in such a case. In order to reach the reasoner's goal, different reasoning strategies can be selected: in the case of hypothesis testing, for instance, one can adopt either verificationist or falsificationist reasoning strategies, depending on the information available in the context.
Distinguishing between the subject's understanding of the reasoning problem and her choice of a particular reasoning strategy to solve it helps us understand how the context of the problem is mentally recognised and examined, and why people may find the problem difficult. I am not addressing here the psychological question of how the information provided by the problem presentation is mentally processed and represented, or how the cognitive processes underlying reasoning can be characterized.[5] I would like to remain neutral about that. My primary aim here has been to show that, because of its complexity, assessing a reasoning performance requires a more complex and refined normative background than is usually assumed. The comprehension of the task and the search for relevant information should be regarded as part of every reasoning performance (see, e.g., Sperber et al. 1995: 44). Focusing on such aspects should help us define what task subjects are attempting to perform and consequently what kind of normative standard should be adopted and applied in assessing their performances.
The last section was aimed at giving an overview of the factors which influence and determine any reasoning performance. In accordance with the picture given, I would like to provide a two-step normative framework that is applicable both to subjects' task interpretations and to their responses. In the vast majority of reasoning studies, it has been taken for granted that the selection of a normative standard for the assessment of subjects' reasoning performances is entirely distinct from the question of the subjects' interpretations of the reasoning task. Thus, subjects' response accuracy has been regarded as the only object of evaluation in rationality assessment. On this view, rationality and reasoning have nothing to do with subjects' understanding of the task and their search for relevant information, which have been regarded as mere cognitive variables. As seen in the previous chapters, reasoning performances have been assessed in terms of one's ability either to comply with a set of normative principles (in the case of deontological approaches) or to reach a specific range of goals efficiently (in the case of consequentialist approaches). However, as suggested in the previous section, no normative claim about specific pieces of reasoning should be made without first examining how subjects have framed the reasoning problem and what background assumptions they have made. Subjects may have different possible "understandings" of the problem context, and the one they select influences how they approach the reasoning problem and which answers are the "correct" ones to give. Like subjects' responses to tasks, then, subjects' task interpretations require a normative background against which their appropriateness or legitimacy can be assessed. Otherwise, not one but many normatively appropriate answers to a reasoning problem are possible, and which one should be selected depends upon what the reasoner's understanding of the problem is.
Any response directly following from the subject's personalized representation of the problem and its context would then have to be regarded as normatively appropriate. On such a view, however, rationality assessments become so flexible that, whatever subjects do, they may always be regarded as rational.
What is needed is an approach half-way between classical approaches to rationality assessment and relativistic ones. In my view, such a half-way approach involves a two-step normative framework for situationally establishing the legitimacy of subjects' task interpretations and the normative appropriateness of their responses. This approach specifies two kinds of constraints that operate within the experimental context. On the one side, the subjects' understanding of the reasoning problem is always constrained by the information explicitly presented in the experimental situation. On the other side, as consequentialists hold, the evaluation of success in reasoning cannot be separated from the evaluation of success in achieving goals. Let me explain these two points.
On the "understanding" side, what is of primary importance is the presentation of the problem, which mediates the subjects' understanding of it. When an individual is about to engage in some activity, such as performing a task, there is an objective situation that determines the type and quality of information actually available. For an experimenter, the first step is to engage in a descriptive exploration of the range of interpretations that the reasoning problem admits and their context-dependence. Otherwise, there is a risk that experimenters misconceive the subjects' reasoning performances. Consider two examples. With regard to the Wason selection task, the semantics of conditionals suggests that there is more than one possible interpretation for statements such as those which Wason used in the selection task, envisaging the conditional as representing different conditional relations (e.g. Stenning & van Lambalgen 2001; 2004). Over the years, several of these interpretations have been adopted in order to explain subjects' selections. The question is how to decide which interpretations may legitimately apply to the reasoning problem as proposed by the experimenter. In the same vein, depending on the context, words such as "probability" and "likely" may be interpreted in very different ways which do not strictly match their mathematical meaning.[6] In the case of the Linda Problem, Jonathan Adler (1991: 261) has demonstrated that different meanings of probability may "all legitimately apply to the problem as posed, and the evidence does not decisively show one of these to be uniquely applicable". More generally, as shown by proponents of the conversational approach, subjects do not usually have an explicit and complete grasp of the problem as intended by the experimenter, but most of them certainly have an implicit grasp of what the problem is about and that allows them to construct, with suitable contextual support, a wide range of interpretations of it. 
If they do not find a set of clear cues or instructions, the subjects try to frame the problem in the light of their previous ordinary and common experience. Indeed, insofar as all they need is to understand what kind of task they are faced with and to figure out what they have to do in that particular situation, the subjects always tend to interpret the task by giving it practical significance (usually strictly connected with their ordinary activities) (see, e.g., Stanovich 1999: 190-207). Regardless of how experimenters interpret a reasoning problem, if the subjects' framing of the problem matches the contextual information available in the experimental situation, it should be regarded as a legitimate interpretation of that problem. Thus, the same problem may be framed in different ways on the condition that the resulting representation matches the contextual information as explicitly presented. For example, as we will see with respect to the selection task, when dealing with a reasoning problem, subjects usually make several background assumptions about how things actually are in the light of their previous experience, so that the normatively appropriate response they draw can be due to their reliance on these assumptions rather than to the use of a particular reasoning strategy. Generalizing from these considerations, I hold that the first set of constraints on rationality assessment comes from the experimental setting, which delimits the range of frames which subjects may legitimately apply to a problem.
I turn now to the goal-relativity of the normative framework. Depending on the goal-structure of any framing, in order to assess reasoning performances we should first ask what purpose the reasoner's answers are aimed at, and then consider whether they are right or wrong, correct or incorrect, with respect to it. What is the subject's underlying goal, and is she likely to achieve it? To ask this question is to take the consequentialist view. People show their rationality also in achieving simple and basic goals in ordinary activities. Contrary to the claims of evolutionary psychologists and pragmatists, I am not referring to the most important goals of one's life, or to reproductive success. Indeed, our long-term goals do not always align with our short-term decisions, which are usually made within a specific situational context. Our general goal of good health may be in conflict with our preferences for fatty foods and cigarettes. Goal-directed activities are as diverse and varied as the whole range of legitimate human activities. In an experimental setting, subjects may engage in an activity such as inferring conclusions, making predictions, judging the likelihood of a particular event, making decisions, testing hypotheses, and so on. In doing so, subjects are not necessarily attempting to satisfy the experimenters' expectations, but rather to solve the problem as they understand it. Thus, the correctness of a reasoning performance is determined in terms of its efficiency and effectiveness in attaining one's goals. Indeed, it is inevitable that in any goal-oriented activity some ways of achieving goals are better than others.
To sum up, the two-step normative framework of the context-sensitive consequentialist approach states that the information explicitly presented in the experimental context places constraints on the goals and frames that it will be appropriate for the reasoner to activate when dealing with the task, which in turn determine the type of reasoning strategy that will be effective and efficient.
Let us consider more attentively the role to be attributed to the subjects' understanding of the reasoning problem in rationality assessment, and how strictly it is related to how experimenters conceive of the situational context.
The subjects' understanding of a reasoning problem has always been a crux for the researchers involved in the rationality debate. Two different tendencies can be found. On the one hand, those who measure subjects' reasoning performances against a predetermined normative standard assume that their subjects have understood the reasoning problem in the same way as they do and with the same background assumptions. However, as seen above, this approach is flawed, since the same reasoning problem may be legitimately interpreted in different ways. On the other hand, other researchers hold that there is always an appropriate way to interpret the answers of the subjects as showing that they have understood the reasoning problem differently from how the experimenter understood it, so that their answers can be regarded as normatively appropriate to the problem they are actually responding to. In this second case, priority is given to observed reasoning performance, not to the normative model: when assessing reasoning performances, the adoption of a normative principle has to be situationally justified on the basis of what people actually do. However, the flexibility of what may be viewed as normatively appropriate may appear to raise a fundamental problem for the entire approach. As I have argued, it is only with respect to something external to the reasoner, and independent of her cognitive assumptions, that it makes sense to assess, or attempt to assess, her reasoning performance. Accordingly, I have proposed an approach to rationality assessment according to which, if reasoning is a situated activity, performed in a context, it seems reasonable to think that it should be evaluated with respect to that context. Such an approach allows us to see reasoning performances, which have typically been assessed in strictly cognitive terms, as having a complex relation with the context wherein they take place.
As said above, context should not be regarded as a set of the reasoner's cognitive assumptions but, rather, as something objective, determined by how the actual situation is and by the goals of the ongoing activities. In contrast with the two approaches mentioned above, before one can legitimately assess an instance of reasoning, one must know the frame within which the reasoning is being done. Within this theoretical framework, to be rational means to do the best one can do in the actual circumstances, and the actual circumstances are those in which the reasoner is situated. Given her goal, the reasoner ought to make the best use of all the relevant information at her disposal.
With regard to this point, consider the Linda Problem again (see Chapter 1, Section 3.1.2). There has been considerable debate about Tversky and Kahneman's interpretation of the empirical data they obtained from this experimental reasoning task (see for a summary, e.g., Stanovich 1999: 121-124). In particular, many of their critics have argued that there are alternative interpretations of the problem that are, given the problem context, more appropriate than the one which Tversky and Kahneman regard as appropriate. But what kind of goal do subjects think they are required to achieve in the Linda Problem? As Politzer and Macchi (2000: 87) observe, "since they are requested to produce a judgment of probability on the basis of the description of a character, in all likelihood participants wish to show that they possess the skills to find what maximizes psychological and behavioural coherence by identifying the kind of activity which provides greater relevance to the description of the character". If this is really what subjects aim at in tasks based on the Linda Problem, we may call into question the idea that they are reasoning in a normatively inappropriate way: given the way in which subjects understand the reasoning problem, they have not contravened any normative principle of rationality. If we assume that subjects, after having read the problem presentation, suppose that the total amount of information given about Linda suggests that the experimenter knows a lot about her, it becomes reasonable to understand the statement "Linda is a bank teller" as implicitly conveying that she is not active in the feminist movement. Obviously, if subjects understand the statement "Linda is a bank teller" in such a way, then the fact that they rate "Linda is a bank teller and is active in the feminist movement" as more likely than "Linda is a bank teller" cannot count as an error (Hilton 1995: 260; see also Dulany & Hilton 1991).
Given all the information available in the experimental context, the subjects' interpretation of the task does not seem to be inappropriate, so that subjects can be considered as making the best use of all the relevant information at their disposal.
In this section, I consider how the context-sensitive consequentialist approach to rationality assessment can be applied to examine data on human reasoning obtained from a well-known experimental task, that is, the Wason selection task. In my view, the ways in which this task has been studied and discussed have strongly influenced discussions about human rationality. In particular, I will consider here the central question of whether, in its standard version, the most common answer should be considered as evidence of human irrationality, as many psychologists have claimed.
The section is divided into four parts. In the first part, I reconsider the standard version of the selection task by focusing on Wason's use of the hypothetico-deductive model as the normative standard against which to assess subjects' selections. In the second part, I provide an overview of the research based on the selection task. In the third part, I point out the differences between descriptive and deontic versions of the task and explain why, in debates on human rationality, descriptive versions give rise to greater problems than deontic versions do. I then turn to two attempts to explain subjects' performances on selection tasks with indicative conditionals and provide a critical assessment of them. Applying the context-sensitive consequentialist approach, I maintain that although, relative to their ordinary goals, the subjects' most common response can be considered rational, their real error resides in their framing of the problem, which does not match the information as explicitly presented in the experimental context. Indeed, as we will see, most subjects tend to evaluate the indicative conditional with reference to a wider domain of evidence than the narrow one provided by the experimental context, as they do in ordinary life.
In the first chapter, I pointed out that Peter Wason designed the selection task as one of hypothesis-testing (see Chapter 1, Section 3.1.1). In Wason's view, this kind of task calls for deductive reasoning based on a strictly logical interpretation of conditionals. As Oaksford and Chater (2002: 197) observe, "the assumption that the selection task is deductive in character arises from the fact that psychologists of reasoning have tacitly accepted Popper's hypothetico-deductive philosophy of science". Roughly speaking, Popper held that in empirical research the relevant observations are those that falsify, not those that confirm, scientific hypotheses. And a scientific hypothesis is falsified when predictions that can be logically drawn from it do not accord with empirical observations. As a consequence, Popper maintained that scientists should aim at designing experiments which can provide evidence falsifying the hypothesis under examination. Similarly, if experimenters apply the hypothetico-deductive method to the selection task, the only normatively appropriate selection consists of checking the cards that may provide a falsifying instance of the conditional statement. As Oaksford and Chater (2002: 197) put it, "when viewed in these terms, the selection task has a deductive component, in that the subject must deduce logically which cards would be incompatible with the conditional statement". Since the subjects' most common choice (the combination of p and q cards) can only confirm the conditional statement, Wason (1968) concluded that most subjects display what he called a verification (or confirmation) bias, that is, they looked for instances confirming the conditional rule and neglected instances falsifying it. So, it is Wason's commitment to the Popperian hypothetico-deductive model that led him to reject confirmation as an appropriate strategy in the selection task.
In this section, I present just a few highlights of the history of the studies on the selection task, focusing on some of its modified versions in which most subjects make the correct selection (the p and not-q cards). The results of these studies show that subjects' selections vary in terms of how the problem presentation is formulated. In particular, when the selection task is framed in certain ways (about which we will speak in more detail in this section), we can expect that most subjects (usually over 70% of them) will give the correct response.
Experiments based on the standard version of the selection task were replicated more than once and the data obtained were almost the same (see, e.g., Wason & Johnson-Laird 1972), thereby confirming the robustness of Wason's findings. However, Wason and Shapiro (1971) found that using thematic content in the task helps subjects to make the correct selection. In an experimental study, subjects were asked to verify the conditional statement "If I go to Manchester [p], then I travel by car [q]" by examining four cards with the city destination on one side and the transport used on the other. The visible faces of the cards showed respectively: "Manchester" (p), "Leeds" (not-p), "Car" (q) and "Train" (not-q). The experimental results showed that this version of the task elicits a greater number of correct responses, that is, the p (Manchester) and not-q (train) cards (Wason & Shapiro 1971: 68). In order to explain this result, Wason and Shapiro proposed a hypothesis, which Griggs and Cox (1982) subsequently dubbed the "thematic facilitation effect", according to which the use of realistic material helps subjects consider different combinations of cards in order to identify the normatively appropriate solution.
Consider another modified version of the selection task. In a task devised by Philip Johnson-Laird, Paolo Legrenzi and Maria Sonino Legrenzi (1972), subjects were asked to imagine they were Post Office workers who had to check envelopes for violations of the conditional rule "If a letter is sealed [p], then it has a 5d stamp on it [q]". Four envelopes (instead of the usual cards) were presented in front of the subjects. Subjects could see the back side of two envelopes and the front side of the other two. As to the first two, one envelope was sealed (p) and the other was not (not-p). The other two had respectively a 5d stamp (q) and a 4d stamp (not-q) on their visible faces (both envelopes had an address printed on them). Approximately 90% of the subjects made the correct choice: they selected the sealed envelope (p) and the one with a 4d stamp (not-q) (Johnson-Laird et al. 1972). This finding was confirmed by subsequent studies using both Italian and British stamps with their respective units of currency (in both cases, the subjects were English). Johnson-Laird and his colleagues interpreted these results as confirming the "thematic facilitation effect" hypothesis. But subsequent studies have shown that using realistic materials does not always elicit the correct response in the selection task (see, e.g., Manktelow & Evans 1979; Pollard 1982).
In a replication of the experiment based on the postal task, Griggs and Cox (1982: 411-414) discovered that most American subjects tended to select the sealed envelope (p) and the one with the stamp cited in the conditional rule (q) (the experimenters replaced the British stamps with American ones). Griggs and Cox held that the significant difference between the choices made by British and American subjects was due to the fact that, while the former had had direct experience with that type of postal rule (indeed, a similar postal rule existed in British postal regulations before the seventies), American subjects had never experienced that rule, because there was no postal regulation in the United States concerning the amount of postage and the sealing of envelopes (Griggs and Cox 1982: 417). This explanation was confirmed by an experimental study by Evelyn Goldman, who presented the same task to a group of British subjects who, because of their age, had never encountered the postal rule cited (this type of postal rule was eliminated in Great Britain in the seventies). The results of Goldman's study showed that only a few subjects gave the correct response (reported by Griggs and Cox 1982: 418n). In order to account for such results, Griggs and Cox (1982: 417) proposed the "memory-cueing" hypothesis, according to which in certain versions of the selection task people give the correct response because they can retrieve from their memory relevant counter-examples to the rules to be tested.
Subsequent studies showed that the improvement in subjects' performance has to do with the nature and structure of the task, not with its thematic content or its degree of familiarity (see, e.g., Manktelow & Over 1991; 1995). In particular, some experimental studies have shown that when asked to reason about certain kinds of rules or regulations, most subjects give the normatively appropriate response. A well-known example of this kind of task is the "drinking age problem" (Griggs & Cox 1982). In this task, subjects were asked to pretend to be police officers in a bar, checking whether the following conditional rule is being obeyed:
If a person is drinking beer [p], then the person must be over 19 years of age [q].
In this experiment the cards represented drinkers, showing the drink on one side and the age of the drinker on the other. The visible sides of the cards were: "Drinking a beer" [p], "Drinking coke" [not-p], "16 years of age" [not-q] and "22 years of age" [q]. The correct choice is to turn over the cards whose visible sides are "Drinking a beer" [p] and "16 years of age" [not-q]. Griggs and Cox (1982: 414-417) found that around 75% of the subjects made the correct selection. While they thought that this result confirmed their "memory-cueing" hypothesis, other studies have demonstrated that this and other similar versions of the task elicit good responses because of their logical structure. In particular, subjects drastically improve their performances when the task is framed in such a way that what they are asked to check concerns permissions, prohibitions and obligations (see, e.g., Manktelow & Over 1991). Starting from these empirical results, many researchers have argued that people are good at reasoning with deontic conditionals because they possess domain-specific cognitive mechanisms which are specialized to handle permissions and obligations (see, e.g., Cheng & Holyoak 1985; Cummins 1996). As seen in the third chapter, Leda Cosmides and John Tooby have proposed the so-called Cheater-Detection Hypothesis in the context of their evolutionary perspective on the human mind (Cosmides 1989; Cosmides & Tooby 1992; see Chapter 3, Section 2). According to them, the empirical data obtained from experiments based on the selection task show that subjects have a particular mental module whose domain of application is restricted to conditional rules that involve the detection of cheaters. In their view, in the "drinking age problem" an underage person drinking beer can be taken to be a cheater. So, the structure of the task activates the mental module for cheater detection, leading to the correct selection of the cards.
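The deontic check just described can be made concrete with a minimal sketch. The card encoding and the `could_violate` helper below are my own hypothetical illustration, not part of Griggs and Cox's materials; the sketch simply flags those cards whose hidden side could reveal an underage beer drinker.

```python
# Cards in the drinking-age task: one side records the drink, the other the
# drinker's age. Only one side of each card is visible (None = hidden).
cards = [
    {"drink": "beer", "age": None},   # "Drinking a beer"   [p]
    {"drink": "coke", "age": None},   # "Drinking coke"     [not-p]
    {"drink": None,   "age": 16},     # "16 years of age"   [not-q]
    {"drink": None,   "age": 22},     # "22 years of age"   [q]
]

def could_violate(card, min_age=19):
    """True iff the hidden side could reveal beer drinking under min_age."""
    could_be_beer = card["drink"] in (None, "beer")
    could_be_underage = card["age"] is None or card["age"] < min_age
    return could_be_beer and could_be_underage

to_check = [c for c in cards if could_violate(c)]
# -> the "beer" card and the "16" card: the p and not-q selections
```

Only the beer card and the 16-year-old card can conceal a violation, so the check the rule mandates coincides with the selection most subjects actually make.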
It is noteworthy that, according to Cosmides and Tooby, the module for detecting cheaters can be applied to all the versions of the selection task which involve social regulations, but not to the standard one. Accordingly, they hold that correct performances on this kind of task should not be attributed to a strictly logical interpretation of the task.
It is widely agreed that in the selection task participants improve their performances when faced with conditional statements expressing duties or rights. By contrast, most subjects fail to check for the potential counter-examples "p and not-q" when evaluating indicative conditionals. Why do subjects behave differently in these two versions of the selection task? Why do people do better in the deontic version?
Setting aside the evolutionary psychologists' hypothesis, there is an alternative explanation as to why people do better when faced with deontic versions of the selection task. Indicative and deontic versions of the selection task have different logical structures: they present two different types of problems, which include different kinds of conditionals and instructions. In the indicative versions of the task, subjects are asked to choose the cards which show whether the conditional statement is true or not. In the deontic version of the task, instead, subjects are asked to select the cards which show whether the conditional rule is being violated, as opposed to deciding whether it is true or false. In the first case, subjects are explicitly told that the conditional statement may be true or false, and so they are not allowed to assume that the conditional statement is true. In the second case, the truth or falsity of the conditional statement is not called into question. Indeed, subjects must suppose that the conditional statement is a rule that truly exists and applies to the cases at issue (if a rule may be violated, it must be assumed to exist). As Botterill and Carruthers (1999: 121) observe, "subjects need to engage in an extra level of processing in the indicative selection tasks, because in order to solve them they have to ask themselves: 'Suppose this conditional applied. What would it rule out?'". In the deontic versions of the task, instead, this step is not necessary because subjects already know that the conditional rule applies. They have only to recognize whether it is being violated by examining the four cards. This is a fundamental difference between descriptive and deontic versions of the selection task. In particular, the indicative version presents a series of problems which the deontic one does not. Because of its complexity, in the next sections I will focus on the indicative version of the task.
Indeed, some questions arise about this version of the task: what are subjects doing in it? What kind of strategy do they employ? Are their most common responses as irrational as many psychologists have claimed?
Dan Sperber, Francesco Cara and Vittorio Girotto (1995) have proposed an explanation of subjects' selections in the selection task based on a general theory of linguistic comprehension, that is, Relevance Theory (Sperber & Wilson 1995). They have argued that the selection task does not call for reasoning or other similar activities, but is simply a task of selection guided by considerations of relevance. According to Relevance Theory, an utterance raises the hearer's expectations of relevance, so that she tries to find an interpretation that may satisfy these expectations[7]. The relevance of an utterance is the result of a trade-off between its cognitive effects, that is, the implications that can be drawn from it within the context wherein it is processed, and the processing effort required to derive those implications.
Let us consider how Sperber and his collaborators' relevance-based explanation accounts for subjects' performance on selection tasks with indicative conditionals. To begin with, they assume that in this version of the task subjects interpret the conditional rule as a universally quantified conditional statement, that is, ∀x(Px → Qx). If understood in this way, the conditional rule is not directly testable. So, if subjects aim at testing the conditional rule, they must infer testable consequences from it. At this point, Sperber and his collaborators argue that three main cases of interpretation of the conditional rule are possible (Sperber et al. 1995: 54-58). In the first case, the conditional is interpreted "biconditionally", that is, as implying its converse (∀x(Qx → Px)). In the second case, the conditional rule is interpreted as an existentially quantified conjunction, that is, ∃x(Px & Qx). These two interpretations lead respectively to selecting the p card alone and the p and q cards. Although the two interpretations are logically inappropriate, they may be regarded as conversationally rational in ordinary discourse. Indeed, as Sperber and his collaborators (1995: 55) point out, a universally quantified conditional statement that does not have at least some instances will be considered irrelevant in everyday conversation. In the third case, the conditional rule is interpreted either as implying a negative existentially quantified statement of the form not-∃x(Px & not-Qx) or as contradicting a positive existentially quantified statement of the form ∃x(Px & not-Qx). These two interpretations both lead to the logically correct answer, that is, the selection of the p and not-q cards. Sperber and his collaborators (1995: 56) argue that, though these two interpretations are logically equivalent, "they are not computationally or representationally identical".
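Two of these selection patterns can be made concrete with a toy enumeration. The sketch below is not Sperber, Cara and Girotto's model; it merely assumes, for illustration, that subjects turn exactly those cards whose hidden face could reveal a verifying P-and-Q instance (for the existential-conjunction reading) or a refuting P-and-not-Q instance (for the denial-of-existence reading). The card names and helper functions are hypothetical.

```python
# Each card has a letter side (p or not-p) and a number side (q or not-q);
# only one side is visible. CARDS maps the visible face to its dimension.
CARDS = {"p": "letter", "not-p": "letter", "q": "number", "not-q": "number"}

def possible_completions(visible):
    """All (letter, number) pairs a card with this visible face could be."""
    if CARDS[visible] == "letter":
        return [(visible, n) for n in ("q", "not-q")]
    return [(l, visible) for l in ("p", "not-p")]

def is_pq(ltr, num):      # a verifying P-and-Q case
    return ltr == "p" and num == "q"

def is_p_notq(ltr, num):  # a refuting P-and-not-Q case
    return ltr == "p" and num == "not-q"

# Existential-conjunction reading Ex(Px & Qx): turn a card iff its hidden
# face could reveal a verifying P-and-Q instance.
exists_pq = [c for c in CARDS
             if any(is_pq(*f) for f in possible_completions(c))]

# Denial-of-existence reading not-Ex(Px & not-Qx): turn a card iff its
# hidden face could reveal a refuting P-and-not-Q instance.
no_p_notq = [c for c in CARDS
             if any(is_p_notq(*f) for f in possible_completions(c))]

print(exists_pq)   # -> the common "confirming" selection: p and q
print(no_p_notq)   # -> the logically correct selection: p and not-q
```

Under these simplifying assumptions, the existential reading yields the modal p-and-q selection while the denial-of-existence reading yields the p-and-not-q selection, in line with the interpretations described above.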
Accordingly, in order to elicit correct responses in the indicative version of the selection task, an experimenter should make subjects interpret the conditional statement as denying the existence of p-and-not-q cases. Clearly, because it involves two negations, this interpretation would not be so easily accessible in ordinary situations. According to Sperber and his collaborators (1995: 58-60), however, it is empirically demonstrable that if the experimenter manipulates the subjects' expectations about cognitive effects and the effort required to achieve them, by varying the information available in the experimental context and in particular the content of the conditional statement, the appropriate interpretation will be readily accessible to the subjects. As regards the standard version of the task, Sperber and his collaborators (1995: 52) hold that "the artificiality of the task is so overwhelming as to discourage any but the lower expectations of relevance". As a consequence, according to the relevance-based analysis, in such cases the subjects' comprehension is guided by considerations of least processing effort, leading to the selection of the p card alone or of the p and q cards.
Michael Oaksford and Nick Chater (1994; 1996) have attempted to account for subjects' alleged normatively inappropriate responses in the indicative version of the selection task on the basis of what they call the Bayesian probabilistic approach to confirmation. According to their analysis, the selection task does not require deductive reasoning; rather, it calls for hypothesis testing involving optimal data selection. In their view, the subjects' most common response can be seen as "optimizing the expected amount of information gained by turning each card" (Oaksford & Chater 1994: 609).
Let me begin with a clarification. In speaking of probability with reference to the selection task, one should assume that subjects take the four cards of the task to be a sample from a larger set. Consider then a new version of the selection task in which subjects are asked to evaluate whether the usual indicative conditional ("if p, then q") is true or false when, for example, 50 cards (instead of the usual four) are placed in front of them. Suppose, moreover, that the number of cards displaying not-q is far greater than the number displaying q. Can we apply Popper's hypothetico-deductive model to this version of the selection task? Will it give us any normative recommendation about how many cards we should examine in order to establish whether the conditional statement is true or false? The usual selection, checking the p and not-q cards, does not seem to be sufficient to test the conditional statement. Raymond Nickerson (1996) has compared this situation to the raven paradox (Hempel 1945): if subjects were asked to test the hypothesis "if it is a raven, then it is black", they would probably look for ravens and examine whether they are black. In principle, there is another strategy that should equally increase their degree of confidence in that hypothesis, namely, to consider non-black things and examine whether or not they are ravens. So, the paradox here is that any time you find a non-black non-raven (e.g., a white shoe), as well as a black raven, your degree of confidence in the truth of "All ravens are black" should increase. This, however, seems puzzling. 
One well-known solution to this paradox was proposed by the philosopher John Mackie (1963), who held that in such cases we should ask not whether a piece of evidence increases our confidence in the hypothesis under examination, but rather which of two possible alternative hypotheses it supports: one stating that the property "raven" is independent of the property "black", the other stating their dependence. According to Oaksford and Chater, a similar line of reasoning can be assumed when discussing subjects' selections in the indicative version of the selection task.
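Mackie's dependence-versus-independence comparison can be illustrated numerically. The following is a minimal Bayesian sketch with illustrative parameter values of my own choosing (they are not from Mackie or Hempel): under the dependence hypothesis every raven is black, under the independence hypothesis the two properties are unrelated, and both hypotheses share the same marginal frequencies of ravens and black things.

```python
# Minimal Bayesian sketch of the raven paradox (illustrative numbers).
# H1: "black" depends on "raven" (every raven is black);
# H0: the two properties are independent.
# Marginals are held fixed across hypotheses: P(raven) = r, P(black) = b.

r, b = 0.01, 0.1  # assumption: ravens are rare, black things fairly rare (b >= r)

def cell_prob(raven, black, dependence):
    """Joint probability of observing an object with the given properties."""
    if dependence:
        # H1: P(black | raven) = 1; P(black | not-raven) = (b - r)/(1 - r),
        # chosen so that the marginal P(black) is still b.
        p_black = 1.0 if raven else (b - r) / (1 - r)
    else:
        # H0: independence.
        p_black = b
    p_raven = r if raven else 1 - r
    return p_raven * (p_black if black else 1 - p_black)

def bayes_factor(raven, black):
    """Likelihood ratio P(obs | H1) / P(obs | H0); values > 1 confirm H1."""
    return cell_prob(raven, black, True) / cell_prob(raven, black, False)

bf_black_raven = bayes_factor(raven=True, black=True)    # = 1/b = 10.0
bf_white_shoe = bayes_factor(raven=False, black=False)   # = 1/(1-r), about 1.01
print(bf_black_raven, bf_white_shoe)
```

On these assumptions a white shoe does confirm "All ravens are black", but only by a negligible factor of about 1.01, whereas a black raven confirms it by a factor of 10; the paradox is thus defused by comparing how strongly each observation discriminates between the two hypotheses.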
Michael Oaksford and Nick Chater have adopted an alternative model of hypothesis testing in determining the correctness of the subjects' most common response in the selection task, namely, the Bayesian probabilistic model of confirmation (see Horwich 1982). This model fits well with the solution proposed by Mackie with regard to the raven paradox. A fundamental assumption that Oaksford and Chater integrate into this model is what they call "the rarity assumption", according to which people usually think that the probabilities of both p and q are low. That is, "the categories that function in everyday hypotheses about the world apply only to very small subsets of objects" (Oaksford & Chater 2003: 293). On the basis of this assumption and the Bayesian probabilistic model of confirmation, Oaksford and Chater have developed a framework for analyzing subjects' responses to both standard and modified versions of the selection task. According to their model, when faced with the selection task, subjects try to reduce their uncertainty about whether the conditional statement under examination is true by turning over the cards which maximise information gain, that is, the cards which maximally reduce their uncertainty about the truth of the conditional statement. In doing so, they consider two alternative hypotheses: (i) that the conditional statement "if p, then q" is true (the consequent [q] is dependent on the antecedent [p]) and (ii) that it is false (the antecedent [p] and the consequent [q] are independent). Just as a scientist may have two or more alternative hypotheses to examine and has to choose the experiments which provide the greatest "expected information gain" in order to decide among them, so too in the selection task subjects have to select the cards which are likely to provide the greatest expected information gain in order to decide between the two hypotheses mentioned above. 
So, when faced with the selection task, subjects would calculate the expected information gain of each card, which, Oaksford and Chater explain, amounts to the difference between the "prior" uncertainty about the dependency hypothesis, that is, the uncertainty the subjects have before they have decided which cards to turn over, and the uncertainty about the same hypothesis after the card selections have been made. Supposing that subjects assume that p and q are rare (the rarity assumption), Oaksford and Chater calculate that the order of the expected information gain of turning over each card is:
E(Ig(p)) > E(Ig(q)) > E(Ig(not-q)) > E(Ig(not-p))
As to the standard version of the selection task, this order corresponds to the order of the subjects' usual choices. Indeed, as we already know, most subjects choose the p and q cards or the p card alone. Where the rarity assumption does not hold, such as in some thematic versions of the task, the order of expected information gain just described changes. As Oaksford and Chater (1994: 610-614) have demonstrated, in such experimental situations the subjects' card selections reflect that changed order.
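The ordering can be reproduced with a small numerical sketch. The parameter values below are illustrative assumptions of mine, not Oaksford and Chater's fitted values; the model follows their general recipe: two equiprobable hypotheses (dependence, with P(q|p) = 1, versus independence), shared marginals P(p) = a and P(q) = b, and expected information gain measured as the expected reduction in Shannon entropy over the two hypotheses.

```python
import math

# Numerical sketch of Oaksford & Chater's optimal-data-selection idea.
# Illustrative parameters (my assumption, not their fitted values):
a, b = 0.1, 0.2  # rarity: P(p) and P(q) are small, with b >= a

# Two equiprobable hypotheses about the rule "if p then q":
#   MD (dependence):   P(q | p) = 1,  P(q | not-p) = (b - a)/(1 - a)
#   MI (independence): P(q | p) = P(q | not-p) = b
# Both keep the marginals P(p) = a and P(q) = b.

def outcome_probs(card):
    """For each hypothesis (MD, MI), the probability that the hidden
    side of the given card shows the 'positive' value (q or p)."""
    if card == "p":      return 1.0, b                  # P(q | p)
    if card == "not-p":  return (b - a) / (1 - a), b    # P(q | not-p)
    if card == "q":      return a / b, a                # P(p | q), by Bayes
    if card == "not-q":  return 0.0, a                  # P(p | not-q)

def entropy(weights):
    return sum(-w * math.log2(w) for w in weights if w > 0)

def expected_gain(card):
    """Prior entropy over {MD, MI} minus expected posterior entropy."""
    d, i = outcome_probs(card)
    prior = entropy([0.5, 0.5])  # 1 bit
    exp_post = 0.0
    for pd, pi in [(d, i), (1 - d, 1 - i)]:  # the two possible outcomes
        p_outcome = 0.5 * pd + 0.5 * pi
        if p_outcome > 0:
            posterior = [0.5 * pd / p_outcome, 0.5 * pi / p_outcome]
            exp_post += p_outcome * entropy(posterior)
    return prior - exp_post

gains = {c: expected_gain(c) for c in ["p", "q", "not-q", "not-p"]}
# Under rarity: E(Ig(p)) > E(Ig(q)) > E(Ig(not-q)) > E(Ig(not-p))
print({c: round(g, 3) for c, g in gains.items()})
```

With these rarity-respecting parameters the p card is by far the most informative, followed by q, then not-q, then not-p, matching the frequency order of the subjects' usual choices. Raising P(q) towards certainty reverses the q/not-q ranking, which is how the model accommodates the modified versions of the task.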
Consequently, according to Oaksford and Chater, the card choices that have been taken to be evidence of human irrationality should be considered the product of a highly rational reasoning strategy. In particular, this strategy fits well with everyday situations, where rarity is usually the rule, not the exception.
Although they have proposed different explanatory models of subjects' selections in the selection task, Oaksford and Chater and Sperber and his collaborators come to the same conclusion: namely, subjects do not choose the not-q card because they consider it irrelevant with respect to the problem they are faced with. On either line of argument, giving such a logically inappropriate response is rational. Accordingly, the most common response in the standard version of the selection task, usually interpreted as evidence of human irrationality, is reinterpreted as reflecting efficient cognitive strategies operating in a rational manner. Sperber and his collaborators (1995: 90) have maintained that subjects accused of providing normatively inappropriate responses are in fact giving pragmatically appropriate responses. On this view, subjects' performances on the standard version of the selection task should be considered rational, and this can be confirmed by looking at the pragmatics of the situation. In conversationalist terms, subjects should be regarded as conversationally rational (as is also claimed by Hilton and Politzer; see Chapter 5, Section 1.2). On the other hand, Oaksford and Chater (1994: 609) have maintained that the selection task should be interpreted as a problem of inductive hypothesis-testing involving optimal data selection. Seen in this way, selecting the p and q cards becomes the most appropriate choice, resulting from the use of a rational strategy in the field of inductive reasoning. Consequently, people who select the p and q cards in the standard version of the selection task should be regarded as rational hypothesis testers.
Remember that in a previous section (see Chapter 5, Section 2) I proposed a two-step normative framework, to be applied to the subjects' task understanding and to their responses. Such a framework assumes that one's reasoning performance should be evaluated in terms of the legitimacy of one's task interpretations and the appropriateness of the reasoning strategy adopted. Interestingly, while providing different arguments about the normative appropriateness of the subjects' responses, Relevance Theory and Oaksford and Chater's approach focus respectively on the subjects' comprehension of the task and on the reasoning strategy they use. In particular, as seen above, their analyses seem to provide robust considerations in support of seeing both the subjects' understanding of the selection task and the reasoning processes they adopt as legitimate. On this view, subjects are making rational selections. Looking closely at both the relevance-based explanation and the Bayesian analysis of the selection task, however, it seems to me that there is something wrong with their explanations of the subjects' most common response in the standard version of the selection task.
Let me begin with the relevance-based explanation. According to Sperber and his collaborators (1995), when faced with the selection task, subjects choose the cards guided by considerations of relevance, without employing any reasoning strategy. As seen above, given that the standard version of the selection task gives rise to very low expectations of relevance in the subjects, Sperber and his collaborators have argued that the subjects' most common responses are the best they can give in such a situation. The appropriateness of the subjects' responses is evaluated on the basis of a hypothetical cost-benefit analysis of their cognitive economy, about which we cannot have any direct evidence. According to Sperber and his collaborators (1995: 62-89), however, this analysis is indirectly confirmed by a series of experiments they have devised. What they claim to have demonstrated in these experiments is that by changing the content and context of the conditional statement and the problem presentation in appropriate ways, the expected effects and the effort required to solve the task can be manipulated so as to prompt subjects to provide either correct or incorrect responses according to the experimenter's interests. While the standard version of the selection task discourages any but the lowest expectations of relevance in the subjects, some of the modified versions of the task give rise to strong expectations of relevance in them. In particular, such tasks make salient to the subjects that checking whether there are "p and not-q" cases, that is, counterexamples to the conditional rule, is more appropriate than checking whether there are "p and q" cases. For example, subjects were presented with a story in which the leader of a secret religious sect, dubbed Haré Mantra, "was accused of having had some of his sect's virgin girls artificially inseminated" (Sperber et al. 1996: 63). 
While the sect leader's goal is to create an elite of "Virgin-Mothers", he jokingly claims that "the women of his sect are, without exception, like any other women:
If a woman has a child [p], she has had sex [q]" (Sperber et al. 1996: 63).
Subjects are asked to imagine that they are journalists trying to write an article on the sect. They are told that the women who are part of the sect have undergone a gynaecological examination, but the only evidence about its results accessible to them consists of four cards left on the gynaecologist's desk, each recording, for one woman, whether she has children and whether she has had sex. These cards are half covered, so that on each card only part of the information is visible.
Subjects are then told that, while the doctor's back is turned, they can take the opportunity to uncover some of the cards. In particular, they are asked to indicate (by circling them) the cards that should be uncovered in order to find out whether what the leader of the sect says ("If a woman has a child, she has had sex") is true, as far as these four women are concerned, and to indicate "only those cards that it would be absolutely necessary to uncover" (Sperber et al. 1996: 63). In this modified version of the task, since checking whether there are virgin mothers in the sect is more salient than checking whether there are normal mothers, focusing on whether the "p and not-q" case (a woman who has children and has had no sex) occurs is easier than in the standard version. As predicted by the authors' relevance-based framework, 75% of the subjects selected the p ("children: yes") and not-q ("sex: no") cards (Sperber et al. 1995: 62-66). The problem with this and other similar experiments devised by Sperber and his collaborators is that strong cues are provided in the experimental situation in order to prompt the subjects to select the correct counterexample. In some cases, the introductory story implicitly suggests the counterexample to the conditional rule which has to be tested. Since the task is made so easy, the interpretation of the resulting empirical data seems highly controversial. Indeed, other researchers have shown that these empirical results may be explained within other theoretical frameworks, which assume that people really are reasoning in the selection task and not merely making a selection guided by considerations of relevance (see, e.g., Fiddick et al. 2000; Osman & Laming 2001). Going back to the interpretation of the standard version of the task, it seems to me that the empirical results presented by Sperber and his collaborators are not enough to support their interpretation of the subjects' most common selection.
I now turn to Oaksford and Chater's analysis. In order to explain the rationality of subjects' selections in the standard version of the selection task, Oaksford and Chater make some fundamental background assumptions about how the task is interpreted. Indeed, they assume both that subjects approach the task as a problem in inductive hypothesis-testing and that subjects take vowels and even numbers to be relatively rare. On these assumptions, the four cards presented in front of the subjects are taken to be samples belonging to four general classes (a set of vowels, a set of consonants, etc.). Consider now an example by Oaksford and Chater (1994: 609). Imagine that a subject is evaluating the conditional statement "if you eat tripe [p], then you feel sick [q]" by examining four groups of people representing the four usual options: people who have eaten tripe (p), people who have not eaten tripe (not-p), people who are sick (q), and people who are not sick (not-q). In order to verify whether the conditional statement is true, the subject would probably check whether people who have eaten tripe (p) are sick. However, she can also adopt another effective strategy, namely, examining whether people who are sick (q) have eaten tripe. On the contrary, checking whether people who are not sick (not-q) have eaten tripe seems an unreasonable choice, because this group of people will be too large. So, Oaksford and Chater hold that the first two strategies will be more informative than the last one. And these are the strategies that Oaksford and Chater attribute to the subjects in the standard version of the selection task. As Stanovich (1999: 197) points out, however, they thereby assume that in the standard version of the selection task subjects are thinking "in terms of sampling from classes of cards and have implicit hypotheses about the relative rarity of these classes". 
In doing so, Oaksford and Chater are adding details and context to the problem presentation which are not present in the explicitly presented information. Indeed, nothing at all is said in the situationally available information about groups or classes of cards.
It is noteworthy that 4% of the subjects select the p and not-q cards in the standard version of the selection task. This means that they have interpreted the task in the logically correct way (and not in the way assumed by Oaksford and Chater). So, who is wrong? The subjects who assume that the four cards of the task are a sample from a larger set of cards, or those who interpret the task correctly? Or both? According to my contextualist criterion, if the experimental context did not make explicit that the conditional rule concerns only the four cards, both of these interpretations might be considered correct. However, this is not the case: the conditional rule as explicitly stated concerns only the four cards in the task. As a result, subjects who treat the selection task as an inductive hypothesis-testing task are wrong. As said above, they have been explicitly told that the conditional rule applies only to the four cards, not to some wider set of which they might be a sample. In other words, if subjects interpret the task the way Oaksford and Chater propose, their interpretation of the task is not legitimate: it does not match the information as explicitly presented in the experimental context. On this view, the difficulty for subjects lies not in their reasoning badly, but in their misconceiving the situational context. They are using a rational strategy in the wrong situation. The upshot is that Oaksford and Chater's analysis shows only that a certain percentage of subjects is rational according to their interpretation of the situational context. Adopting Oaksford and Chater's analysis, we might maintain that, relative to their ordinary goals, the subjects' most common response can be considered rational; their real cognitive mistake resides in their representation of the task context, which does not match the information as explicitly presented in the experimental context. 
Indeed, as seen above, most of the subjects tend to evaluate the indicative conditional with reference to a wider domain of evidence than the narrow one provided by the experimental context, as they do in ordinary life.
In this chapter, I have sketched out a context-sensitive consequentialist approach to rationality assessment. My approach provides a general normative framework which, I have maintained, is applicable both to subjects' task interpretations and to their responses. The two-step normative framework specifies two kinds of constraints that operate within the experimental context. On the one hand, the subjects' understanding of the reasoning problem is always constrained by the information explicitly presented in the experimental situation. On the other hand, as consequentialists hold, the evaluation of success in reasoning cannot be separated from the evaluation of success in achieving goals. Accordingly, the information explicitly presented in the experimental context places constraints on the goals and frames that it will be appropriate for the reasoner to activate when dealing with the task, which in turn determine the type of reasoning strategy that will be effective and efficient. In the last part of the chapter, I have applied this approach to the data on human reasoning obtained from a well-known experimental reasoning task, the Wason selection task. There is some ambiguity in the analyses of the subjects' responses in the standard version of the task examined in this chapter: selections of the p and q cards and of the p and not-q cards, respectively, may be considered appropriate depending on the interpretation of the reasoning problem that experimenters attribute to subjects. Such a way of approaching the subjects' performances leaves the assessment of their rationality basically undecided, because there is no real agreement about which response is the correct one. In order to resolve this ambiguity, we need a more complex and refined normative background against which to assess the subjects' reasoning performances than those usually assumed. 
As we have seen in this chapter, my two-step normative framework is able to resolve the ambiguity by distinguishing the legitimacy of subjects' task interpretations from the normative appropriateness of their responses. According to my framework, in order to regard the subjects' most common response in the standard version of the selection task as rational, we have to assume that subjects misinterpret the reasoning problem. Indeed, according to the researchers who have maintained that the most common response should be regarded as rational, subjects frame the reasoning problem in a wider context, in a way, however, which is inappropriate with respect to the situationally available information. What most subjects seem to do is frame the problem in a way that makes sense of the situationally available information in the light of their ordinary goals. Within this ordinary frame, subjects then tend to evaluate the indicative conditional with reference to a wider domain of evidence than the narrow one provided by the experimental context. But such an interpretation of the task is illegitimate, because the problem presentation explicitly states that the conditional rule concerns only the four cards in the task. So, according to my two-step normative framework, while subjects who check the p and q cards rely on a rational reasoning strategy (at least for ordinary situations), their framing of the reasoning problem is illegitimate because it does not match the information available in the experimental context.