Visible thinking and complex examination questions

No students were permanently harmed during this lesson.

Since the introduction of the new A-level psychology specifications, complex essay questions have become a prominent feature of the examination landscape. By ‘complex’ questions, I mean those that impose several distinct skill requirements that must be addressed simultaneously in order to attract credit. For example, the Edexcel 2018 Paper 2 contained the following question in the ‘Criminological Psychology’ section:

Kylie witnessed a crime and had to go to the police station for an interview. The crime involved a robbery of a shop in a busy shopping centre. Kylie was walking past the shop with her friends when she heard the shopkeeper shouting for help, as the thief ran out of the shop. The police carried out a cognitive interview to gather as much information as possible from Kylie about what she witnessed.

To what extent would the cognitive interview be effective in gathering accurate information from Kylie about the crime she witnessed? You must make reference to the context in your answer. (16)

In order to answer this question effectively, the candidate must evaluate the cognitive interview (CI) in the context of the crime witnessed by Kylie. This means she has to show knowledge and understanding (AO1) of how CI might be used with Kylie (AO2) and make judgements about its probable effectiveness in that context (AO3). This is rather more demanding than the ‘Describe and evaluate cognitive interviewing’ type of question that used to prevail at A-level.

When these types of question started appearing, my initial response was that they were needlessly difficult and represented nothing more than a new set of hoops I needed to train my students to jump through. However, a couple of years into the new specifications, I’m now more inclined to welcome them as a challenge for us and our students to embrace. After all, our task as psychology educators is to support our students in attaining mastery of core psychological concepts, research, methodologies and ways of thinking. Complex questions are a more valid test of mastery than straightforward ‘describe and evaluate’ questions, as they are not amenable to a brute-force ‘rote learn the facts and the criticisms without understanding’ approach. More pertinently, it seems to me that by using these types of question as a teaching tool, we can support our students in becoming better psychological thinkers.

Because the context material that accompanies complex questions cannot be predicted in advance, such questions require students to construct their responses on the fly, under examination conditions. In other words, they have to think. As Dan Willingham (2010) memorably points out, thinking is difficult and students are disinclined to do it even under ideal conditions. The examination situation imposes substantial psychological demands that reduce students’ capacity to think effectively (Putwain & Symes, 2018). Consequently, it is our responsibility to teach our students to think in the right ways long before the exam, and to support them in acquiring a degree of automaticity that will allow them to devote their already-stretched cognitive resources to engaging with the content of the question.

The trouble with thinking is that you can’t see it. That makes it difficult for us to explain the sorts of thinking we want our students to do. It also makes it difficult for us to access our students’ thinking processes so we can check whether they’re being directed in the right way. In recent years, I’ve drawn a great deal on Ron Ritchhart’s notion of making thinking visible to support my students in learning how to think (see Ritchhart et al., 2011). Ritchhart’s approach relies on manipulables (e.g. sticky notes) to represent concepts and on spatial organisation to represent relationships between them. Together with simple, repeatable dialogues and structures, these present a powerful toolbox for reducing unhelpful cognitive load, establishing transferable routines for recurring subject-specific thinking situations and getting students’ thinking out in the open, where we can see it.

A visible thinking routine for complex essay questions

Here’s how I’ve been using visible thinking to teach students how to address complex questions. I lay the groundwork by presenting a question and asking the students to consider how they should address it and what an answer should do.  For example:

Joe has been convicted of criminal damage. The sentencing magistrate noted that Joe had been arrested a number of times for similar acts and had a record of disruptive behaviour going back to his school days. The magistrate accepted that most teenagers get into trouble but that most seem to ‘grow out of it’, whilst Joe had not. When asked why he had committed this crime, Joe said, ‘mostly because I’m bored…but sometimes things just wind me up. That day I was supposed to be meeting my mates but the bus didn’t come so I just lost it a bit, smashed the bus stop up.’ Joe’s father and older brother both have a similar history of antisocial behaviour and offending.

Evaluate personality theory as an explanation of Joe’s offending. (16)

I encourage my students to adopt a four-question routine to set themselves up to address the demands of the problem:

  1. What do I know about this topic?
  2. What’s relevant in the context?
  3. What am I making judgements about?
  4. How can I justify those judgements?

This type of subject-specific metacognition is best taught by modelling, in my experience. The aim is for the students to understand that, in order to address the question satisfactorily, they need to form a principled judgement (AO3) of whether personality theory (AO1) is a valid explanation of Joe’s offending (AO2).

Students are then given sheets of A3 paper and three colours of sticky note (in my case, green, blue and orange). Pairs or threes work well. The visible thinking routine is as follows:

  • First, students recall as many facts as they can that represent the knowledge and understanding required to address the question. They write one fact per green sticky note. These are collected in the centre of the A3 sheet, arranged so that more closely related ideas are grouped together on the page.
  • Second, the students are asked to read the context material carefully and look for specific things in the text that relate clearly to the facts/ideas on the green sticky notes. Each of these is noted on a blue sticky note and added to the sheet, near to the relevant facts but concentrically outward.
Different colours denote different skill elements/assessment objectives.
  • Third, the students are asked to identify material relevant to evaluating personality theory: for example, supporting or challenging research findings, conceptual strengths and weaknesses, and so on. Each of these is recorded on an orange sticky note, again placed near to the relevant application (blue) and knowledge (green) notes.

The students are encouraged to keep thinking and recalling more relevant facts, applications and evaluative points throughout the activity, as each point made and recorded may cue recall of other material or provoke new links between the ideas, deepening understanding.

Lines of reasoning flow from the centre towards the edge.

The fact that all the ideas are present on the page reduces cognitive load, helps the students think more clearly, and tells the teacher where they can most incisively intervene. The flexible nature of sticky notes allows the students to think and rethink by trying out different positionings and juxtapositions of ideas. The different colours allow the students to keep track of the different skill demands of the question, helping them to spot gaps and deploy material effectively. By now the students should be in a position to trace lines of reasoning about the question by working from the middle to the edge.

  • The final step is for students to reorganise the sticky notes into a linear plan from which they could write their response.  The different coloured notes help here, prompting the students to organise their writing into balanced paragraphs that address all the question requirements.
The linear plan supports thinking about the sequencing of material.

I’ve only recently started using this approach with my Year 13s in a consistent way. They have commented positively on how it is helping them keep track of task requirements and organise their ideas before writing. Of course, the long-term intention is to remove the physical placeholders and the prompts from the teacher that support the process, leaving a purely mental routine that the students can use independently and without prompting. My feeling is that the systematic withdrawal of the various elements will be a fairly straightforward thing to plan, and, at each stage, I can draw attention to what I’m removing and why (e.g. ‘last time I gave you three colours of sticky notes but this time I’m not…’) so that the students can establish a conscious rationale for their own thinking when approaching this type of problem.

It’s much more complicated to explain this approach than it is to do it in practice; I hope the accompanying photographs make this clear. I believe that it has the potential to help more of my students access the higher essay mark bands in their examinations. More importantly, I also believe that it can play a part in helping my students to become better thinkers in and about psychology.

Thanks

The concentric planning approach on which this visible thinking routine (VTR) draws was developed collaboratively with Charlotte Hubble.

References

Putwain, D.W. & Symes, W. (2018). Does increased effort compensate for performance debilitating test anxiety? School Psychology Quarterly, 33(3), 482-491.

Ritchhart, R., Church, M. & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding and independence for all learners. San Francisco, CA: Jossey-Bass.

Willingham, D.T. (2010). Why don’t students like school? A cognitive scientist answers questions about how the mind works and what it means for the classroom. San Francisco, CA: Jossey-Bass.

Resources: two lessons on interviewing witnesses and suspects

Photo: Krystian Olszansky (Creative Commons licence)

Here are two lessons on interviewing witnesses (cognitive interview) and suspects (ethical interview). Each lesson assumes you have set advance reading from whichever textbook or other source you are using. Lesson one starts with students making comparisons between standard police interviews and cognitive interviews using this visible thinking routine for comparing. The main application activity is to write a letter to a chief constable persuading her to adopt cognitive interviewing in her force. I’ve found that some students get uptight about writing an essay because it smells like assessment, and they do a better job if they write a letter instead, even though the same skills are required. The slideshow gives a structure for the lesson.

Lesson two starts with the use of the same VTR. This is followed by an analysis task using this recording of a police suspect interview.  Finally, students work up an evaluation using a handout of evidence. A slideshow structures the lesson.

Scaffolding and differentiating for evaluative writing

Evaluative writing is probably the hardest thing we teach, and it’s always a work in progress. Since I started teaching psychology (some 20-odd years ago) I’ve tried to teach written evaluation many different ways and never really been satisfied with the result. Part of the problem is that I have no recollection of actually being taught to do it. Clearly, this must have happened, as it seems very unlikely that I worked out how to evaluate on my own, and it’s certainly the case that I wasn’t always able to do it. But I suspect it was a process that happened subtly, over the course of many interactions with many teachers and over a long time. I’m also fairly certain I only started to learn how to do it during my undergraduate degree (I do remember slaving over an essay on autism in my first year, which my early mentor Brown gave a First whilst damning it with faint praise as ‘a series of bright aperçus’; I thanked him later). Contrary to popular opinion, the A-levels we did in those days did not make significant demands on critical thinking, and pretty good performance was guaranteed to anyone who could read a syllabus, was sufficiently skilled in memorising large chunks of material verbatim and could write quickly.

However, the specifications we teach now, and the exams for which we must prepare our students, make pretty stiff demands on students’ capacity to write critically in response to questions that are increasingly difficult to predict. The new Edexcel specification (I can’t speak for the others) has upped the ante on this even further, as its rules for the phrasing of questions limit essay questions to a single command term (e.g. ‘Evaluate…’) even when students are expected to address several different assessment objectives in their responses. In contrast to the questions they used to face (e.g. ‘Describe and evaluate…’), where it was always possible for students to score marks by addressing the ‘knowledge and understanding’ element even if the critical thinking aspect was ropey, the new arrangements mean that students must address the main assessment objective all the way through their response while simultaneously addressing a subsidiary assessment objective that is only implied by the question. Consequently, it is more important than ever to teach evaluative writing early in the course, and as quickly and thoroughly as we can.

But, as I said, I can’t remember learning to do it. Furthermore, evaluative writing is, for me (and presumably for most other people who do it a lot), procedural knowledge, so what we are doing when we evaluate is not easily inspected consciously: we simply evaluate. As a result, I have spent a fair bit of my career trying to teach something very important with neither a clear idea of what it consists of nor a principled understanding of how it develops. In the absence of these things it is very difficult to communicate to students what the goal is or to support them in moving towards it effectively. The risk then is that ‘evaluation’ gets reduced to a set of theory-specific ‘points’ for students to learn more-or-less verbatim. This is unsatisfactory because (1) it doesn’t equip them to meet the demands of the current assessment scene; and (2) we’re supposed to be teaching them to think, dammit. However, this is what I have done in the past and I suspect I’m not alone.

I started making more progress a few years ago when I began to use the SOLO taxonomy (Biggs & Collis, 1982) and the Toulmin Model of Argumentation (Toulmin, 1958) as the basis for teaching evaluation. I won’t unpack these ideas here (although the SOLO taxonomy provokes lively debate, so I might come back to it in a future post) but together they lead to a model of evaluative writing in which the student needs to:

  • Identify the claims made by a theory;
  • Explain the reasons why each claim should be accepted or rejected;
  • Present evidence that supports or challenges the reasons;
  • Develop their argument, for example by assessing the validity of the evidence or by comparing the theory with a competing one.

This might sound obvious to you, but it has really helped me think clearly about what students need to learn and what the barriers to learning it are likely to be. The fundamental block is where a student has a naive personal epistemology in which they regard theories as incontrovertible statements of fact (see Hofer & Pintrich, 2002). In that case, evaluation can only be experienced as a mysterious and peculiar game (my own research on epistemic cognition suggests that this may frequently be the case). We can start to address this by presenting psychological knowledge using a language of possibilities and uncertainty (this is particularly salient to me as I teach in a girls’ school; Belenky et al., 1986) and by continually returning to the idea that scientific theories are maps of the world and the map is not the territory (NB this is a job for the long haul). Other barriers are where:

  1. The student cannot identify the specific claims made by a theory;
  2. The student cannot identify evidence that relates to these claims;
  3. The student cannot articulate reasons why the evidence supports or challenges the claims;
  4. The student cannot introduce principled judgements about the validity of the evidence.

Again, all this might seem obvious, but where a student has difficulty writing good evaluation it gives a starting point for diagnosing the possible problem and therefore intervening successfully. My own experience with Year 12 and 13 students (OK, not particularly scientific, but it’s all I’ve got) suggests that the major sticking points are (1) and (3): the former usually because the theory itself has not been well understood, and the latter because the student needs to identify what the theory predicts and reconcile this with a distillation of what, generally, the evidence suggests; instead, they tend to jump from claim to evidence without explaining the connection between the two.

Inevitably, any class we teach is going to contain students whose capacities to think and write in these ways vary, often considerably. We therefore might wish to differentiate activities whose aim is to develop evaluative writing. One way of doing this is to break down the evaluation of a particular theory into claims, reasons and evidence by preparing a set of cards. Here is an example set for evaluating Atkinson and Shiffrin’s multi-store model of memory. All students are given the same evaluative writing task, along with a subset of the cards to support them; which subset depends on the student’s current capacity:

  • Level 1 – students are given all the cards.  Their challenge is to match up the claims/reasons/evidence and use suitable connectives to turn them into well-articulated critical points.
  • Level 2 – students are given the claims and the reasons.  Their challenge is to identify suitable evidence (e.g. from prior learning) and include this in their evaluation.
  • Level 3 – students are given the claims and the evidence.  Their challenge is to explain the reasons why each claim should be accepted/rejected before introducing the evidence.
  • Level 4 – students are given the claims only.  Their challenge is to articulate suitable reasons for accepting/rejecting the claims and link these to suitable evidence (e.g. from prior learning).
  • Level 5 – students who show competence at level 4 are then invited to consider the quality of evidence/competing theories.  Visible thinking routines like tug-of-war can be useful here (see Ritchhart et al., 2011).

This general structure can be used for activities supporting the evidential evaluation of any psychological theory. Intuitively, its success probably depends on the amount of practice students get with the format of the activity, and their sense of progress could depend on our pointing out how their performance changes as they get more practice. It also depends crucially on students’ understanding of the roles of claims, reasons and evidence, which should not be taken for granted. A common problem is where students believe that the reasons are reasons for making a claim (which leads to circular arguments), rather than reasons why the claim should be accepted as true or rejected as false.

As usual, no guarantees can be given about the effectiveness of this approach relative to the alternatives, but it does seem to give focus to my feedback about the quality of evaluative writing, and it has helped shift our students’ extended responses in a direction more likely to appeal to Edexcel’s examiners. If anyone has thoughts about the above, I’d love to hear them.

References

Belenky, M.F., Clinchy, B.M., Goldberger, N.R. & Tarule, J.M. (1986). Women’s ways of knowing: The development of self, voice and mind. New York, NY: Basic Books.

Biggs, J.B. & Collis, K.F. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York, NY: Academic Press.

Hofer, B.K. & Pintrich, P.R. (2002). Personal epistemology: The psychology of beliefs about knowledge and knowing. Mahwah, NJ: Lawrence Erlbaum Associates.

Ritchhart, R., Church, M. & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding and independence for all learners. San Francisco, CA: Jossey-Bass.

Toulmin, S.E. (1958). The uses of argument. Cambridge: Cambridge University Press.

A visible thinking routine (VTR) for comparisons

Two very powerful techniques for provoking learning are getting students to make comparisons and using graphic organisers (see Marzano et al., 2001, pp. 14-20). The process of making comparisons helps with the acquisition and refinement of concepts because it stimulates students to think and rethink the boundary between this and not-this. It also requires some fairly deep processing of the sort that seems likely to support long-term retention of ideas. The use of graphic organisers helps to facilitate understanding by making it easier for learners to discern the relationships between ideas. There is consequently strong justification for using comparison tables a lot, and I do.

A standard comparison table with predefined criteria.

Typically, I use a table organised around criteria given by me.  This works well enough but it’s also somewhat unsatisfying precisely because I give the criteria. Marzano suggests this is preferable where convergent thinking is required because the students are unlikely to come up with suitable criteria on their own.  Point taken, but all the same, what we’re presumably shooting for here is students who are capable of defining their own comparison criteria.

I have been developing a visible thinking routine (see Ritchhart et al., 2011) that shows some promise in this direction.

  1. In groups of three or four, students start by generating as many facts as they can about each of the things they are comparing.  Each fact is written on a separate sticky note.
  2. They are then invited to look for correspondences between the facts about each by lining the sticky notes up with each other.  It is best to model this with some sticky notes of your own on the board or under a visualiser, and think aloud whilst doing it, e.g. “OK, MRI measures energy from water molecules and PET measures energy from a radiotracer…they sort of go together, so I’m going to line these up together…”
  3. Once students have identified some comparisons they can be encouraged to add more facts to match up with any ‘stray’ sticky notes that don’t currently have a corresponding fact about the comparand.

The students have by this point constructed a skeleton comparison table. The next step is to encourage and support them in distilling and naming their comparison criteria so they can make their comparisons explicit. Depending on the students, they might need more or less scaffolding to do this.

The comparison VTR in progress. Students have started identifying correspondences.

Final steps could be:

  • Recording their table, either drawing it up or photographing it;
  • Working as a whole class to draw up a ‘master’ comparison table based on small-group contributions;
  • Translating their table into well-formed written comparisons (sentence stems can be helpful here).

This process has two significant virtues: (1) the students do more of the thinking, since they go all or at least some of the way to working out their own comparison criteria; and (2) it makes their thinking processes visible to you, so you can intervene helpfully, and to them, which supports metacognition.

I’ve used this approach several times and am satisfied that it results in comparisons that are as good as those that emerge from a pre-prepared table (although it does take a bit longer). I am unable to say whether it has any significant impact on my students’ more general capacity to think in comparative ways, although it has intuitive appeal. The important point here is that a visible thinking routine needs to become, well, a routine. I have not yet used this approach consistently enough across a sufficient range of contexts for my students to incorporate it into their everyday thinking repertoire and thereby permit spontaneous use and generalisation. My next step, therefore, is to review my schemes of learning for next year and see where the opportunities for this might be.

References

Marzano, R.J., Pickering, D.J. & Pollock, J.E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: ASCD.

Ritchhart, R., Church, M. & Morrison, K. (2011). Making thinking visible: How to promote engagement, understanding and independence for all learners. San Francisco, CA: Jossey-Bass.