Resources: lessons on reconstructive memory

Here are some resources for teaching reconstructive memory. There are some lesson plans and three slideshows: one for the theory, one for a jigsaw activity and one on evaluation and writing. There is also an associated paper quiz, an application scenario, the jigsaw materials and a Socrative quiz on reconstructive memory.

Resources: theories of long-term memory

Here are two lessons on theories of long-term memory, covering the distinctions between episodic, semantic and procedural memories, associated research studies and critical issues. The first has a slideshow on the key concepts, accompanied by a classification task and a comparison table to complete. The second has a slideshow on critical issues and a reading on clinical case studies of episodic, semantic and procedural memories. An irrelevant case has been added to the reading so that students get practice at deciding whether evidence is relevant to an issue or not. There is also a Socrative quiz on theories of LTM.

Teaching effective revision strategies

I have declared a personal war on exam technique.

Actually, I haven’t. Familiarity with the format of an assessment is a significant influence on students’ performance. What I’ve declared war on is the use of ‘poor exam technique’ as an excuse for under-performance that is actually caused by students’ failure to learn the material on which they will be examined.

‘Exam Technique’ attributions

Confronted with evidence of failure, many students find the ‘exam technique’ attribution attractive because it allows them to sustain the belief that they are ‘bright’ and ‘a good student’. Most of the students I teach invest considerable time and effort in learning and preparing for tests/exams. Cognitive dissonance theory (Festinger, 1957) suggests that the thought, ‘I have done badly’ is incompatible with the thought, ‘I worked hard for this’. This gives rise to psychological discomfort. Consequently, the student is motivated to reduce the dissonance. This can be done by making a suitable attribution.

Three possible dissonance-reducing attributions are: (1) ‘I am not capable of learning’; (2) ‘I did the wrong things whilst learning’; and (3) ‘I had poor exam technique’. My suspicion is that (1) is unattractive because of its implications for self-image and (2) is unattractive because it implies the need to change longstanding beliefs and habits around learning and revision. That leaves (3), which preserves both positive self-image and entrenched learning habits by allowing the student to think, ‘It’s OK, I know this stuff really, it’s just my exam technique that let me down’.

I suspect this may also be true of some teachers, at least some of the time. Knowledge of a student’s failure is dissonant with our beliefs about our own teaching (most of us believe we are above average; Hoorens, 1993) and ‘exam technique’ usefully deflects doubts about whether the things we spend time and effort doing are actually working, especially since most of us (I believe) are apt to avoid attributing students’ failure to stupidity (cf. Dweck, 1999).

Like most teachers, I test my students fairly regularly, for a variety of reasons. I see relatively few examples of students’ performance being affected significantly by what I would characterise as exam technique (e.g. gross errors of time management, inappropriate application of material or misapprehension of question requirements). I wish it were otherwise, as problems of exam technique are, in my experience, relatively easy to fix. But, ultimately, problems of exam technique are reserved for students who actually know their stuff and, in the majority of cases, the core problem is that they don’t.

It’s students’ learning that needs fixing, not their exam technique.

Retrieval practice

There is now fairly unequivocal evidence that the learning strategy most likely to result in retention of material is retrieval practice, that is, the reconstruction, without prompts, of information previously learned and stored in long-term memory. Students who practise retrieving material from long-term memory forget less than those who do not (see this chapter by Karpicke, 2017, for a comprehensive review). Karpicke identifies several reasons why retrieval practice enhances learning and recall. First, retrieval practice is transfer-appropriate processing: there is a large overlap between recall practice during learning and the way students will need to use material in their exams. Second, the effort involved in retrieval strengthens memory traces. Third, retrieval practice incorporates retrieval cues into memory traces in helpful ways (semantic elaboration).

Although theoretical accounts of why retrieval practice works are still under development, the empirical support for its use is unarguable. A study by Roediger and Karpicke (2006) is fairly representative. Student participants were given unfamiliar material to learn across four study sessions. One group was told to study (i.e. read and reread) the material in all four sessions (SSSS). A second group studied the material in the first three sessions and, in the fourth, tested themselves instead by writing down as much of the material as they could remember in free recall (SSST). A third group were allowed to study the material only in the first session and then completed three free-recall tests (STTT). All the participants were then given a recall test. This was done 5 minutes after the end of the final session and then repeated after an interval of 1 week. After 5 minutes, students who had studied and restudied the material (SSSS) had higher recall than the other two groups. However, after 1 week, the STTT group had the highest recall, followed by SSST, with the SSSS group showing the lowest level of recall.

The problem of spontaneous adoption

This study, and the many confirmatory findings, demonstrates the superiority of retrieval-based learning over restudying for retention of material over the longer term. It also hints at why many of our students may fail to adopt retrieval-based revision methods even when advised to do so: immediate recall in Roediger and Karpicke’s study was better when the students ‘crammed’. Since a typical student probably doesn’t retest themselves over longer intervals in any systematic way, they remain unaware of how quickly they forget information that has been learned that way.

Ariel and Karpicke (2018) highlight a number of unhelpful beliefs that students (and teachers) often hold that militate against the adoption of retrieval-based study strategies. First, there is the belief that restudying is the most effective way of learning material. Second, there is the belief that, whilst retrieval is a suitable way of monitoring learning, it does not, in itself, provide benefits to recall. Third, even when students do use retrieval-based methods, they tend to rely on a ‘one and done’ strategy, whereas the evidence is that it is repeated retrieval that has the most significant impact on retention.

Ariel and Karpicke’s paper describes a study showing that a straightforward intervention increased the spontaneous adoption of retrieval practice in a group of student participants. They were given the task of learning English-Lithuanian word translations, using software that allowed them to choose between ‘studying’ (i.e. reading and rereading) and ‘practising’ (i.e. being tested). Participants were randomly assigned either to a control group, who were simply told to learn as many of the words as possible in preparation for a final test, or to a retrieval practice instructions group, who were given (1) information about the superiority of retrieval over restudying; (2) a graph supporting this information; and (3) the advice that the best way of learning for the recall test was to ensure that each translation had been recalled at least three times before dropping it from study.

Source: Ariel & Karpicke (2018)

Students who received the retrieval practice instructions made more spontaneous use of retrieval practice during learning and performed better on the Lithuanian translations than the controls. Importantly, in a transfer test given 1 week later, those who had received the retrieval instructions made significantly more use of self-testing on a task involving learning English-Swahili translations.

A card-based revision strategy

I was sufficiently impressed by these results to use them as the basis of an attempt to improve my students’ use of effective learning and revision strategies. I used ‘statistical test choice’ as the focus since it is a small, discrete body of material, it is straightforward to test both recall and transfer of learning, and it is something my Year 12 students had not encountered before. I taught the content in a conventional way. Then, after explaining and justifying the revision strategy I wanted them to use, I gave each student a set of revision cards for statistical test choice. These are set up so that, when photocopied back-to-back, there is a question on one side of each card and the relevant answer on the other.

I explained that revision with these cards should be done as follows (the strategy is closely based on the one designed by Ariel and Karpicke):

  1. Create space on your desk for three piles of cards: STUDY, PRACTICE and DONE.
  2. Start by testing yourself on every card.
  3. If you can answer a question fully and accurately, put it on the PRACTICE pile. If you cannot, put it on the STUDY pile.
  4. Alternate between STUDY and PRACTICE. Any card you have studied should be put on the PRACTICE pile. Any card you have successfully retrieved should be returned to the PRACTICE pile. Any card you have been unable to retrieve should be returned to the STUDY pile.
  5. If you have successfully retrieved a card three times, put it on the DONE pile.

During the ensuing study session, cards should gradually work their way across from the STUDY pile to the DONE pile.
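If it helps to see the pile logic laid out explicitly, here is a minimal sketch of the procedure in Python. Everything in it is illustrative: the card contents are invented stand-ins for the statistical test choice deck, and `retrieval_succeeded` (with its assumed 0.6 success rate) is a placeholder for the student's own judgement of whether their answer was full and accurate.

```python
import random

# Illustrative cards only; the real deck covers the statistical
# test choice material described above.
cards = [
    {"q": "Difference, nominal data, independent groups?", "a": "Chi-squared"},
    {"q": "Difference, ordinal data, repeated measures?", "a": "Wilcoxon"},
    {"q": "Relationship, ordinal data?", "a": "Spearman's rho"},
]

GOAL = 3  # step 5: three successful retrievals move a card to DONE


def retrieval_succeeded(card):
    """Stand-in for the student's self-test (step 3); in real use the
    student judges whether their answer was full and accurate."""
    return random.random() < 0.6  # assumed success rate, purely illustrative


def revise(deck):
    successes = {card["q"]: 0 for card in deck}
    study, practice, done = [], [], []

    # Step 2: start by testing yourself on every card.
    for card in deck:
        (practice if retrieval_succeeded(card) else study).append(card)

    # Step 4: alternate between the STUDY and PRACTICE piles.
    while study or practice:
        # Any card that has been studied goes onto the PRACTICE pile.
        while study:
            practice.append(study.pop())  # (the student restudies it here)

        # Test each card currently on the PRACTICE pile once.
        for card in list(practice):
            practice.remove(card)
            if retrieval_succeeded(card):
                successes[card["q"]] += 1
                if successes[card["q"]] >= GOAL:
                    done.append(card)      # step 5: card is DONE
                else:
                    practice.append(card)  # retrieved, but not three times yet
            else:
                study.append(card)         # failed retrieval: back to STUDY

    return done


print(len(revise(cards)), "cards reached the DONE pile")
```

Running the sketch a few times shows the property the strategy relies on: every card has to survive three successful retrievals, so nothing leaves circulation after a single lucky recall.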

I demonstrated this process, and then got the students to try it. I circulated and watched how they went about it, coaching where necessary. Over the course of the lesson I gave them opportunities to use the revision strategy. In subsequent lessons, I tested their recall using this Socrative quiz, which tests recall of statistical decision rules and has no applied element. I asked the students to use the revision cards for 20 minutes before their next lesson.

At the start of the next lesson (the following day), the majority of students had 100% recall, although some either had not acquired the material or had forgotten it very quickly. When the quiz was repeated at the end of that lesson, recall was higher; student 10, for example, went from 13% to 100% correct. The quiz was then repeated after a four-day interval. Whilst the majority of the students retained 100% recall, student 10’s recall had fallen to 38%. It is interesting to speculate whether this was due to individual differences in memory or to differences in strategy adoption. By the end of the lesson, student 10’s recall had recovered and, overall, recall was very high (3 incorrect responses in 120 recall trials, i.e. 97.5% correct).

What have I learned?

My informal investigations with my Year 12s suggest that the card-based revision strategy using retrieval practice is at least as effective as what the students were already doing. Their reactions to the Socrative assessment feedback suggested that they appreciated the impact the strategy was having on their retention. They also found the card-based strategy acceptable and even fun, particularly if they added a social element.

This is all quite encouraging, so I have now started investigating whether the strategy transfers well to less well-structured material. Studies in this area typically use very well-structured material as it’s easy to test recall unambiguously, so it is somewhat open to question whether this card-based strategy requires adapting for use with less well-structured content. I have created a set of revision cards for learning the classic study by Baddeley (1966), which is a requirement of the Edexcel specification and one on which my students performed poorly in their recent end-of-year examination. It will be interesting to see whether it has a similar impact, and whether the students find it as acceptable for this sort of content.

Assuming that it works, my intention is to develop the card-based revision strategy with my Year 12s over the remainder of their course. The aim will be to shift the students from relying on me to make the revision cards to spontaneously creating and using their own as part of their ongoing preparations for their final exams. Depending on how this works out, I would consider adding the card-based strategy to our induction programme at the start of Year 12, alongside the other elements we currently promote as essential, including reciprocal teaching and the Cornell note-making system.

Thanks

Many of the ideas for this post came out of conversations with Andy Bailey.

References

Ariel, R. & Karpicke, J.D. (2018). Improving self-regulated learning with a retrieval practice intervention. Journal of Experimental Psychology: Applied, 24(1), 43–56.

Baddeley, A. D. (1966). The influence of acoustic and semantic similarity on long-term memory for word sequences. The Quarterly Journal of Experimental Psychology, 18(4), 302–309.

Dweck, C.S. (1999). Self Theories: Their Role in Motivation, Personality and Development. Hove: Psychology Press.

Festinger, L. (1957). A Theory of Cognitive Dissonance. Evanston, IL: Row Peterson.

Hoorens, V. (1993). Self-enhancement and superiority biases in social comparison. European Review of Social Psychology, 4(1), 113–139.

Karpicke, J.D. (2017). Retrieval-based learning: A decade of progress. In J.H. Byrne (Ed.), Learning and Memory: A Comprehensive Reference (2nd ed., pp. 487–514). Oxford: Elsevier.

Roediger, H.L. & Karpicke, J.D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.

Resources: working memory

Here are a couple of bits for teaching Baddeley & Hitch’s (1974) working memory model. There’s a slideshow, a set of application tasks to help students understand the distinction between the different components and the idea of processing conflicts in WM, and a summary of some relevant research studies with space for students to comment/interpret.

Scaffolding and differentiating for evaluative writing

Evaluative writing is probably the hardest thing we teach, and it’s always a work in progress. Since I started teaching Psychology (some 20-odd years ago) I’ve tried to teach written evaluation many different ways and never really been satisfied with the result. Part of the problem is that I have no recollection of actually being taught to do it. Clearly, this must have happened, as it seems very unlikely that I worked out how to evaluate on my own and it’s certainly the case that I wasn’t always able to do it. But I suspect it was a process that happened subtly, over the course of many interactions with many teachers and over a long time. I’m also fairly certain I only started to learn how to do it during my undergraduate degree (I do remember slaving over an essay on autism in my first year, which my early mentor Brown gave a First whilst damning it with faint praise as ‘a series of bright aperçus’; I thanked him later). Contrary to popular opinion, the A-levels we did in those days did not make significant demands on critical thinking, and pretty good performance was guaranteed to anyone who could read a syllabus, was sufficiently skilled in memorising large chunks of material verbatim and could write quickly.

However, the specifications we teach now, and the exams for which we must prepare our students, make pretty stiff demands on students’ capacity to write critically in response to questions that are increasingly difficult to predict.  The new Edexcel specification (I can’t speak for the others) has upped the ante on this even further as their rules for the phrasing of questions limit their essay questions to a single command term (e.g. ‘Evaluate…’) even when students are expected to address several different assessment objectives in their responses.  In contrast to the questions they used to face (e.g. ‘Describe and evaluate…’), where it would always be possible for students to score marks by addressing the ‘knowledge and understanding’ element even if the critical thinking aspect was ropey, the new arrangements mean that students must address the main assessment objective all the way through their response at the same time as addressing a subsidiary assessment objective that is only implied by the question. Consequently, it is more important than ever to teach evaluative writing early in the course, and as quickly and thoroughly as we can.

But, as I said, I can’t remember learning to do it. Furthermore, evaluative writing is, for me (and presumably for most other people who do it a lot), procedural knowledge, so what we are doing when we evaluate is not easily consciously inspected: we simply evaluate. As a result, I have spent a fair bit of my career trying to teach something very important with neither a clear idea of what it consists of nor a principled understanding of how it develops. In the absence of these things it is very difficult to communicate to students what the goal is or to support them in moving towards it effectively. The risk then is that ‘evaluation’ gets reduced to a set of theory-specific ‘points’ for students to learn more-or-less verbatim. This is unsatisfactory because (1) it doesn’t equip them to meet the demands of the current assessment scene; and (2) we’re supposed to be teaching them to think, dammit. However, this is what I have done in the past and I suspect I’m not alone.

I started making more progress a few years ago when I began to use the SOLO taxonomy (Biggs & Collis, 1982) and the Toulmin Model of Argumentation (Toulmin, 1958) as the basis for teaching evaluation. I won’t unpack these ideas here (although the SOLO taxonomy provokes lively debate so I might come back to it in a future post) but they lead to a model of evaluative writing in which the student needs to:

  • Identify the claims made by a theory;
  • Explain the reasons why each claim should be accepted or rejected;
  • Present evidence that supports or challenges the reasons;
  • Develop their argument, for example by assessing the validity of the evidence or by comparing with a competing theory.

This might sound obvious to you but it has really helped me think clearly about what students need to learn and what the barriers to learning it are likely to be. The fundamental block is where a student has a naive personal epistemology in which they regard theories as incontrovertible statements of fact (see Hofer & Pintrich, 2002). In that case evaluation can only be experienced as a mysterious and peculiar game (my own research on epistemic cognition suggests that this may frequently be the case). We can start to address this by presenting psychological knowledge using a language of possibilities and uncertainty (this is particularly salient to me as I teach in a girls’ school; Belenky et al., 1986) and by continually returning to the idea that scientific theories are maps of the world and the map is not the territory (NB. this is a job for the long haul). Other barriers are where:

  1. The student cannot identify the specific claims made by a theory;
  2. The student cannot identify evidence that relates to these claims;
  3. The student cannot articulate reasons why the evidence supports or challenges the claims;
  4. The student cannot introduce principled judgements about the validity of the evidence.

Again, all this might seem obvious, but where a student has difficulty writing good evaluation it gives a starting point for diagnosing the possible problem and therefore intervening successfully. My own experience with Year 12 and 13 students (OK, not particularly scientific but it’s all I’ve got) suggests that the major sticking points are (1), because the theory itself has not been well understood, and (3), because the student needs to identify what the theory predicts and reconcile this with a distillation of what, generally, the evidence suggests; students tend to jump from claim to evidence without explaining the connection between the two.

Inevitably, any class we teach is going to contain students whose capacities to think and write in these ways vary, often considerably.  We therefore might wish to differentiate activities whose aim is to develop evaluative writing.  One way of doing this is to break down evaluation of a particular theory into claims, reasons and evidence by preparing a set of cards.  Here is an example set for evaluating Atkinson and Shiffrin’s multi-store model of memory. All students are given an evaluative writing task, and are given a subset of the cards to support them.  The subset given depends on the student’s current capacity:

  • Level 1 – students are given all the cards.  Their challenge is to match up the claims/reasons/evidence and use suitable connectives to turn them into well-articulated critical points.
  • Level 2 – students are given the claims and the reasons.  Their challenge is to identify suitable evidence (e.g. from prior learning) and include this in their evaluation.
  • Level 3 – students are given the claims and the evidence.  Their challenge is to explain the reasons why each claim should be accepted/rejected before introducing the evidence.
  • Level 4 – students are given the claims only.  Their challenge is to articulate suitable reasons for accepting/rejecting the claims and link these to suitable evidence (e.g. from prior learning).
  • Level 5 – students who show competence at level 4 are then invited to consider quality of evidence/competing theories.  Visible thinking routines like tug-of-war can be useful here (see Ritchhart et al, 2011).
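For anyone who likes to see the structure at a glance, here is a minimal sketch, in Python, of the selection rule behind the levels. The card texts are hypothetical examples invented for illustration; they are not the wording of the actual multi-store model card set linked above.

```python
# A hypothetical card set: these texts are invented stand-ins, not the
# actual multi-store model cards.
CARDS = {
    "claim": [
        "STM and LTM are separate stores.",
    ],
    "reason": [
        "If the stores are separate, brain damage could impair one "
        "while leaving the other intact.",
    ],
    "evidence": [
        "Case studies such as HM show impaired transfer to LTM "
        "alongside intact STM.",
    ],
}

# Which card types each level receives (levels 1-4 as listed above;
# level 5 students work from the level 4 subset plus extension prompts).
LEVEL_SUBSETS = {
    1: {"claim", "reason", "evidence"},
    2: {"claim", "reason"},
    3: {"claim", "evidence"},
    4: {"claim"},
}


def cards_for(level):
    """Return the subset of the card set given to a student at this level."""
    wanted = LEVEL_SUBSETS.get(level, LEVEL_SUBSETS[4])
    return {kind: texts for kind, texts in CARDS.items() if kind in wanted}


print(cards_for(2))  # a level 2 student receives claims and reasons only
```

The attraction of thinking about it this way is that the same subset table works for any theory’s card set, so the activity can be differentiated without preparing five separate resources.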

This general structure can be used for activities supporting the evidential evaluation of any psychological theory.  Intuitively, its success probably depends on the amount of practice students get with the format of the activity, and their sense of progress could depend on our pointing out how their performance has changed as they get more practice.  It also depends crucially on students’ understanding of the roles of claims, reasons and evidence, which should not be taken for granted.  A common problem is where students believe that the reasons are reasons for making a claim (which leads to circular arguments), not reasons why it should be accepted as true/rejected as false.

As usual, no guarantees can be given about the effectiveness of this approach relative to the alternatives but it does seem to give focus to my feedback about quality of evaluative writing and it has helped shift our students’ extended responses in a direction more likely to appeal to Edexcel’s examiners.  If anyone has thoughts about the above, I’d love to hear them.

References

Belenky, M.F., Clinchy, B.M., Goldberger, N.R. & Tarule, J.M. (1986). Women’s ways of knowing: The development of self, voice and mind. New York, NY: Basic Books.

Biggs, J.B. & Collis, K.F. (1982).  Evaluating the quality of learning: the SOLO taxonomy.  New York, NY: Academic Press.

Hofer, B.K. & Pintrich, P.R. (2002). Personal epistemology: The psychology of beliefs about knowledge and knowing. Mahwah, NJ: Lawrence Erlbaum Associates.

Ritchhart, R., Church, M. & Morrison, K. (2011).  Making thinking visible: How to promote engagement, understanding and independence for all learners. Hoboken, NJ: Jossey-Bass.

Toulmin, S.E. (1958). The uses of argument. Cambridge: Cambridge University Press.

Resources: eyewitness testimony (post-event information)

Here are some resources for teaching the effect of post-event information on eyewitness testimony.  There is an application problem for EWT (with guidance for the analysis on page 2) and a brief slideshow to accompany it.  The essay writing advice is pitched towards Edexcel exams, so YMMV.

Resources: the multi-store model of memory

Here are some resources for teaching the multi-store model of memory. There is an application problem using the multi-store model and a writing task, based on the SOLO taxonomy, to support well-formed evaluation of the multi-store model.

Resources: introductory memory concepts

Here are some resources for teaching introductory memory concepts. There is a memory concepts slideshow, some wordlists for serial position demonstrations, a spreadsheet for graphing the serial position demos and a short reading on Milner et al.’s (1968) ‘HM’ case study.

psychlotron.org.uk has re-entered the building.

The original psychlotron.org.uk psychology resource website started in 2005 and was regularly updated until 2013.  At that point, fatigue, exam specification changes, cancer (my partner’s) and then open heart surgery (mine) collectively intervened to bring further updates to a halt.

As I now have material I’d like to share and a bit more time and motivation, I’ve decided to pick up where I left off in 2013. If you used the site before, you’ll notice I’ve ditched the lovingly hand-crafted HTML of yore and will now be building psychlotron using WordPress. I’m hoping that this will make managing the content a bit more straightforward, and the addition of things like tags and a search box will make it easier to find what you’re looking for.

If you’re looking for links to resources posted on the old website, try the archive link on the right-hand side. I’ve combined all the old resource links on one big fat web page; you’ll have to scroll down until you find what you’re after.

New resources will be posted in the new ‘blog’ format and tagged to make them searchable.  Here’s an experimental one to see if the process works as intended: an application problem for teaching the multi-store model of memory.