Resources: lessons on reconstructive memory

Here are some resources for teaching reconstructive memory. There are some lesson plans and three slideshows: one on the theory, one for a jigsaw activity and one on evaluation and writing. There is also an associated paper quiz, an application scenario, the jigsaw materials and a Socrative quiz on reconstructive memory.

Resources: theories of long-term memory

Here are two lessons on theories of long-term memory, covering the distinctions between episodic, semantic and procedural memories, associated research studies and critical issues. The first has a slideshow on the key concepts, accompanied by a classification task and a comparison table to complete. The second has a slideshow on critical issues and a reading on clinical case studies of episodic, semantic and procedural memories. An irrelevant case has been added to the reading so that the students get practice at deciding whether evidence is relevant to an issue or not. There is also a Socrative quiz on theories of LTM.

Resources: evaluating the working memory model

Here’s a jigsaw activity for developing students’ evaluations of the working memory model. It’s designed for four ‘expert’ groups and three or four ‘jigsaw’ groups and covers (1) experimental support; (2) support from studies of the brain; (3) practical applications; (4) limitations of the model. There’s a set of working memory jigsaw stimuli and a slideshow with a couple of recall/application exercises tagged on at the end.

Resources: working memory

Here are a couple of bits for teaching Baddeley & Hitch’s (1974) working memory model. There’s a slideshow, a set of application tasks to help students understand the distinction between the different components and the idea of processing conflicts in WM, and a summary of some relevant research studies with space for students to comment/interpret.

Teaching eyewitness testimony (and many other things) using the jigsaw approach

Image by Jared Tarbell; used under Creative Commons license.
An oblique approach to image choice would add subtlety but, frankly, it’s been a long week.

I’m a big fan of the jigsaw classroom (Aronson et al, 1978), to the point where I probably overuse it. If you’re not familiar, it’s a cooperative learning activity format in which students learn part of a topic so they can teach it to others and, in turn, are taught other parts by them. The aim is that all the students end up learning the whole topic. The students are organised into ‘jigsaw’ groups. Each jigsaw group is then split up and its members are redistributed into ‘expert’ groups, each of which is given responsibility for mastering one part of the topic knowledge. The experts then return to their jigsaw groups, where they teach each other. There’s a good guide to the jigsaw technique here.

When it’s done well, jigsaw promotes a high degree of interdependence amongst learners and exposes all the students to the material to be learned, both of which contribute to its effectiveness as a psychology teaching strategy (Tomcho & Foels, 2012). Compared to non-cooperative methods (i.e. those that do not require interdependence), techniques like jigsaw provide more effective learning of conceptual knowledge, a greater sense of competence and more enjoyment of learning. This is particularly so when the activity is highly structured, with assigned roles, prompts for self-reflection, and both individual and group feedback on performance (Supanc et al, 2017).

When I use it, I like to keep group sizes to a maximum of four. If you have 16 or 32 students in a class, that’s great because you can divide the material into four parts and have four students in each jigsaw/expert group. A group of 25 also works well, with the material divided into five parts. It can be a headache to assign groups when you have inconvenient numbers of students, so you need to plan ahead and think about how you will ensure that every student learns all the content.
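
As an aside, the group-assignment arithmetic is mechanical enough to script if you want to plan ahead. Here’s a minimal Python sketch (my own illustration, not part of the downloadable materials; the function name and roster format are invented) that deals a class into jigsaw groups and derives the matching expert groups. With inconvenient class sizes the final jigsaw group comes up short and misses some parts, which is exactly the planning headache just described.

```python
import random

def assign_jigsaw_groups(students, n_parts):
    """Deal a class into jigsaw groups of size n_parts, then derive
    the expert groups (one per part of the material)."""
    pool = list(students)
    random.shuffle(pool)

    # Jigsaw groups: consecutive slices of the shuffled roster.
    # With awkward class sizes the last slice is short, so that group
    # misses some parts -- flag it and plan accordingly.
    jigsaw_groups = [pool[i:i + n_parts] for i in range(0, len(pool), n_parts)]

    # Expert group for part j = the j-th member of every jigsaw group.
    expert_groups = {part: [] for part in range(1, n_parts + 1)}
    for group in jigsaw_groups:
        for part, student in enumerate(group, start=1):
            expert_groups[part].append(student)

    return jigsaw_groups, expert_groups

# Example: 25 students and five parts gives five groups of five.
roster = [f"Student {i}" for i in range(1, 26)]
jigsaw, experts = assign_jigsaw_groups(roster, n_parts=5)
print([len(g) for g in jigsaw])                 # [5, 5, 5, 5, 5]
print({p: len(m) for p, m in experts.items()})  # five experts per part
```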

In my experience, the jigsaw approach works best when:

  • You stress that the activity is all about understanding what they are learning and remind students throughout of their responsibility for both teaching and learning the material. The danger is that it can easily become an ‘information transfer’ exercise, with students copying down material verbatim and dictating to each other without understanding. It is sometimes useful to impose rules to prevent this (e.g. limiting the number of words students are allowed to use when making notes in their expert groups, or only allowing them to draw pictures).
  • The learning material is tailored to the students. This means adjusting the difficulty/complexity level of the material to be just difficult enough so that the students need to engage with it and each other to co-construct an understanding. Too difficult and they can’t do it; too easy and it becomes trivial; either way, they lose interest.
  • The learning material is tailored to the timescale. Again, we want the students to create meaning from the materials and this takes time. If too little time is given then either some of the material won’t get taught, or students will resort to ‘information transfer’ and there will be no co-construction.
  • You actively monitor what’s going on in the groups, particularly the expert groups. This is how we moderate the difficulty of the materials. We don’t want the students teaching each other things that are wrong. At the same time, it’s important not to just charge in and instruct the learners directly. Doing that undermines the point of the approach. In any case, I wouldn’t use jigsaw to teach fundamental concepts for the first time; it’s just too risky. I prefer to use it to elaborate on and deepen understanding of ideas.
  • You have an accountability mechanism (i.e. a test). Multiple choice/online assessment is quick and effective if the test items are well written. Plickers and Socrative are useful tools for this. One approach that can work here is to tell the students that everyone will do the test but that each student will receive the average mark for their jigsaw group. This creates an incentive for students to ensure that everyone in the group does well (although it also creates an incentive to blame people if the group does badly, so YMMV).
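
For what it’s worth, the group-average scoring in that last point is trivial to compute once the individual marks are in. A minimal sketch, assuming simple dict/list data shapes of my own invention (no claim is made about how Plickers or Socrative actually export results):

```python
def group_average_marks(test_scores, jigsaw_groups):
    """Award every student the mean test mark of their jigsaw group.

    test_scores   -- dict: student name -> individual mark (assumed shape)
    jigsaw_groups -- list of lists of student names (assumed shape)
    """
    awarded = {}
    for group in jigsaw_groups:
        mean = sum(test_scores[s] for s in group) / len(group)
        for student in group:
            awarded[student] = round(mean, 1)
    return awarded

# One weak mark pulls the whole group down -- the incentive (and the
# blame risk) described above.
scores = {"Asha": 9, "Ben": 7, "Cara": 8, "Dev": 4}
print(group_average_marks(scores, [["Asha", "Ben", "Cara", "Dev"]]))
# -> {'Asha': 7.0, 'Ben': 7.0, 'Cara': 7.0, 'Dev': 7.0}
```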

Here’s a set of materials for teaching some of the factors that moderate the misinformation effect on eyewitness testimony using the jigsaw method. This is for a one-hour lesson with a 10-15 minute expert groups phase and a 15-20 minute jigsaw groups phase. There is a slideshow that structures the lesson and a set of learning materials covering the moderating effects of time, source reliability, centrality and awareness of misinformation. You can extend the activity by prompting students to evaluate the evidence offered.  If you are a Socrative user (free account with paid upgrades) you can get the multiple choice quiz using this link. As with all these approaches, there is no guarantee that it’s superior to the alternatives but the available evidence suggests it is worth trying.  And, like everything, its effectiveness is likely to grow when both teacher and students are practised in the technique.

Aronson, E., Blaney, N., Stephan, C., Sikes, J., & Snapp, M. (1978). The jigsaw classroom. Beverly Hills, CA: Sage.

Supanc, M., Völlinger, V.A. & Brunstein, J.C. (2017). High-structure versus low-structure cooperative learning in introductory psychology classes for student teachers: Effects on conceptual knowledge, self-perceived competence, and subjective task values. Learning and Instruction, 50, 75-84.

Tomcho, T.J. & Foels, R. (2012). Meta-analysis of group learning activities: Empirically based teaching recommendations. Teaching of Psychology, 39(3), 159-169.

Resources: proficiency scales for criminological psychology topics

If you get this you win 1,000,000 geek points.

I’m not a massive fan of presenting a set of learning objectives (or whatever we’re calling them this inspection cycle) at the start of every lesson. I agree it’s important that students know where they’re heading and how what they’re engaging with relates to other things they are learning; I just don’t think that sticking today’s LOs on the board and reading them out/getting students to copy them down is a particularly effective way of accomplishing this. That said, there is still an argument for defining a clear set of LOs when we plan. When we teach a syllabus whose content and examination format we don’t determine (like A-Level Psychology), careful thought needs to be given to translating its potentially vague statements into terms that are meaningful given the people we’re teaching, the context in which we’re teaching them and the timescales involved.

I’ve done this in a variety of ways in the past. I’ve always found it a very useful exercise for me, but of relatively little apparent value to my students. To try to extract some more mileage from the process I’m currently experimenting with proficiency scales (Marzano, 2017). Besides communicating clearly what students need to be able to do, Marzano’s format requires us to consider what progression in knowledge and understanding might look like in a topic and gives a scoring rubric we can use as the basis for assessment and feedback. I am interested to see how this works in practice.

Here is a set of proficiency scales for the Edexcel criminological psychology topic and a generic proficiency scale (RTF) you can adapt for your own purposes. I’ve divided up the content using SOLO levels (Biggs & Collis, 1982) because it’s a fairly useful model of how students’ knowledge and understanding can be expected to develop. I’ll upload more topic proficiency scales when I’ve finished writing them.
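
To make the format concrete, here is a rough sketch of how a generic proficiency scale might be represented if you wanted to reuse it across topics. The descriptors are my own loose paraphrases of the SOLO levels, not the wording used in the downloadable scales, and the 0-4 scoring follows Marzano’s convention:

```python
# A sketch of a generic proficiency scale keyed to SOLO levels
# (Biggs & Collis, 1982). Descriptors are illustrative paraphrases,
# not the wording of the downloadable scales.
PROFICIENCY_SCALE = {
    4.0: ("extended abstract", "Generalises the topic knowledge to new "
          "contexts or evaluates it against competing accounts."),
    3.0: ("relational", "Relates claims, reasons and evidence into a "
          "coherent account of the topic."),
    2.0: ("multistructural", "Recalls several relevant ideas or studies "
          "but treats them as a list, without connecting them."),
    1.0: ("unistructural", "Identifies one relevant idea, with support."),
    0.0: ("prestructural", "Shows no relevant knowledge, even with help."),
}

def feedback(score):
    """Return the SOLO level and descriptor for a whole-point score."""
    level, descriptor = PROFICIENCY_SCALE[score]
    return f"{score:.1f} ({level}): {descriptor}"

print(feedback(2.0))
```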

Biggs, J.B. & Collis, K.F. (1982). Evaluating the quality of learning: the SOLO taxonomy. New York, NY: Academic Press.

Marzano, R.J. (2017). The new art and science of teaching. Alexandria: Solution Tree/ASCD.

Scaffolding and differentiating for evaluative writing

Evaluative writing is probably the hardest thing we teach, and it’s always a work in progress. Since I started teaching Psychology (some 20-odd years ago) I’ve tried to teach written evaluation many different ways and never really been satisfied with the result. Part of the problem is that I have no recollection of actually being taught to do it. Clearly, this must have happened, as it seems very unlikely that I worked out how to evaluate on my own and it’s certainly the case that I wasn’t always able to do it. But I suspect it was a process that happened subtly, over the course of many interactions with many teachers and over a long time. I’m also fairly certain I only started to learn how to do it during my undergraduate degree (I do remember slaving over an essay on autism in my first year, to which my early mentor Brown gave a First whilst damning it with faint praise as ‘a series of bright aperçus’; I thanked him later). Contrary to popular opinion, the A-Levels we did in those days did not make significant demands on critical thinking, and pretty good performance was guaranteed to anyone who could read a syllabus, was sufficiently skilled in memorising large chunks of material verbatim and could write quickly.

However, the specifications we teach now, and the exams for which we must prepare our students, make pretty stiff demands on students’ capacity to write critically in response to questions that are increasingly difficult to predict. The new Edexcel specification (I can’t speak for the others) has upped the ante even further, as its rules for the phrasing of questions limit essay questions to a single command term (e.g. ‘Evaluate…’) even when students are expected to address several different assessment objectives in their responses. In contrast to the questions they used to face (e.g. ‘Describe and evaluate…’), where it was always possible for students to score marks by addressing the ‘knowledge and understanding’ element even if the critical thinking aspect was ropey, the new arrangements mean that students must address the main assessment objective all the way through their response while at the same time addressing a subsidiary assessment objective that is only implied by the question. Consequently, it is more important than ever to teach evaluative writing early in the course, and as quickly and thoroughly as we can.

But, as I said, I can’t remember learning to do it. Furthermore, evaluative writing is, for me (and presumably for most other people who do it a lot), procedural knowledge, so what we are doing when we evaluate is not easily inspected consciously: we simply evaluate. As a result, I have spent a fair bit of my career trying to teach something very important with neither a clear idea of what it consists of nor a principled understanding of how it develops. In the absence of these things it is very difficult to communicate to students what the goal is or to support them in moving towards it effectively. The risk then is that ‘evaluation’ gets reduced to a set of theory-specific ‘points’ for students to learn more-or-less verbatim. This is unsatisfactory because (1) it doesn’t equip them to meet the demands of the current assessment scene and (2) we’re supposed to be teaching them to think, dammit. However, this is what I have done in the past and I suspect I’m not alone.

I started making more progress a few years ago when I began to use the SOLO taxonomy (Biggs & Collis, 1982) and the Toulmin Model of Argumentation (Toulmin, 1958) as the basis for teaching evaluation. I won’t unpack these ideas here (although the SOLO taxonomy provokes lively debate so I might come back to it in a future post) but they lead to a model of evaluative writing in which the student needs to:

  • Identify the claims made by a theory;
  • Explain the reasons why each claim should be accepted or rejected;
  • Present evidence that supports or challenges the reasons;
  • Develop their argument, for example by assessing the validity of the evidence or by comparing with a competing theory.

This might sound obvious to you but it has really helped me think clearly about what students need to learn and what the barriers to learning it are likely to be.  The fundamental block is where a student has a naive personal epistemology in which they regard theories as incontrovertible statements of fact (see Hofer & Pintrich, 2002).  In that case evaluation can only be experienced as a mysterious and peculiar game (my own research on epistemic cognition suggests that this may frequently be the case).  We can start to address this by presenting psychological knowledge using a language of possibilities and uncertainty (this is particularly salient to me as I teach in a girls’ school; Belenky et al, 1986) and by continually returning to the idea that scientific theories are maps of the world and the map is not the territory (NB. this is a job for the long haul). Other barriers are where:

  1. The student cannot identify the specific claims made by a theory;
  2. The student cannot identify evidence that relates to these claims;
  3. The student cannot articulate reasons why the evidence supports or challenges the claims;
  4. The student cannot introduce principled judgements about the validity of the evidence.

Again, all this might seem obvious, but where a student has difficulty writing good evaluation it gives a starting point for diagnosing the possible problem and therefore intervening successfully. My own experience with Year 12 and 13 students (OK, not particularly scientific, but it’s all I’ve got) suggests that the major sticking points are (1), which usually arises because the theory itself has not been well understood, and (3), which arises because the student needs to reconcile what the theory predicts with a distillation of what, in general, the evidence suggests; students therefore tend to jump from claim to evidence without explaining the connection between the two.

Inevitably, any class we teach is going to contain students whose capacities to think and write in these ways vary, often considerably. We therefore might wish to differentiate activities whose aim is to develop evaluative writing. One way of doing this is to break down the evaluation of a particular theory into claims, reasons and evidence by preparing a set of cards. Here is an example set for evaluating Atkinson and Shiffrin’s multi-store model of memory. All students are given the same evaluative writing task, along with a subset of the cards to support them. Which subset a student receives depends on their current capacity (a sketch of the subsetting logic follows the list):

  • Level 1 – students are given all the cards.  Their challenge is to match up the claims/reasons/evidence and use suitable connectives to turn them into well-articulated critical points.
  • Level 2 – students are given the claims and the reasons.  Their challenge is to identify suitable evidence (e.g. from prior learning) and include this in their evaluation.
  • Level 3 – students are given the claims and the evidence.  Their challenge is to explain the reasons why each claim should be accepted/rejected before introducing the evidence.
  • Level 4 – students are given the claims only.  Their challenge is to articulate suitable reasons for accepting/rejecting the claims and link these to suitable evidence (e.g. from prior learning)
  • Level 5 – students who show competence at level 4 are then invited to consider quality of evidence/competing theories.  Visible thinking routines like tug-of-war can be useful here (see Ritchhart et al, 2011).
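
If you want to generate the differentiated card packs programmatically, the subsetting logic above reduces to a small lookup table. A minimal sketch with placeholder card text (the real multi-store model cards are in the linked example set):

```python
# Placeholder card text; the real card set is in the linked example.
CARDS = {
    "claims":   ["Claim card A", "Claim card B"],
    "reasons":  ["Reason card A", "Reason card B"],
    "evidence": ["Evidence card A", "Evidence card B"],
}

# Card types received at each level. Level 5 uses the same cards as
# level 4 plus an extension prompt, rather than extra cards.
LEVELS = {
    1: ("claims", "reasons", "evidence"),
    2: ("claims", "reasons"),
    3: ("claims", "evidence"),
    4: ("claims",),
    5: ("claims",),
}

def cards_for(level):
    """Return the pack of cards for a student at the given level."""
    return {kind: CARDS[kind] for kind in LEVELS[level]}

print(cards_for(3))  # claims + evidence: the student supplies the reasons
```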

This general structure can be used for activities supporting the evidential evaluation of any psychological theory.  Intuitively, its success probably depends on the amount of practice students get with the format of the activity, and their sense of progress could depend on our pointing out how their performance has changed as they get more practice.  It also depends crucially on students’ understanding of the roles of claims, reasons and evidence, which should not be taken for granted.  A common problem is where students believe that the reasons are reasons for making a claim (which leads to circular arguments), not reasons why it should be accepted as true/rejected as false.

As usual, no guarantees can be given about the effectiveness of this approach relative to the alternatives but it does seem to give focus to my feedback about quality of evaluative writing and it has helped shift our students’ extended responses in a direction more likely to appeal to Edexcel’s examiners.  If anyone has thoughts about the above, I’d love to hear them.

Belenky, M.F., Clinchy, B.M., Goldberger, N.R. & Tarule, J.M. (1986). Women’s ways of knowing: the development of self, voice and mind.  New York, NY: Basic Books.

Biggs, J.B. & Collis, K.F. (1982).  Evaluating the quality of learning: the SOLO taxonomy.  New York, NY: Academic Press.

Hofer, B.K. & Pintrich, P.R. (2002). Personal epistemology: The psychology of beliefs about knowledge and knowing. Mahwah, NJ: Lawrence Erlbaum Associates.

Ritchhart, R., Church, M. & Morrison, K. (2011).  Making thinking visible: How to promote engagement, understanding and independence for all learners. Hoboken, NJ: Jossey-Bass.

Toulmin, S.E. (1958). The uses of argument. Cambridge: Cambridge University Press.

Resources: eyewitness testimony (post-event information)

Here are some resources for teaching the effect of post-event information on eyewitness testimony.  There is an application problem for EWT (with guidance for the analysis on page 2) and a brief slideshow to accompany it.  The essay writing advice is pitched towards Edexcel exams, so YMMV.

Resources: the multi-store model of memory

Here are some resources for teaching the multi-store model of memory. There is an application problem using the multi-store model and a writing task, based on the SOLO taxonomy, to support well-formed evaluation of the multi-store model.

Resources: introductory memory concepts

Here are some resources for teaching introductory memory concepts. There is a memory concepts slideshow, some wordlists for serial position demonstrations, a spreadsheet for graphing the serial position demos and a short reading on Milner et al’s (1968) ‘HM’ case study.
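
If you’d rather not use the spreadsheet, the serial position curve is straightforward to compute directly. A minimal Python sketch using invented recall data (the actual wordlists are in the download); it prints the proportion of participants recalling each position, with a crude text bar so the primacy and recency effects are visible:

```python
# Invented wordlist and recall data, purely for illustration.
word_list = ["ship", "candle", "forest", "piano", "button",
             "river", "jacket", "marble", "garden", "window"]

# One inner list per participant: 1 = recalled, 0 = not, by position.
recalls = [
    [1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
    [1, 0, 1, 0, 0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 1, 0, 0, 1, 1, 1],
]

n = len(recalls)
for pos, word in enumerate(word_list):
    p = sum(r[pos] for r in recalls) / n   # proportion recalled
    bar = "#" * round(p * 10)
    print(f"{pos + 1:>2} {word:<8} {p:4.2f} {bar}")
```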