Teaching eyewitness testimony (and many other things) using the jigsaw approach

Image by Jared Tarbell; used under Creative Commons license.
An oblique approach to image choice would add subtlety but, frankly, it’s been a long week.

I’m a big fan of the jigsaw classroom (Aronson et al, 1978) to the point where I probably overuse it. If you’re not familiar, it’s a cooperative learning activity format in which students learn part of a topic so they can teach it to others and, in turn, are taught other parts by them. The aim is that all the students end up learning the whole topic. The students are organised into ‘jigsaw’ groups. Each jigsaw group is then split up and rearranged into ‘expert’ groups. Each expert group is given responsibility for mastering one part of the topic knowledge. The expert groups are then returned to their jigsaw groups, where they teach each other. There’s a good guide to the jigsaw technique here.

When it’s done well, jigsaw promotes a high degree of interdependence amongst learners and exposes all the students to the material to be learned, both of which contribute to its effectiveness as a psychology teaching strategy (Tomcho & Foels, 2012). Compared to non-cooperative methods (i.e. those that do not require interdependence), techniques like jigsaw provide more effective learning of conceptual knowledge, a greater sense of competence and more enjoyment of learning. This is particularly so when the activity is highly structured, with assigned roles, prompts for self-reflection, and both individual and group feedback on performance (Supanc et al, 2017).

When I use it I like to keep group sizes to a maximum of four. If you have 16 or 32 students in a class that’s great because you can divide the material into four parts and have four students in each jigsaw/expert group. A group of 25 also works well, with the material divided into five parts. It can be a headache to assign groups when you have inconvenient numbers of students, so you need to plan ahead and think about how you will ensure that every student learns all the content.
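If it helps to see that arithmetic laid out, here is a minimal sketch in Python (the class list and group sizes are hypothetical) of one way to form jigsaw groups and derive the matching expert groups. It assumes the class size divides cleanly by the number of parts; with awkward numbers you still need to decide by hand where to double up.

```python
import random

def make_jigsaw_groups(students, n_parts):
    """Split a class into jigsaw groups of size n_parts and derive the expert groups.

    Assumes len(students) is a multiple of n_parts; with awkward class sizes
    you still need to decide by hand where to double up roles.
    """
    students = students[:]      # work on a copy
    random.shuffle(students)    # break up friendship groups

    # Jigsaw groups: consecutive chunks of n_parts students.
    jigsaw = [students[i:i + n_parts] for i in range(0, len(students), n_parts)]

    # Expert groups: the k-th member of every jigsaw group studies part k together.
    expert = [[group[k] for group in jigsaw] for k in range(n_parts)]
    return jigsaw, expert

# Hypothetical example: 16 students, material split into four parts.
class_list = [f"Student {i}" for i in range(1, 17)]
jigsaw_groups, expert_groups = make_jigsaw_groups(class_list, 4)
```

The same function handles a class of 25 with five parts; the point is simply that the k-th member of every jigsaw group ends up together in expert group k.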

In my experience, the jigsaw approach works best when:

  • You stress that the activity is all about understanding what they are learning and remind students throughout of their responsibility for both teaching and learning the material. The danger is that it can easily become an ‘information transfer’ exercise, with students copying down material verbatim and dictating to each other without understanding. It is sometimes useful to impose rules to prevent this (e.g. limit the number of words students are allowed to use when making notes in their expert groups, or only allow them to draw pictures, etc.)
  • The learning material is tailored to the students. This means adjusting the difficulty/complexity level of the material to be just difficult enough so that the students need to engage with it and each other to co-construct an understanding. Too difficult and they can’t do it; too easy and it becomes trivial; either way, they lose interest.
  • The learning material is tailored to the timescale. Again, we want the students to create meaning from the materials and this takes time. If too little time is given then either some of the material won’t get taught, or students will resort to ‘information transfer’ and there will be no co-construction.
  • You actively monitor what’s going on in the groups, particularly the expert groups. This is how we moderate the difficulty of the materials. We don’t want the students teaching each other things that are wrong. At the same time, it’s important not to just charge in and instruct the learners directly. Doing that undermines the point of the approach. In any case, I wouldn’t use jigsaw to teach fundamental concepts for the first time; it’s just too risky. I prefer to use it to elaborate on and deepen understanding of ideas.
  • You have an accountability mechanism (i.e. a test). Multiple choice/online assessment is quick and effective if the test items are well written. Plickers and Socrative are useful tools for this. One approach that can work here is to tell the students that everyone will do the test but that each student will receive the average mark for their jigsaw group, as sketched after this list. This creates an incentive for students to ensure that everyone in the group does well (although it also creates an incentive to blame people if the group does badly, so YMMV).
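Purely to make that scoring scheme concrete, here is a tiny Python sketch with made-up names and marks: everyone sits the test individually, but the mark recorded for each student is their jigsaw group’s mean.

```python
# Made-up individual test marks, keyed by jigsaw group.
marks = {
    "Group A": {"Asha": 8, "Ben": 6, "Cara": 9, "Dev": 7},
    "Group B": {"Ella": 5, "Finn": 7, "Gita": 6, "Hal": 6},
}

# Every student in a group is credited with the group's mean mark.
group_average = {group: sum(scores.values()) / len(scores) for group, scores in marks.items()}
# group_average == {"Group A": 7.5, "Group B": 6.0}
```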

Here’s a set of materials for teaching some of the factors that moderate the misinformation effect on eyewitness testimony using the jigsaw method. This is for a one-hour lesson with a 10-15 minute expert groups phase and a 15-20 minute jigsaw groups phase. There is a slideshow that structures the lesson and a set of learning materials covering the moderating effects of time, source reliability, centrality and awareness of misinformation. You can extend the activity by prompting students to evaluate the evidence offered.  If you are a Socrative user (free account with paid upgrades) you can get the multiple choice quiz using this link. As with all these approaches, there is no guarantee that it’s superior to the alternatives but the available evidence suggests it is worth trying.  And, like everything, its effectiveness is likely to grow when both teacher and students are practised in the technique.

Aronson, E., Blaney, N., Stephan, C., Sikes, J., & Snapp, M. (1978). The jigsaw classroom. Beverly Hills, CA: Sage.

Supanc, M., Vollinger, V.A. & Brunstein, J.C. (2017).  High-structure versus low-structure cooperative learning in introductory psychology classes for student teachers: Effects on conceptual knowledge, self-perceived competence, and subjective task values.  Learning and Instruction, 50, 75-84.

Tomcho, T.J. & Foels, R. (2012).  Meta-analysis of group learning activities: Empirically based teaching recommendations.  Teaching of Psychology, 39 (3), 159-169.

Scaffolding and differentiating for evaluative writing

Evaluative writing is probably the hardest thing we teach, and it’s always a work in progress.  Since I started teaching Psychology (some 20-odd years ago) I’ve tried to teach written evaluation many different ways and never really been satisfied with the result.  Part of the problem is that I have no recollection of actually being taught to do it.  Clearly, this must have happened as it seems very unlikely that I worked out how to evaluate on my own and it’s certainly the case that I wasn’t always able to do it. But I suspect it was a process that happened subtly, over the course of many interactions with many teachers and over a long time.  I’m also fairly certain I only started to learn how to do it during my undergraduate degree (I do remember slaving over an essay on Autism in my first year, which my early mentor Brown gave a First whilst damning it with faint praise as ‘a series of bright aperçus’; I thanked him later). Contrary to popular opinion, the A – Levels we did in those days did not make significant demands on critical thinking, and pretty good performance was guaranteed to anyone who could read a syllabus, was sufficiently skilled in memorising large chunks of material verbatim and could write quickly.

However, the specifications we teach now, and the exams for which we must prepare our students, make pretty stiff demands on students’ capacity to write critically in response to questions that are increasingly difficult to predict.  The new Edexcel specification (I can’t speak for the others) has upped the ante on this even further as their rules for the phrasing of questions limit their essay questions to a single command term (e.g. ‘Evaluate…’) even when students are expected to address several different assessment objectives in their responses.  In contrast to the questions they used to face (e.g. ‘Describe and evaluate…’), where it would always be possible for students to score marks by addressing the ‘knowledge and understanding’ element even if the critical thinking aspect was ropey, the new arrangements mean that students must address the main assessment objective all the way through their response at the same time as addressing a subsidiary assessment objective that is only implied by the question. Consequently, it is more important than ever to teach evaluative writing early in the course, and as quickly and thoroughly as we can.

But, as I said, I can’t remember learning to do it.  Furthermore, evaluative writing is, for me (and presumably for most other people who do it a lot), procedural knowledge, so what we are doing when we evaluate is not easily consciously inspected: we simply evaluate.  As a result, I have spent a fair bit of my career trying to teach something very important with neither a clear idea of what it consists of nor a principled understanding of how it develops.  In the absence of these things it is very difficult to communicate to students what the goal is or support them in moving towards it effectively.  The risk then is that ‘evaluation’ gets reduced to a set of theory-specific ‘points’ for students to learn more-or-less verbatim.  This is unsatisfactory because (1) it doesn’t equip them to meet the demands of the current assessment scene; and (2) we’re supposed to be teaching them to think, dammit.  However, this is what I have done in the past and I suspect I’m not alone.

I started making more progress a few years ago when I began to use the SOLO taxonomy (Biggs & Collis, 1982) and the Toulmin Model of Argumentation (Toulmin, 1958) as the basis for teaching evaluation. I won’t unpack these ideas here (although the SOLO taxonomy provokes lively debate so I might come back to it in a future post) but they lead to a model of evaluative writing in which the student needs to:

  • Identify the claims made by a theory;
  • Explain the reasons why each claim should be accepted or rejected;
  • Present evidence that supports or challenges the reasons;
  • Develop their argument, for example by assessing the validity of the evidence or by comparing with a competing theory.

This might sound obvious to you but it has really helped me think clearly about what students need to learn and what the barriers to learning it are likely to be.  The fundamental block is where a student has a naive personal epistemology in which they regard theories as incontrovertible statements of fact (see Hofer & Pintrich, 2002).  In that case evaluation can only be experienced as a mysterious and peculiar game (my own research on epistemic cognition suggests that this may frequently be the case).  We can start to address this by presenting psychological knowledge using a language of possibilities and uncertainty (this is particularly salient to me as I teach in a girls’ school; Belenky et al, 1986) and by continually returning to the idea that scientific theories are maps of the world and the map is not the territory (NB. this is a job for the long haul). Other barriers are where:

  1. The student cannot identify the specific claims made by a theory;
  2. The student cannot identify evidence that relates to these claims;
  3. The student cannot articulate reasons why the evidence supports or challenges the claims;
  4. The student cannot introduce principled judgements about the validity of the evidence.

Again, all this might seem obvious but where a student has difficulty writing good evaluation it gives a starting point for diagnosing the possible problem and therefore intervening successfully.  My own experience with Year 12 and 13 students (OK, not particularly scientific but it’s all I’ve got) suggests that the major sticking points are (1), because the theory itself has not been well understood, and (3), because the student needs to identify what the theory predicts and reconcile this with a distillation of what the evidence generally suggests; instead, they tend to jump from claim to evidence without explaining the connection between the two.

Inevitably, any class we teach is going to contain students whose capacities to think and write in these ways vary, often considerably.  We therefore might wish to differentiate activities whose aim is to develop evaluative writing.  One way of doing this is to break down evaluation of a particular theory into claims, reasons and evidence by preparing a set of cards.  Here is an example set for evaluating Atkinson and Shiffrin’s multi-store model of memory. All students are given an evaluative writing task, together with a subset of the cards to support them; which subset depends on the student’s current capacity (one way of organising this is sketched after the list):

  • Level 1 – students are given all the cards.  Their challenge is to match up the claims/reasons/evidence and use suitable connectives to turn them into well-articulated critical points.
  • Level 2 – students are given the claims and the reasons.  Their challenge is to identify suitable evidence (e.g. from prior learning) and include this in their evaluation.
  • Level 3 – students are given the claims and the evidence.  Their challenge is to explain the reasons why each claim should be accepted/rejected before introducing the evidence.
  • Level 4 – students are given the claims only.  Their challenge is to articulate suitable reasons for accepting/rejecting the claims and link these to suitable evidence (e.g. from prior learning).
  • Level 5 – students who show competence at level 4 are then invited to consider quality of evidence/competing theories.  Visible thinking routines like tug-of-war can be useful here (see Ritchhart et al, 2011).
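For anyone who finds it easier to see the scheme in one place, here is a minimal sketch in Python of how the card set and levels might be organised. The card text is invented for illustration (it is not a transcript of the linked card set), and the level-to-components mapping just restates the list above.

```python
# One critical point about the multi-store model, broken into its components.
# The card text is invented for illustration, not copied from the linked card set.
cards = [
    {
        "claim": "Short-term and long-term memory are separate stores.",
        "reason": "The claim gains support if one store can be damaged while the other still works.",
        "evidence": "Case studies such as HM show impaired long-term storage alongside a normal short-term span.",
    },
    # ...further claims about the model would follow the same three-part shape.
]

# Which components each level of the activity hands to the student (restating the list above).
LEVELS = {
    1: {"claim", "reason", "evidence"},   # match up and connect
    2: {"claim", "reason"},               # student supplies the evidence
    3: {"claim", "evidence"},             # student supplies the reason
    4: {"claim"},                         # student supplies reason and evidence
    5: {"claim"},                         # as level 4, then evaluate the evidence itself
}

def cards_for(level):
    """Return the subset of each card given to a student working at this level."""
    keep = LEVELS[level]
    return [{part: text for part, text in card.items() if part in keep} for card in cards]

print(cards_for(3))   # claims and evidence only; the student must articulate the reasons
```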

This general structure can be used for activities supporting the evidential evaluation of any psychological theory.  Intuitively, its success probably depends on the amount of practice students get with the format of the activity, and their sense of progress could depend on our pointing out how their performance has changed as they gain that practice.  It also depends crucially on students’ understanding of the roles of claims, reasons and evidence, which should not be taken for granted.  A common problem is where students believe that the reasons are reasons for making a claim (which leads to circular arguments), rather than reasons why the claim should be accepted as true or rejected as false.

As usual, no guarantees can be given about the effectiveness of this approach relative to the alternatives but it does seem to give focus to my feedback about quality of evaluative writing and it has helped shift our students’ extended responses in a direction more likely to appeal to Edexcel’s examiners.  If anyone has thoughts about the above, I’d love to hear them.

Belenky, M.F., Clinchy, B.M., Goldberger, N.R. & Tarule, J.M. (1986). Women’s ways of knowing: the development of self, voice and mind.  New York, NY: Basic Books.

Biggs, J.B. & Collis, K.F. (1982).  Evaluating the quality of learning: the SOLO taxonomy.  New York, NY: Academic Press.

Hofer, B.K. & Pintrich, P.R. (2002).  Personal epistemology: The psychology of beliefs about knowledge and knowing.  Mahwah, NJ: Lawrence Erlbaum Associates.

Ritchhart, R., Church, M. & Morrison, K. (2011).  Making thinking visible: How to promote engagement, understanding and independence for all learners. Hoboken, NJ: Jossey-Bass.

Toulmin, S.E. (1958). The uses of argument. Cambridge: Cambridge University Press.

Teaching synaptic transmission using ping pong balls

You will be clearing up ping-pong balls for days.

Many students seem to come to us with a block about bio-psychology. I’m not sure why this is but I suspect it’s got something to do with the English science curriculum, whose writers have apparently mistaken content load in the absence of conceptual coherence for academic rigour. But that’s an argument for another day and, probably, a different blog. The issue is that, faced with content-load problems of our own and with students’ objections to learning ‘all that science stuff’, it’s easy for us to retreat into a ‘here it is, you’ve just got to memorise it’ teaching style.

And it is quite easy to teach things like synaptic transmission this way – all we have to do is drill the students with a series of steps, probably with accompanying diagrams. Provided, that is, we’re content to settle for teaching for knowledge, as opposed to teaching for understanding. Now, there are arguments for taking that approach: it’s quick, and if the assessment for which we’re preparing the students is recall-based, it’s often good enough for the purpose of ‘getting the marks’ in an exam. However, if our values are oriented towards teaching as a way of changing students’ understanding of their world then it might strike you as unsatisfactory. And even if they’re not, recent changes to the A – Level psychology exams mean markedly increased demands on students’ capacity to think with content as opposed to just recalling it. There is strong justification, therefore, for teaching biopsychology in ways that move beyond the mere presentation of information.

Over the past few years I’ve made increasing use of physical models to teach biopsychology. They make biological processes concrete for the students, who may find it difficult to visualise events at the microscopic level, and they reduce working memory load because having manipulable objects at hand makes fewer demands than maintaining mental representations of multiple concepts, especially when these are newly acquired and not well integrated with long term memory (there is some debate about this, but see Pouw et al, 2014).

Here’s an approach to teaching synaptic transmission that can be extended to a range of related areas including the mode of action of drugs and biological explanations of mental disorders. I’ve used it with groups of up to 20 or so students; more than that and it would probably be better to divide the group. You’re going to need lots of ping-pong balls. I bought a box of 500 from eBay for about a tenner.

  1. Prepare by inviting students to read about the process of synaptic transmission. This could be set as a home learning task or done in class using a reciprocal teaching routine. There’s a reading on synaptic transmission here that you can use, or there are any number of web and textbook resources.
  2. Arrange your teaching space so there’s a large, clear floor area. Explain that we are going to deepen our understanding of synaptic transmission using the ping-pong balls. Tell the students that they should organise themselves to depict the process of synaptic transmission. The only rule is that each ping-pong ball represents one molecule of neurotransmitter.
  3. At this point, let the students sort themselves out and observe what they do. It is likely that they will arrive at an arrangement whereby one set of students is passing the balls to another set (or possibly throwing or rolling them). Whatever they do, it represents their shared conception of synaptic transmission, so it’s your starting point for developing that understanding further. Pause proceedings and ask named students to explain how the model represents synaptic transmission.
  4. What follows is an iterative process of identifying shortcomings of the students’ model and inviting them to correct them. The idea is that, with each iteration, the model becomes a more accurate representation and the students’ understanding correspondingly grows. For example:
  • ‘Vesicles can’t throw, and receptors can’t catch’ (addresses the misconception that neurotransmitter is ‘fired’ across the synaptic gap or aimed at the receptors, rather than crossing by diffusion, which is a probabilistic process);
  • ‘I can’t see what’s causing the vesicles to release neurotransmitter’ (prompts the students to connect a change in neuronal firing rate with the release of neurotransmitter);
  • ‘All these receptors now have a molecule of neurotransmitter activating them – what problem do we have now?’ (introduces the idea that the neurotransmitter needs to be removed or the receptors will be permanently stimulated);
  • ‘There are no more ping-pong balls left in the box and loads in the synaptic gap’ (can lead to the importance of neurotransmitter concentration, the reuptake mechanism and the possibility of neurotransmitter depletion).

And so on. How far you take this depends on your inclination and the time available. I’ve found it’s usually possible to generate a model that includes the pre- and post-synaptic firing rate, vesicles, diffusion/concentration, receptors, enzymes that break down the neurotransmitter and the reuptake mechanism.
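If it is useful to see the target model written out in another form, here is a deliberately crude simulation sketch in Python, with invented parameter values, of the picture the ping-pong activity is working towards: vesicles release transmitter into the gap, molecules bind to free receptors only with some probability (diffusion is not aimed), and unbound transmitter is cleared by reuptake. It is an illustration of the concept, not part of the classroom activity.

```python
import random

def simulate_synapse(firing_events=10, vesicle_release=20, receptors=15,
                     binding_probability=0.3, reuptake_fraction=0.5):
    """Crude sketch of synaptic transmission; all parameter values are invented.

    Each firing event releases transmitter into the gap; each free molecule
    then binds to a free receptor only with some probability (diffusion is not
    aimed at anything), and a fraction of what remains is cleared by reuptake.
    """
    in_gap = 0
    for _ in range(firing_events):
        in_gap += vesicle_release                        # vesicles release transmitter

        bound = 0
        for _ in range(in_gap):                          # each molecule may or may not find a receptor
            if bound < receptors and random.random() < binding_probability:
                bound += 1
        in_gap -= bound                                  # bound molecules briefly activate receptors

        in_gap = int(in_gap * (1 - reuptake_fraction))   # reuptake clears part of the gap
        print(f"receptors activated: {bound:2d}   transmitter left in gap: {in_gap}")

simulate_synapse()
```

Setting reuptake_fraction to zero, or reducing receptors, previews the drug-related extension questions discussed below.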

It is crucial that you keep questioning named students about the correspondences between different elements of the model and the process of synaptic transmission. The model doesn’t teach anything on its own; it’s a point of focus for the dialogue between you and the students.

A good way to finish the activity is with a free writing exercise in which students either describe the process of synaptic transmission from memory or write an explanation of how their model represents the process. Doing this from recall allows them to consolidate their understanding whilst giving you a chance to catch any remaining misconceptions.

Once you have established a viable model, you can use it in a number of ways. Simply recreating the model on a future occasion is a good revision activity, especially if done from free recall and if students are instructed to take on different roles from last time, so they have to co-construct their understanding again. You can use it to develop further understanding, e.g. ‘What you have modelled is an excitatory synapse – how would things be different in an inhibitory synapse?’ You can also use it as a basis for teaching related ideas such as the effects of drugs, e.g. ‘What would happen if we stopped the reuptake mechanism from working?’ or ‘What would happen if we blocked off these receptors?’

This approach is not without its risks. You may feel that you cannot rely on your students’ capacity to self-regulate during activities like this. That’s your call, but I would urge you to give it a go. You do need to be on the lookout for social loafing – some students may feel able to position themselves as an innocuous section of neural membrane and quietly opt out of proceedings. For this reason you need to keep up with the directed questions throughout. Finally, and most seriously, there is a possibility, when using vivid demonstrations, that what students will encode is that they did an unusual activity and it was fun, but not the actual conceptual content of the demo (see Willingham, 2010). For this reason I regard the preparation reading as crucial and would never attempt to use the above approach to teach synaptic transmission ab initio. It is also critical to keep prompting the students to re-encode what they are doing in terms of the target concepts and understanding, through questioning at the time and in the follow-up activity.

Of course, I cannot prove that doing it this way will result in better understanding and learning than the approach it replaces but there is good reason to believe that it might. And it’s a lot more fun.


Pouw, W.T.J.L., van Gog, T. & Paas, F. (2014).  An embedded and embodied cognition review of instructional manipulatives.  Educational Psychology Review, 26, 51-72.

Willingham, D.T. (2010).  Why don’t students like school?  A cognitive scientist answers questions about how the mind works and what it means for the classroom.  San Francisco: Jossey-Bass.

Reciprocal teaching for deeper understanding

Most of our students won’t go on to study psychology after their A – Level course, but one of the things we can teach them that will help them whatever they do next is to read effectively.  In the past it has surprised me how a student might be able to read a complex text very fluently, yet have very poor comprehension and retention of what they have just read. One way of increasing the effectiveness of students’ reading is to use the reciprocal teaching technique originally developed by Palincsar and Brown (1984).

In the form in which I use it, reciprocal teaching is done as a small group activity (4 per group is ideal).  The group is given a text and decides who will take the ‘teacher’ role first.  The routine is:

  • All group members silently read a section of the text.  While they are reading, the ‘teacher’ must think of a question about that section of the text.
  • Once all have finished reading, the ‘teacher’ asks their question of the group.  The group then discusses and agrees on an answer with the ‘teacher’.
  • When the ‘teacher’ is satisfied with the answer, they summarise that section of the text.  A new ‘teacher’ is chosen and the cycle begins again.

The power of reciprocal teaching comes when you establish it as a routine in your classroom, and practice is important.  What is crucial is the quality of the questions the ‘teacher’ asks.  They have to be genuine questions that require deep thinking, rather than ones that can be answered simply by pulling a word or phrase out of the text.  For this reason it is helpful to model the process of thinking up a question and discussing it with the class before handing over to the students.  I also find it useful to circulate and monitor the quality of the questions, intervening where necessary. With regular use, reciprocal teaching becomes just ‘our way of reading’.

Reciprocal teaching has been extensively tested and is associated with substantial learning gains.  Hattie (2008) reports an effect size of 0.74. Much of this research has focused on younger learners, many of them with poor reading relative to their age, so it is sometimes overlooked that it also has a positive effect on apparently proficient learners of high school age (Alfassi, 2004).

One problem I have encountered, however, is finding suitable texts for my students to use with the technique. Research papers and more advanced undergraduate textbooks are potentially too difficult for students to access, especially if they are new to psychology.  However, the typical A – Level textbook is laden with intrusive ‘pedagogical features’ that deprive the students of opportunities to ask each other good questions and generally ‘grapple with the text’ in productive ways.  Consequently I have ended up preparing a number of my own.  These are written to be ‘just difficult enough’ but kept short enough for students to process them in a lesson phase lasting 15 – 20 minutes or so.  I’ll be tagging these as ‘reading’ when I post them up.

Alfassi, M. (2004).  Reading to learn: Effects of combined strategy instruction on high school students.  Journal of Educational Research, 97 (4), 171-185.

Hattie, J. (2008).  Visible learning: A synthesis of over 800 meta-analyses relating to achievement.  London: Routledge.

Palincsar, A.S. & Brown, A.L. (1984).  Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities.  Cognition and Instruction, 1 (2), 117-175.

A visible thinking routine (VTR) for comparisons

Two very powerful techniques for provoking learning are getting students to make comparisons and using graphic organisers (see Marzano et al, 2001 pp. 14-20).  The process of making comparisons helps with the acquisition and refinement of concepts because it stimulates students to think and rethink the boundary between this and not-this.  It also requires some fairly deep processing of the sort that seems likely to support long term retention of ideas.  The use of graphic organisers helps to facilitate understanding by making it easier for learners to discern the relationships between ideas.  There is consequently strong justification for using comparison tables a lot, and I do.

A standard comparison table with predefined criteria.

Typically, I use a table organised around criteria given by me.  This works well enough but it’s also somewhat unsatisfying precisely because I give the criteria. Marzano suggests this is preferable where convergent thinking is required because the students are unlikely to come up with suitable criteria on their own.  Point taken, but all the same, what we’re presumably shooting for here is students who are capable of defining their own comparison criteria.

I have been developing a visible thinking routine (see Ritchhart et al, 2011) that shows some promise in this direction.

  1. In groups of three or four, students start by generating as many facts as they can about each of the things they are comparing.  Each fact is written on a separate sticky note.
  2. They are then invited to look for correspondences between the facts about each by lining the sticky notes up with each other.  It is best to model this with some sticky notes of your own on the board or under a visualiser, and think aloud whilst doing it e.g. “OK, MRI measures energy from water molecules and PET measures energy from a radiotracer…they sort of go together, so I’m going to line these up together…”
  3. Once students have identified some comparisons they can be encouraged to add more facts to match up with any ‘stray’ sticky notes that don’t currently have a corresponding fact about the other item being compared.

The students have by this point constructed a skeleton comparison table.  The next step is to encourage and support them in distilling and naming their comparison criteria so they can make their comparisons explicit. Depending on the students, they might need more or less scaffolding to do this.
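As an aside for anyone who likes to see the structure in code, the same move (pairing facts and then naming the criterion each pair shares) can be sketched in a few lines of Python. The MRI/PET facts below are placeholders, and in the classroom the criterion names emerge last rather than being given up front as they are here.

```python
# Placeholder facts, tagged with the criterion the students eventually name.
# In the classroom the criterion names emerge last; they are shown up front here for brevity.
facts = [
    ("MRI", "energy source",  "measures energy released by water molecules"),
    ("PET", "energy source",  "measures energy released by a radiotracer"),
    ("MRI", "what is shown",  "shows brain structure"),
    ("PET", "what is shown",  "shows brain activity"),
]

# Assemble the skeleton comparison table: criterion -> {comparand: fact}.
table = {}
for comparand, criterion, fact in facts:
    table.setdefault(criterion, {})[comparand] = fact

for criterion, row in table.items():
    print(f"{criterion:15} | MRI: {row.get('MRI', '?'):45} | PET: {row.get('PET', '?')}")
```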

The comparison VTR in progress. Students have started identifying correspondences.

Final steps could be:

  • Recording their table, either drawing it up or photographing it;
  • Working as a whole class to draw up a ‘master’ comparison table based on small-group contributions;
  • Translating their table into well-formed written comparisons (stems can be helpful here).

This process has two significant virtues in that (1) the students do more of the thinking, since they go all or at least some of the way to working out their own comparison criteria, and (2) it makes their thinking processes visible to you – so you can intervene helpfully – and to them, which supports metacognition.

I’ve used this approach several times and am satisfied that it results in comparisons that are as good as those that emerge from a pre-prepared table (although it does take a bit longer).  I am unable to say whether it has any significant impact on my students’ more general capacity to think in comparative ways, although it has intuitive appeal. The important point here is that a visible thinking routine needs to become, well, a routine.  I have not yet used this approach consistently enough across a sufficient range of contexts for my students to incorporate it into their everyday thinking repertoire and thereby to permit spontaneous use and generalisation.  My next step, therefore, is to review my schemes of learning for next year and see where the opportunities for this might be.

Marzano, R.J., Pickering, D.J. & Pollock, J.E. (2001).  Classroom instruction that works: Research based strategies for increasing student achievement.  Alexandria, VA: ASCD.

Ritchhart, R., Church, M. & Morrison, K. (2011).  Making thinking visible: How to promote engagement, understanding and independence for all learners.  Hoboken, NJ: Jossey-Bass.

Using a timeline to teach synoptic thinking in psychology

A timeline of concepts/theories/observations/applications/innovations in psychology.

We recently moved to a linear/terminal assessment curriculum with a heavy synoptic element (Edexcel A – Level; other brands are available).  As we experimented with different ways of teaching the synoptic elements it repeatedly struck me how much the last seventeen years of modularisation have militated against the kind of ‘joined up’ thinking that is characteristic of good quality learning in Psychology (and, presumably, other subjects/disciplines).

One manifestation of this is that many of our students lack a coherent sense of how the curriculum fits together chronologically.  I initially wanted to address this so our students were prepared for questions about ‘how psychological understanding has developed over time’ (Edexcel 2015 specification), but as soon as the students got stuck in, it became clear that the activity that emerged could potentially provoke a much deeper understanding of psychological ideas in context and the relationships between them.

On a long stretch of noticeboard I made six parallel timelines, each labelled with one of the topics/applications on the specification: cognitive psychology; social psychology; bio-psychology; learning theory; clinical psychology and criminological psychology.  The timeline was marked from 1850 to 2025.  Before the lesson I ‘seeded’ the timeline with a few significant events/innovations (e.g. Darwin publishes ‘Origin of Species’; Wundt opens the first psychology laboratory; Ramon y Cajal presents the neural theory etc.)

In the first phase of the lesson I simply invited the students to populate the timeline with as many concepts/theories/studies/applications/observations as they could.  For this, they worked in small groups, one to a topic.  I encouraged them to work from recall as far as possible but to supplement their memory with material from their notes and to look up additional things they decided were important.

In the second phase, once the timeline had a decent amount of material on it, each group was given a ball of wool, a different colour for each topic.  They were then invited to start connecting the concepts/theories/studies/applications/observations to show how earlier ideas influenced later ones.

As the students started linking things together, observations, insights and questions emerged quite spontaneously.  It was consequently very easy to sustain a productive discussion using a handful of prompt questions like:

  • What do you notice?
  • Can you see any patterns?
  • Why do you think…?

From this discussion the whole group then distilled some important influences on the development of psychological knowledge and understanding (e.g. the use of the scientific method, refinements in methodology, the interplay between ‘pure’ and ‘applied’ areas, new technologies, and social change).  In the following lesson the students developed their writing by selecting a few ‘strands’ from the timeline and using them as the basis for an essay.

I believe that this provided an opportunity for students to integrate the things they already knew into a more coherent and stable conceptual structure.  The students reported that it was engaging and thought provoking.  Time (and the examiner, of course) will be the judge of whether it really did have the intended effect.  But I was sufficiently impressed to regret not having thought of this before.  My intention for the coming academic year is to start the timeline very early on and add to it gradually over the year.  My hope is that I can thereby build that ‘joined up’ thinking right through the course.