Communicating research to support the evolution of teaching




Learnus Annual Lecture

Generously Supported by The Borrows Charitable Trust


"Supporting Struggling Learners - beyond the label - understanding why some kids struggle at school"

Church House, Westminster

19th June 2019


Report by Richard Newton-Chance


Duncan Astle gave this year’s annual lecture. He is a Programme Leader at the Medical Research Council’s Cognition and Brain Sciences Unit at the University of Cambridge, where the CALM team is based. He is also a Fellow of Robinson College, Cambridge.


Summary of the presentation

In the recent past, educators desperate for silver bullets have leapt on techniques based on neuromyths about the brain. These techniques, often expensive to implement, have been based on dubious evidence – with little real or reliable scientific data supporting them. 

Even well-established research techniques are liable to throw up unreliable results, because random noise in testing can sometimes create a statistical impression of improvement. For example, if a group of children defined by having a deficit in a particular ability (such as reading) is retested a few weeks later, their scores can appear higher simply because their initially low scores were partly a matter of chance – a phenomenon called ‘regression to the mean’.
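As a purely illustrative aside (not from the lecture), a few lines of Python make the effect easy to see: if a ‘deficit’ group is selected on the basis of one noisy test, its average score rises on a retest even though nothing about the children has changed. All values here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

n_children = 10_000
true_ability = rng.normal(100, 15, n_children)   # stable underlying ability
noise_sd = 10                                     # test-to-test measurement noise

# Two test sessions a few weeks apart; only the noise differs between them.
test_1 = true_ability + rng.normal(0, noise_sd, n_children)
test_2 = true_ability + rng.normal(0, noise_sd, n_children)

# Define the "deficit" group using the first test only (e.g. bottom 10%).
deficit = test_1 < np.percentile(test_1, 10)

print(f"Selected group, test 1 mean: {test_1[deficit].mean():.1f}")
print(f"Selected group, test 2 mean: {test_2[deficit].mean():.1f}")
# The second mean is reliably higher, although no child improved:
# the group was partly selected for unlucky noise on the first test.
```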

The CALM team (Centre for Attention, Learning and Memory) have therefore been searching for a new methodology for analysing the data they have collected.

The project has involved testing a large group of children identified by their schools as struggling learners. Ignoring any labels that have been attached to the children, the team gives each one a detailed battery of cognitive assessments, behavioural surveys, MRI scans and saliva sampling (currently unused). So far, the team has tested 805 children, mainly of late primary age, most of them referred by SEND specialists and presenting with a wide variety of diagnoses. This has generated an enormous amount of data.

The unique thing about the project is the analytical method applied to the data to identify whether the children fall into groups. This involved machine learning – roughly the same kind of technique that allows the Internet to target you with adverts of specific interest. In other words, algorithms were used to find patterns linking the measures and the individuals, and the resulting groupings were then examined for statistical significance. This is a cutting-edge technique and potentially much more reliable for drawing valid conclusions from large data sets without diagnostic preconceptions.
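The lecture did not go into implementation details, but a minimal sketch of the general idea – letting an unsupervised algorithm group children by their standardised cognitive scores, without using their diagnostic labels – might look something like the following. The data here are random, and k-means merely stands in for whatever algorithm the CALM team actually used.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical data: one row per child, one column per cognitive assessment
# (e.g. phonological processing, working memory, spatial skills, attention).
# In the real study these would be the CALM battery scores; here they are random.
n_children, n_measures = 805, 8
scores = rng.normal(100, 15, (n_children, n_measures))

# Standardise so every assessment contributes on the same scale, then let an
# unsupervised algorithm look for groupings with no diagnostic labels involved.
z_scores = StandardScaler().fit_transform(scores)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z_scores)

# Each child now has a data-driven group (0-3) that can afterwards be compared
# with held-out measures (reading, spelling, maths) and with the original
# referral reasons or diagnoses.
print(np.bincount(clusters))
```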

What the CALM team found was that there was little reliable connection between the groupings identified by the algorithm and the original diagnoses that the children had been given.  The implication is that the referral reason or diagnosis is not always a good predictor of a child’s cognitive difficulties.

The data instead indicated that the children mainly clustered into four groups: (1) those who struggled across the range of abilities, (2) those who were not struggling at all, but were perhaps labelled as struggling learners because they were disruptive in the classroom, (3) those who struggled with phonological coding, and (4) those with poor working memory and spatial skills.

When the team then looked at how the groups performed on reading, spelling and maths tests – which had been held back from the algorithm – they found the first group fell in the bottom 5% across all scores, the second group showed age-appropriate performance, while the third and fourth groups fell in the bottom 15%. Those low across all scores, and those with phonological coding problems, had independently been identified by their parents as having communication difficulties, while the other two groups showed none.

Finally, the team looked at how the structural characteristics of the participants’ brains (half of whom underwent MRI scans) related to their cognitive profiles. They have so far concluded that there is no evidence to support the idea of ‘holes in the brain’ – that is, differences in specific locations which might be causing difficulties in particular skills. The team were able to use three physical characteristics of the cortex (the external surface of the brain) – grey matter thickness, curvature of the folds (gyrification) and depth of the folds (sulcal depth) – to predict the cognitive profiles of the four groups. However, the prediction drew on small pieces of structural information from right across the brain, again arguing against very specific brain deficits being the cause of children struggling to learn in the classroom. The team’s latest, cutting-edge analyses investigate how different parts of the brain communicate with each other, using so-called ‘network theory’. These analyses suggest that something about the quality or efficiency of signalling between different brain regions contributes to learning problems.
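Again as a rough, hypothetical sketch rather than the team’s actual pipeline, the ‘distributed rather than localised’ point can be illustrated by pooling many weak structural predictors from across the cortex to predict a cognitive outcome. The numbers below are simulated; the feature layout (three measures per region) simply mirrors the description above.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical data: for each child, three structural measures (thickness,
# gyrification, sulcal depth) at each of many cortical regions, flattened
# into one long feature vector. Values are simulated, not CALM data.
n_children, n_regions = 400, 360
features = rng.normal(size=(n_children, n_regions * 3))

# Simulate a cognitive score that depends weakly on *many* regions at once,
# rather than strongly on a single "hole in the brain".
weights = rng.normal(scale=0.05, size=features.shape[1])
cognitive_score = features @ weights + rng.normal(scale=1.0, size=n_children)

# Ridge regression pools small contributions from every region; the
# cross-validated score shows the distributed signal is predictive even
# though no single region carries much information on its own.
model = RidgeCV(alphas=np.logspace(-2, 3, 20))
r2 = cross_val_score(model, features, cognitive_score, cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {r2.mean():.2f}")
```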

From this research, it might be possible to develop new diagnostic tests for cognitive difficulties to complement the problematic diagnostic labels (e.g., dyslexia, ADHD, ASD) currently in use. The next step is to evaluate the diagnostic utility of the neuroscience measures – whether they can be turned into cheap, practical tests that help teachers spot children at risk of developing certain profiles – and to establish how these measures might inform different interventions to support children with each of the four new profiles, while recognising that no two children are ever the same.

Richard Newton-Chance