Newly published evidence suggests that the simulation-based learning interventions on which trainee healthcare professionals often rely to develop important non-technical skills are frequently inadequately evaluated before being put to use.
The research, by a team at the University of Cambridge, argues that there is an urgent need to improve the evaluative framework underpinning the use of simulation-based learning to develop the so-called ‘soft’ skills of student doctors, nurses, dentists and other professionals.
At present, the findings suggest, many evaluations fail to test how effectively simulations support essential competencies that are clearly mandated in medical and healthcare curricula, and in policy. The study found glaring omissions in the evaluation of the simulations’ impact on professionals’ evidence-based decision-making, and on their sensitivity towards equality and diversity issues with patients and colleagues.
Non-technical skills – such as communication, decision-making, and teamwork – are increasingly important in modern healthcare systems. In many cases, trainees and students develop, learn and practise them using educational tools and methods which simulate real-life professional practice. For example, role-playing is often used to prepare them to deal with frightened or aggressive patients; while technology-based and mixed reality simulations are increasingly employed to test how they would act or instruct others in a high-pressure situation.
New simulation-based methods are continually being developed for this purpose and are typically evaluated through academic trials. Concerns have been raised, however, about how thoroughly these trials assess the methods’ potential.
To investigate this further, the Cambridge team analysed some of the most recent such evaluations, focusing on those published between 2018 and 2020. They found that these employ a sprawling range of assessment instruments and methods, which often bear little resemblance to each other. The study also identifies ‘significant gaps’ in the extent to which they test key non-technical aptitudes in healthcare.
Riikka Hofmann, an Associate Professor at the University of Cambridge’s Faculty of Education, said: “Fundamentally, these evaluations don’t always define exactly what learning outcomes they expect to see. The studies may be technically well-designed, but because goals like ‘communication’ are quite vague, they aren’t always clear about what it is they are trying to measure.”
“We were pretty astonished that some really important learning outcomes, such as how much a professional considers inclusivity when talking to colleagues and patients, were simply overlooked as a result. This is not the fault of the professionals themselves; the problem is gaps in the research testing the relevant simulations.”
The analysis focused on three broad non-technical skill areas: interprofessional teamwork, communication, and decision-making. Hofmann and colleagues identified 72 relevant, peer-reviewed assessments of simulation-based tools which targeted these skills from within their two-year timeframe. In each case, they examined the learning outcomes the research was trying to measure, and the assessment instruments used for this purpose.
Separately, the team also analysed the non-technical learning objectives of different healthcare curricula. These included the goals set by the General Medical Council for newly-qualified doctors; an ‘indicative’ undergraduate curriculum also endorsed by the GMC; and a third, postgraduate, curriculum set by the Royal College of Anaesthetists. They then examined how far these goals matched those identified in the research trials.
The findings point to major inconsistencies in how research measures the potential of simulation-based tools to develop practitioners’ non-technical competencies. Just 31 of the 72 studies, for example, used a named and validated instrument for measuring soft skills; the remainder often relied on assessment devices that the researchers had developed themselves. Of the 31 validated instruments, 27 appeared only once in the results – meaning that most of the research trials had used a different system of measurement from the others.
Closer analysis revealed that the skills being measured were also often quite different. Within the official curricula, the team found that each broad learning goal breaks down into different ‘sub-competencies’. For example, ‘Communication’ requires learners to master communicating through different means; communicating with different types of patient and colleague; relationship-building; and conflict-resolution skills – among others.
Not all of these more granular objectives were consistently covered in the evaluative research, however. While some were referenced quite frequently, others were barely mentioned, or not mentioned at all.
Two omissions were particularly striking. Almost no attention was paid to whether new simulation-based learning tools trained professionals to use evidence in their reasoning. And none of the research examined how far these tools prepared them to lead diverse teams, or to work sensitively with different patients: for example, those who speak another language or have additional needs.
The analysis suggests that a stronger conceptual model, one that closely and clearly defines the ‘non-technical skills’ that simulation-based learning is actually meant to develop, is badly needed at the evaluation stage.
As the study itself demonstrates, such a model can realistically be compiled by drawing on the various criteria set by curricula, where ‘a rounded and sound conceptual framework’ appears to exist. That, in turn, would enable the more consistent use of validated instruments to measure new simulation-based learning tools, creating a more comparable evidence base for future research.
The study also notes that simulation-based learning is being used more and more widely in other disciplines where soft skills are similarly in demand. “Although research in this area has increased in many fields, it remains patchy,” Hofmann said. “One of the big implications of this study is that within, but especially across, education and research, we need to make sure we are using a shared language about the skills we want learners to develop.”
The findings are published in Studies in Educational Evaluation. This article was republished with permission from The Faculty of Education News.