I first came across human factors in 2013, when I did some training at the simulation centre at Aintree Hospital, and was gripped from the start. I was fascinated to learn of the scale of error in healthcare and the opportunities for improvement and learning. I’ve continued to study human factors ever since, completing an apprenticeship in non-technical skills with the simulation centre in 2016 and gathering titbits of knowledge and examples of how we can apply human factors in healthcare. I really enjoy delivering the human factors programmes at Aqua, hearing how people are using human factors in their own work and seeing that lightbulb moment I had all those years ago.
My first experience of human factors was very much focused on the important goal of safety – identifying why errors happen and making effective changes to prevent them recurring, avoiding blame and looking for root causes within a wider system. A lot of the time this focuses on the more obvious or catastrophic errors, like wrong-site surgery. That focus is useful both for making major strides in reducing harm and for raising awareness of human factors with examples that capture people’s emotions and motivate them to get involved.
The bigger opportunity we have is to demonstrate that the application of human factors in healthcare is far wider than preventing never events. The International Ergonomics Association – which brings together human factors specialists from across the globe – defines the aim of human factors as optimising human performance and wellbeing. When we start to think about those two joint objectives, we realise that we can apply human factors to anything we do. This multidisciplinary science, drawing from biology, psychology, medicine and engineering (amongst many others), gives us a lens through which we can view any improvement challenge. The six elements of quality healthcare defined by the IHI – safe, timely, effective, efficient, equitable and personalised – can all be influenced by efforts to understand and shape the behaviour of the people in the system.
Aqua’s Applied Human Factors programme is a good example of where this broad scope gets utilised really well. We have teams coming to us with safety challenges – improving communication in theatres, reducing manual handling or needlestick injuries, improving situational awareness of deteriorating patients. They are complemented by the teams that bring other quality challenges – reducing stress for teams under pressure, improving teamwork between multidisciplinary groups, designing new patient pathways or reinvigorating old ones to improve flow and patient experience.
A year ago I began to study human factors more seriously – working towards a Master’s in Human Factors at Loughborough University. Although I began my postgraduate study with a strong view that human factors is about more than safety, that it is about overall quality and performance, I did think it was primarily about humans. I thought it was about how people behave depending on their individual circumstances and the physical, organisational and social environment around them. It is about all those things, but one of the big step changes in my thinking is that human factors is almost less about the humans themselves and more about the systems they work in and the interactions between the humans within them. It is important to appreciate the cognitive, physical and environmental impacts on people, but looking at any of them in isolation doesn’t get you anywhere near as far as looking at them all together.
Luckily, there are some good mapping tools available to support this system thinking and help us gain some order over all the different interactions we need to consider. A couple that I particularly like are accimaps and FRAM.
I like the simplicity of accimaps as a visual representation of what is going on in a system and how everything interacts to influence behaviour. We start by identifying specific actions, usually problematic or erroneous ones. Then, almost like the five whys process of quality improvement, we ask why those actions occurred: what was going on for that person at that time that influenced the action? What was it about the processes in place, or the organisational culture? The idea of an accimap is to map the causal and contributory factors behind an accident so that the systemic issues can be addressed, preventing not just the error under investigation but potentially many others too.
The process of putting an accimap together takes time; there are often multiple issues and multiple actions to investigate, and mapping these to their causal factors is a somewhat iterative process. That is part of its charm for me, because you have to stop and consider each step a few times. Plus, the format of an accimap is loose enough that you can mould it, to some extent, to fit your needs. As an advocate of appreciative approaches, I’d like to think there is scope for mapping the protective factors as well as the problems. Taking a Safety-II approach, you could start with an incident and, for each causal factor, ask yourself what it was that meant that issue didn’t arise more often. Or perhaps we could use an accimap to learn from excellence, looking beyond exceptional behaviour to all the preconditions that enabled (or necessitated) that performance.
Another tool that helps us map our system, and supports a Safety-II approach, is FRAM: the Functional Resonance Analysis Method. When I first learned of this method, its focus on identifying and understanding normal variation in a system really spoke to me as an improver. My understanding of FRAM is that it helps us to map where a system or process has variation, where or how that variation resonates to have an amplified impact on the wider system, and what controls are in place or needed to reduce that resonance.
FRAM is, on the face of it, more complicated and takes some getting your head around, especially the graphical maps it produces. But again, the iterative process of working through an analysis, repeatedly asking yourself where everything fits and how it influences everything else, is really productive in terms of the learning you get from it.
Although I previously recognised the role of systems thinking in human factors, my view was limited to linear models like the Swiss cheese model and the recognition that every defence has a weakness. Studying these mapping tools, which help to discourage that linear thinking, has really given me a new perspective. Just as Reason’s Swiss cheese model requires an understanding of human factors to appreciate what the vulnerabilities in each defence might be, these mapping tools also require a similar understanding of the humans in the system, but they provide a great structure to help us get a much better understanding of how all the interactions combine and intertwine.