
Risk-Taking in Diving: Is it Worth it?

Diving is a sport with an inherent risk of death or serious injury because of the aquatic environment in which it takes place. These risks are not limited to drowning or decompression sickness; they also include entanglement, injuries from marine flora and fauna, and physical trauma. On the positive side, there are massive benefits to be realised: exposure to amazing underwater creatures, from macro-level fauna to whale sharks and everything in between; wrecks full of interesting artifacts and life; cave systems with unique and marvellous geology; or simply the serenity of floating in the water, watching the world go by.

Contributed by Gareth Lock

Factfile

Gareth Lock is an accomplished technical diver based in the United Kingdom.

Recently retired from the Royal Air Force, he is now teaching human factors in the oil and gas sector.

Lock is also undertaking a part-time PhD examining the role of human factors in scuba diving incidents.

For more information, visit the Cognitas Incident Research website at: https://cognitasresearch.wordpress.com.

It is this balance of risks that is often hard to understand when something goes wrong and a diver is killed, injured or has a really “scary” moment. We often forget that there are ever-present, low-probability, high-consequence risks whenever we go diving. This article will explain some of the challenges divers face when they are asked to take “personal responsibility” for their diving, and why. As with many things involving human factors and decision-making, the simplistic approach does not always help divers make the best choices. The knowledge within this article will help divers make better decisions when it comes to risk management, or at least understand, when looking back with hindsight, why they made a bad decision.

What is risk?

This might appear to be an odd question, but there are many definitions of risk. Indeed, in 1983 the Royal Society published a paper, which took a number of years to write, attempting to define it. Their definition was: “the probability that a particular adverse event occurs during a stated period of time, or results from a particular challenge. As a probability in the sense of statistical theory, risk obeys all the formal laws of combining probabilities”. Unfortunately, there was much disagreement within the academic and practitioner communities about how to apply this, especially given the conflict between objective risk and perceived risk. The revised 1992 paper did not close the gap, and John Adams, in his book Risk, highlights the point quite clearly when talking about ice on the pavement:

“Slipping and falling on the ice, for example, is a game for young children, but a potentially fatal accident for an old person. And the probability of such an event is influenced both by a person's perception of the probability, and by whether they see it as fun or dangerous. For example, because old people see the risk of slipping on an icy road to be high, they take avoiding action, thereby reducing the probability. Young people slipping and sliding on the ice, and old people striving to avoid doing the same, belong to separate and distinct cultures. They construct reality out of their experience of it. They see the world differently and behave differently; they tend to associate with kindred spirits, who reinforce their distinctive perspectives on reality in general and risk in particular.”

At a very simplistic level, risk can be defined as “something with a level of uncertainty (that) happens at some point in time, with an outcome”. This is not too helpful when it comes to day-to-day risk management, but often risk management is rather vague at the personal or small group level, and in many cases, takes place subconsciously. These subconscious processes will be looked at below.

Types of risk

Risks do vary, and we cannot use certain metrics from one domain in another. Bassam Salem in his article, “All risk is not created equal,” describes three key types of risk which we encounter during our lives, one of which is particularly relevant to diving.

Calculated risk. Calculated risk is where we weigh the positives against the negatives, assuming the benefit will be realised and the likelihood of a negative outcome is small. Simple examples could be buying a car in the hope that it will be reliable and safe, or, in aviation, trusting that the aircraft will not crash, or, in diving, that the equipment will not fail. (A toy calculation following these three types shows what this weighing looks like when made explicit.)

Mandatory risk. This is where we are willing to accept an ever-present risk in order to achieve a successful outcome. Firefighters and police face this type of risk all the time. Those undertaking an emergency diving rescue are also in this situation: the longer the hazard is present, the less chance the victim will survive, and therefore certain mental and physical shortcuts will be taken.

Luxury risk. This kind of risk is purely about self-satisfaction and does not need to be faced. Scuba diving, especially those types of diving with greater exposure to risk, such as cave diving or rebreather diving, is a perfect example. The easiest way to mitigate the risk is not to get in the water, but that would deny us the benefits, the value of which is very much personal and subjective. Bassam explains: “Unlike a calculated risk that we’re taking in the hopes of a clear upside, this sort of risk typically has little quantifiable upside and only subjective, lifestyle-related benefits. The downside, on the other hand, may be quite quantifiable.”
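To make the idea of “weighing up” a calculated risk concrete, here is a minimal sketch, in Python, of an expected-value comparison. Every number in it is hypothetical and chosen only to illustrate the mechanics; it is not a claim about actual diving incident rates or about how any individual values a dive.

```python
# Toy expected-value sketch for a "calculated risk" decision.
# All numbers are hypothetical; they only illustrate the mechanics of
# weighing a likely benefit against an unlikely but costly downside.

def expected_value(p_bad: float, benefit: float, cost_of_bad: float) -> float:
    """Benefit realised if nothing goes wrong, minus the probability-weighted cost."""
    return (1 - p_bad) * benefit - p_bad * cost_of_bad

# Hypothetical dive: high chance of an enjoyable dive, small chance of a
# serious problem with a very large (subjective) cost.
p_incident = 0.001   # assumed 1-in-1,000 chance of a serious incident
benefit = 100        # subjective "value" of a good dive (arbitrary units)
cost = 50_000        # subjective "cost" of a serious incident (arbitrary units)

ev = expected_value(p_incident, benefit, cost)
print(f"Expected value of the dive: {ev:.1f}")
# A positive number suggests the weighed-up benefit exceeds the weighted cost,
# but the answer is only as good as the (largely unknowable) inputs.
```

The point of the sketch is not the arithmetic but the inputs: for a luxury risk, the probability and the subjective “benefit” and “cost” are precisely the quantities that are hardest to pin down.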

So, if the negative side is an incident or accident, how do you determine the likelihood of such an event occurring if there are so many variables?

How we make decisions

At the most basic level, we use biologically developed processes, such as the fight/flight/freeze response, or we draw on learned experiences. These learned experiences can come from direct experience of a situation, followed through to an outcome and subsequently stored in long-term memory, or from indirect means such as storytelling or processing training and learning materials, which likewise inform our long-term memory stores.

In the 1970s, two researchers, Kahneman and Tversky, conducted significant research into decision-making and came up with a two-part model of how we make decisions, which they named System 1 and System 2. System 1 is the fast, intuitive part of decision-making, much of which takes place subconsciously and is based on the mental shortcuts we have developed over time; System 2 is where we break the task down into logical steps and process each one more slowly, with conscious thought and effort.

System 1 is great because it allows us to operate at high speed, without thinking about the activity or the decisions we are making. However, there are times when this fast process should be slowed down by engaging System 2. These include times when we encounter a situation that is novel (to us), or when there is no “undo” option. The difficulty is that we often do not recognise these decision points until it is too late and something bad happens, and then we look back and say, “If only we’d spotted that…”

Over time, System 2 practices can move into System 1 through deliberate practice and feedback to determine what works and what does not. System 1 operations are informed by developing mental shortcuts, and it is these mental shortcuts, or biases and heuristics, that allow us to operate at the speed we do.

Biases

Unfortunately, some of these biases have a direct impact on our decision-making processes when we encounter risky circumstances. These include recall bias, outcome bias, availability bias and severity bias. We also need to recognise that the encoding and recall of memories is influenced by emotional significance. This means that those events that have a high level of emotion associated with them—e.g. fear, happiness, joy, sadness—can be recalled more easily due to the availability bias.

An understanding of these biases is important because they all influence how we interpret the risks we face and how likely we believe it is that a risk will reach its full potential. These biases can be summarised by the four-part model from Buster Benson that condenses the 200+ biases he identified (see Figure 1).

 

(Image: Decision-Making Model)

Decision-making model

The biases and heuristics we use to make rapid decisions are not the whole story. Gary Klein and his associates have spent decades researching this subject and have produced a simplified model of how we make decisions, which looks at the models we create based on our previous experiences. This is known as the recognition-primed decision-making model (shown in Figure 2).

Klein understood that whilst the research community could isolate the biases identified above in experimental situations, the real world is far messier and more complex, and the interactions that take place between people and systems are not as simple as had been made out. This is especially the case in time-limited and ambiguous circumstances.

To this end, his research focused on those who operate in high-stress, high-stakes, short-time domains, such as firefighters, paramedics and military commanders, to determine how they made their decisions. Contrary to what many had thought at the time, decisions were not made by looking at all the options, lining them up and then logically deducing the best choice, but rather by referring to mental models that had been developed over time, selecting the most relevant pieces of information from the incoming stream, and deciding on that basis.

What he found was that experts made better and quicker decisions than novices for two reasons: firstly, they had more mental models to refer to, and secondly, they were able to isolate the relevant pieces of the puzzle faster than novices, who waited for more information to come in. If we do not have a relevant mental model, we cannot run an “accurate” simulation of what will or might happen, and so our decisions about risk can be flawed. Or, more correctly, they are correct for the mental model we held at the time, because we had no prior experience to refer to, or we were unable to pull the memories out of our long-term store because they did not appear to be relevant at the time.

Personally responsible

So, how does this inform how one becomes “personally responsible” for the risks one takes? To start with, by understanding that we are subject to considerable biases when we make decisions in diving (and in life in general). Consequently, we can start to look out for them, and for the points in time or space at which they might be critical. A couple of examples are given below.

• If you have encountered a similar situation before and nothing went wrong, despite there being a significant likelihood that this time it could end up really badly, then maybe pause and think, “What should I be looking for to give me a clue that the dive is going, or will go, pear-shaped?” Just because it went okay the last time does not mean it will this time round (the short probability sketch after this list shows why).

• If you see a situation that looks like something you have encountered previously, you will likely take a mental shortcut and make the same decision, even if you do not have all the facts. An example of this could be visiting a dive site where, the previous time you were there, the visibility from the surface looked good and extended all the way to the bottom. When you look now, you assume that because the surface visibility looks good, it will be good at the bottom too.
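To see why a run of uneventful dives is weak evidence that the risk is low, here is a minimal sketch that assumes a purely illustrative per-dive probability of a serious problem and shows how that probability accumulates over repeated, independent dives.

```python
# How a small per-dive probability accumulates over repeated dives.
# The per-dive probability below is assumed for illustration only.

p_per_dive = 0.002   # hypothetical 1-in-500 chance of a serious problem on any one dive

for n_dives in (1, 10, 50, 100, 500):
    # Probability of at least one serious problem across n independent dives
    p_at_least_one = 1 - (1 - p_per_dive) ** n_dives
    print(f"{n_dives:>3} dives: {p_at_least_one:.1%} chance of at least one serious problem")
```

A string of dives where nothing went wrong therefore says little about the underlying risk; it mainly feeds the outcome and availability biases described earlier.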

Developing mental models

The only way to address risk management in a dynamic environment is to develop your mental models. You can do this by talking about the dive afterwards and why you made certain decisions, looking for the cues or clues that made it make sense to you at the time, or by looking at resources such as the Divers Alert Network or British Sub Aqua Club annual reports and trying to make sense of why divers did what they did. Whilst the reports often lack detail, they are the most prevalent data sets out there, and at some point, those who appear in them must have decided that the benefit of the dive—whatever form that took—outweighed the perceived risk of a negative outcome.

In summary, to make better decisions involving risk, we need to reflect on the activities and the sense-making of those involved, which includes ourselves. By doing so, we can build new models, or reinforce or correct our existing ones, which will allow us to run more accurate mental simulations. These simulations are the “what-ifs” that we often hear about in diving safety literature. The problem is, if you have no idea what might happen, how do you run a “what-if”? And even if you do think it might happen, how do you determine what an acceptable level of risk (likelihood) is, when risk acceptance is determined at a personal level? Ultimately, the choice is yours, even though those choices may be informed by your subconscious! ■
