
Conor Crowley


Conor Crowley is the consultancy director of process safety for Atkins’ Oil and Gas business. He currently leads Atkins’ Process Safety Team based in Aberdeen and is a fellow of the Institution of Chemical Engineers.


What’s 12 times 7? Chances are you don’t need to work too hard to answer that one. Simple multiplication was drilled into us as children. It’s likely that you got the answer quicker than you thought you might. It’s just there. But if I was to ask you a harder sum while we were walking along the street, chances are you would stop while your brain worked on the problem.

These examples are simple illustrations of the different ways we tackle problems, and of what Daniel Kahneman refers to as ‘system 1’ and ‘system 2’ thinking.

In system 1, we rely on rules of thumb, experience, and our drilled knowledge to get an answer quickly. In system 2, we have to think, to go back to fundamentals, to work out an answer. System 1 is where we spend a lot of our time. It’s how we manage to drive complex machines: we start out thinking about every move and every action, and with practice it all becomes natural, easy.

However, while we often rely on ‘rules of thumb’ or heuristics in our engineering lives, we have to be careful, because relying on this approach brings a risk: the risk of introducing biases. These biases are many and varied, but here are some that are pertinent to our engineering work.

Action biases

A number of these come down to our overconfidence when things seem to go right. People will often underestimate how long it can take to complete tasks, and therefore don’t add contingency to deal with problems when they crop up. Under pressure, ‘band-aid’ solutions seem attractive, but they end up dealing only with the symptoms, not the underlying problems.

As process engineers, we often pride ourselves on being ‘expert generalists’. It’s easy to forget that this is not a universal skill.

Perceiving and judging alternatives

It is surprisingly difficult to recognise we are wrong, once we’ve made a decision. We’re good at spotting patterns, coming up with explanations and theories as to what is going on.

What is more difficult is when we have misunderstood something. For example, during early-stage production of a North Sea FPSO (floating production storage and offloading vessel), the operators were struggling with an issue. The marine system was meant to be designed so that production would flow evenly to port and starboard storage tanks, and the boat would therefore stay level.

However, flow was preferentially going to one side, and as a result the operators were switching from one tank to another by opening and closing the diverter valves on the control system. Soon after one of those changes, a high pressure alarm came up on the cargo tank pumps. But since they knew they were having problems with some of the instruments, and the valve showed open on the system, they overrode the trip and started up again. A high liquid level trip came up, so they overrode that as well. The flow backed up into the flare drum and into the glycol system, shut down the compression, and blew the liquid from the flare drum out of the flare and onto the sea below. Luckily, the tanker that was coming in to pick up the first cargo had not yet arrived, so there were no injuries.

Once the dust settled, the crew realised they had misdiagnosed the cause. They had all the classic symptoms of a blocked outlet, but their control console was showing an open outlet, so they diagnosed failed instruments rather than a closed valve.

A more severe version of this problem caused the meltdown of the Three Mile Island nuclear plant as the operators worked to solve a problem which was the opposite of the one they had. And there are many more similar mistakes in accident reports, both minor and major.

While things are developing, we need to make sure that we continue to ask if we are correct, and evaluate different alternatives, rather than just following the first one and disregarding evidence that we are wrong.

Groupthink

Some of the meetings that I chair are short, maybe one or two days, to examine a modification. The agenda is arranged in such a way that higher risk items are addressed earlier, and it can be hard to keep the level of thinking about hazards high enough as the end of the day looms.

It’s at times like this that I really like the way one of my clients behaves and thinks. As we reach the end of the day and ask “are there any other issues that we’d like to raise?”, I watch this client to see whether he has entered deep-think mode, and then we wait for him. Many times he has asked the killer question, “but what if?”, and identified something new, something tricky, or something the rest of us might have missed. Discussing that scenario has resulted in better designs, better risk management, and often lower cost as well.

This client is impervious to groupthink, that psychological term for when a group makes faulty decisions because group pressures lead to a deterioration of “mental efficiency, reality testing, and moral judgment”.

It can exasperate a team to deal with this type of person in a meeting, but given a choice, I’d always have at least one of him around. Often, this groupthink can be minimised by rotating personnel into the team to keep it fresh. But it does take a pretty strong character to challenge a set of norms once they are established in a team.

Egocentrism

We focus on our own point of view, ignoring that others don’t have all the information or the understanding that we have. If we don’t face the consequences of our decisions ourselves, it’s hard to imagine the consequences for others.

In the offshore oil and gas industry, it is quite possible to have spent a long and successful career designing facilities without ever seeing the metal you have been slaving over for years, so what could be significant operations problems are not even noticed, never mind designed out.

Another interesting concept is the much-used ‘lessons learned’ approach. Project teams are happy to sit down at the end of the project and publish a report on what went well and what lessons should be learned for posterity.

The thing about lessons learned is that publishing is only half the battle. Not until the person who needs to know that piece of information has been able to receive it, understand it, and internalise it for future use could you claim ‘lessons learned’ have indeed been learned. Before that, they are ‘learning opportunities’, and quite weak ones at that.

So we are more likely to avoid repeating our own mistakes than to avoid repeating the mistakes of others. We engineers are more likely to focus on significant incident causes within our own experience, rather than on the highest priority incident causes. We are more comfortable interacting with others like ourselves, so we organise ourselves into single-discipline silos, or ‘communities of practice’, and then hope that mashing that approach together with the other disciplines will give us the best integrated solution.

We all have a tendency to underestimate the likelihood of an event if it hasn’t happened to us personally, and to overestimate its frequency if it has. I’ve seen many designs where there are group estimates of likelihood, but little post-start-up review of whether those estimates were correct.

Admit it, you’re biased

Humans are complex beings, the product of experience, education, talent, and interactions. We can be amazingly good at doing complex things, understanding how to process raw materials into the basis for our life across the planet.

A lot of things are being computerised: a recent study from the University of Oxford predicted a 99% chance that the job of telemarketer would be taken over by computers within 20 years, and even a 55% chance that commercial pilots would no longer be needed, but only a 1.7% chance that chemical engineers wouldn’t exist.

So we are difficult to replace by computer, but we are still flawed. The more you read in this area, the more you learn about what we can get wrong. So rather than tackling every bias ever, I’ll leave you with some homework, or opportunities to learn more.

An edited version of this article was first published in the September 2015 edition of tce magazine, published by the Institution of Chemical Engineers.

www.thechemicalengineer.com


“In many ways, telling people what my job is about is straightforward”, the young safety engineer told me. “Process safety, when it comes down to it, is all about keeping the harmful stuff away from where it would do harm. However, explaining what that means I do on a day-to-day basis, that’s another matter entirely…”

We work in complex industries, in my case, the upstream oil and gas production business.

It doesn’t take much explaining to let people know what happens when it goes wrong: the “person on the street” will be aware of Deepwater Horizon, of Piper Alpha, of Buncefield, of Texas City, of Chernobyl. Explaining all we do to stop that from happening again is not that easy, and most people aren’t that interested anyway – the tendency is to assume that the “experts” in the background are making it all work, park it and get on with day-to-day living.

But what about those whose day-to-day living includes the results of how we manage that risk?

All we do is underpinned in the UK (and many other jurisdictions) by the marvellously British concept of ALARP: making risk “as low as reasonably practicable”. It’s not enough to show that you’ve followed a particular design method, or that you operate your plant in line with some good practice; you’ve got to be able to explain why the risk has been reduced to as low as you reasonably can. In a world where people somehow seem to expect that there are right and wrong answers, we live in the land of the “grey area”. And our workers on site, be that offshore or onshore, live in that land with us, right at the front line.

As technical specialists, we need to move to being technical explainers. Lee LeFever, in his book “The Art of Explanation”, captures it well. As he puts it, “You work inside a bubble. Remember that your explanation has to make sense outside of it”. We’re dealing with complex balances, trying to keep our businesses in business while keeping safety as a core value, meeting targets, and keeping people happy. No one would deliberately endanger someone’s life, but they might still consider extending a maintenance interval or spending a little less here and there, and those choices could have that very effect.

We owe it to those people who are impacted by our technical effort, directly and indirectly, to do what we can to explain why the risk they face is under control and that we have done everything reasonable and practicable to reduce it. We’ve evolved a complex ecosystem of risk assessment, hazard identification, technical evaluation, and performance standards, all intended to cover risk management. We have safe ways of working, and we work within management systems that are intended to make that risk management work in practice. Our Safety Case is meant to be that explanation.

So, can we do it? From the small design choice to the full operation, we are legally obliged to be able to demonstrate the risk is ALARP. Could we explain it to the man on the site?

If we can’t, then I don’t fancy our chances of explaining it to a judge.

This article was first published in Energy Voice in May 2014.
