
Critical thinking

Conor Crowley | 12 Feb 2016

As engineers, we need to be careful not to rely on instinctive thought processes.

What’s 12 times 7? Chances are you don’t need to work too hard to answer that one. Simple multiplication was drilled into us as children. It’s likely that you got the answer quicker than you thought you might. It’s just there. But if I were to ask you a harder sum while we were walking along the street, chances are you would stop while your brain worked on the problem.

These examples are simple illustrations of the different ways we tackle problems, and of what Daniel Kahneman refers to as ‘system 1’ and ‘system 2’ thinking.

In system 1, we rely on rules of thumb, experience, and our drilled knowledge to get an answer quickly. In system 2, we have to think, to go back to fundamentals, to work out an answer. System 1 is where we spend a lot of our time. It’s how we manage to drive complex machines: at first we think about every move and every action, but with practice it becomes natural and easy.

However, while we often rely on ‘rules of thumb’ or heuristics in our engineering lives, we have to be careful, because relying on this approach brings a risk: the risk of introducing biases. These are many and varied, but here are some that are pertinent to our engineering work.

Action biases

A number of these come down to our overconfidence when things seem to go right. People often underestimate how long it will take to complete tasks, and therefore don’t add contingency to deal with problems when they crop up. Under pressure, ‘band-aid’ solutions seem attractive, but they end up dealing only with the symptoms, not the underlying problems.

As process engineers, we often pride ourselves on being ‘expert generalists’. It’s easy to forget that this is not a universal skill.

Perceiving and judging alternatives

It is surprisingly difficult to recognise we are wrong, once we’ve made a decision. We’re good at spotting patterns, coming up with explanations and theories as to what is going on.

What is more difficult is when we have misunderstood something. For example, during early-stage production of a North Sea FPSO (floating production storage and offloading vessel), the operators were struggling with an issue. The marine system was meant to be designed so that production would flow evenly to port and starboard storage tanks, and the boat would therefore stay level.

However, flow was preferentially going to the tanks on one side, and as a result the operators were switching from one tank to another by opening and closing the diverter valves on the control system. Soon after one of those changes, a high-pressure alarm came up on the cargo tank pumps. But since they knew they were having problems with some of the instruments, and the valve was shown open on the system, they over-rode the trip and started up again. A high liquid level trip came up, so they over-rode that as well. The flow backed up into the flare drum and into the glycol system, shut down the compression, and blew the liquid from the flare drum out of the flare and onto the sea below. Luckily, the tanker that was coming in to pick up the first cargo had not yet arrived, so there were no injuries.

Once the dust settled, the crew realised they had misdiagnosed the cause. They had all the classic symptoms of a blocked outlet, but their control console was showing an open outlet, so they diagnosed failed instruments rather than a closed valve.

A more severe version of this problem caused the meltdown of the Three Mile Island nuclear plant as the operators worked to solve a problem which was the opposite of the one they had. And there are many more similar mistakes in accident reports, both minor and major.

While things are developing, we need to make sure that we continue to ask whether we are correct, and to evaluate different alternatives, rather than just following the first explanation and disregarding evidence that we are wrong.

Groupthink

Some of the meetings that I chair are short, maybe one or two days, held to examine a modification. The agenda is arranged so that higher-risk items are addressed earlier, and it can be hard to keep the level of thinking about hazards high enough as the end of the day looms.

It’s at times like this that I really like the way one of my clients behaves and thinks. As we reach the end of the day and ask “are there any other issues that we’d like to raise?”, I always watch this client to see whether he has entered deep-think mode, and then we wait for him. Many times, he’s asked the killer question, “but what if?”, and identified something new, tricky, or that the rest of us might have missed. Discussing that scenario has resulted in better designs, better risk management, and often lower cost as well.

This client is impervious to groupthink, that psychological term for when a group makes faulty decisions because group pressures lead to a deterioration of “mental efficiency, reality testing, and moral judgment”.

It can exasperate a team to deal with this type of person in a meeting, but given a choice, I’d always have at least one of him around. Often, groupthink can be minimised by rotating personnel into the team to keep it fresh. But it takes a pretty strong character to challenge a set of norms once they are established in a team.

Egocentrism

We focus on our own point of view, ignoring that others don’t have all the information or the understanding that we have. If we don’t face the consequences of our decisions ourselves, it’s hard to imagine the consequences for others.

In the offshore oil and gas industry, it is quite possible to spend a long and successful career designing facilities without ever seeing the metal you have been slaving over for years, so what could be significant operations problems are not even noticed, never mind designed out.

Another interesting concept is the much-used ‘lessons learned’ approach. Project teams are happy to sit down at the end of a project and publish a report on what went well and what lessons should be learned, for posterity.

The thing about lessons learned is that publishing is only half the battle. Not until the person who needs to know that piece of information has been able to receive it, understand it, and internalise it for future use could you claim ‘lessons learned’ have indeed been learned. Before that, they are ‘learning opportunities’, and quite weak ones at that.

So we are more likely to avoid repeating our own mistakes than to avoid repeating the mistakes of others. We engineers are more likely to focus on significant incident causes within our own experience than on the highest-priority incident causes. We are more comfortable interacting with others like ourselves, so we organise ourselves into single-discipline silos, or ‘communities of practice’, and then hope that mashing that approach together with the other disciplines will give us the best integrated solution.

We all have a tendency to underestimate the likelihood of an event if it hasn’t happened to us personally, and to overestimate its frequency if it has. I’ve seen many designs where there are group estimates of likelihood, but little post-start-up review of whether those estimates were correct.

Admit it, you’re biased

Humans are complex beings, the product of experience, education, talent, and interactions. We can be amazingly good at doing complex things, understanding how to process raw materials into the basis for our life across the planet.

And a lot of things are being computerised: a recent study from the University of Oxford predicted a 99% chance that the job of telemarketer would be replaced by computers within 20 years, and even a 55% chance that commercial pilots would no longer be needed, but only a 1.7% chance that chemical engineers wouldn’t exist.

So we are difficult to replace by computer, but still flawed. The more you read in this area, the more you learn about what we can get wrong. Rather than tackling every bias ever, I’ll leave you with some homework, or opportunities to learn more.

An edited version of this article was first published in the September 2015 edition of tce magazine, published by the Institution of Chemical Engineers.

www.thechemicalengineer.com