
The Safety and Efficiency Benefits of the Wide Area Augmentation System (WAAS)
by Ross Bowie, Director, ANS Service Design, NAV CANADA


Canadian pilots have been using GPS since the early 1990s as an aid to VFR navigation and for IFR en-route, terminal and non-precision approach operations. For the IFR pilot, the ability to go direct saves time and fuel, and area navigation GPS [RNAV (GPS)] approaches often mean lower minima. These approaches also bring safety benefits by eliminating circling procedures and reducing the need for visual manoeuvring to line up and land, thanks to the accuracy of GPS.

The operational approval to use WAAS in Canada was issued on October 27, 2005. Details can be found in the Transport Canada Aeronautical Information Manual (TC AIM) COM 3.16 and RAC 3.14.1, aeronautical information circular (AIC) 27/05 and in a special notice in each Canada Air Pilot (CAP) volume.

WAAS builds on the success of GPS and promises even more benefits. The U.S. Federal Aviation Administration (FAA) commissioned WAAS in 2003, and it already serves part of Canada. NAV CANADA has installed two WAAS stations in Goose Bay, N.L., and Gander, N.L., and will install two more in Winnipeg, Man., and Iqaluit, Nun., next year. This expanded network will extend WAAS service to most of southern Canada, as depicted in the map on page 8.

How does WAAS work? A network of reference stations monitors GPS satellite signals and sends data to master stations, which create a message containing corrections and integrity data. The WAAS message is up-linked to geostationary (GEO) satellites orbiting over the equator for rebroadcast over a hemisphere. As an aside, in the mid 1990s NAV CANADA brought the FAA and Telesat Canada together to explore the hosting of a WAAS transponder on one of Telesat's Anik satellites.
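The correction loop described above can be sketched in a few lines of code. This is a purely illustrative simplification, not the real WAAS message format or algorithm: the station names, satellite label and error value are assumptions, and a real receiver applies separate clock, ephemeris and ionospheric corrections rather than a single range offset.

```python
# Hypothetical sketch of the WAAS data flow: a reference station at a
# surveyed location estimates the satellite signal error, a master
# station packages corrections for broadcast via GEO satellite, and an
# aircraft receiver applies them. All values are illustrative.

def reference_station_observation(true_range, measured_range):
    """The station knows the true range to each satellite, so the
    difference from the measured range estimates the signal error."""
    return measured_range - true_range

def master_station_message(errors_by_satellite):
    """Combine station observations into a correction message
    (here, simply the negated error per satellite)."""
    return {sat: -err for sat, err in errors_by_satellite.items()}

def receiver_apply(measured_range, correction):
    """The aircraft receiver applies the broadcast correction."""
    return measured_range + correction

# One satellite, with a 4.2 m signal error observed on the ground:
errors = {"PRN 12": reference_station_observation(20_000_000.0, 20_000_004.2)}
message = master_station_message(errors)
corrected = receiver_apply(20_000_004.2, message["PRN 12"])
print(corrected)  # 20000000.0 - the corrected range matches the true range
```

The point of the sketch is simply that the aircraft never needs to know where the error came from; it applies the broadcast correction and, via the integrity portion of the message, knows how much to trust the result.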

On September 9, 2005, the Anik F1R, with an advanced WAAS transponder on board, was launched into an orbital slot at 107.3°W, and from there it will provide WAAS service to all of Canada. Other GEO satellites will ensure redundant coverage.

Aircraft WAAS receivers use the WAAS message and the data from GPS satellites to deliver horizontal and vertical accuracy that is better than 2 m. Even more importantly, the integrity portion of the message provides assurance that the aircraft will not be misled by a faulty satellite signal.

WAAS supports instrument landing system (ILS)-like approaches with vertical guidance, termed "LPV" approaches (Localizer Performance, Vertical guidance).1 The FAA has monitored WAAS performance since 2003, and found that it is even better than predicted, so ILS design criteria can be used for LPV approaches. It is expected that the decision altitude will be at or near 250 ft AGL at over 90 percent of runways meeting instrument runway physical standards. Lower decision altitudes will mean higher airport usability at many sites.

Approach charts with LPV minima are titled RNAV (GNSS) [global navigation satellite system], and there are minima lines for lateral navigation (LNAV) (basic GPS), LNAV/VNAV [for aircraft with basic GPS and barometric vertical navigation (BARO VNAV) capability] and LPV (for WAAS-equipped aircraft). The first chart with LPV minima was published on October 27, 2005, for the Kitchener/Waterloo airport.

Aircraft with WAAS avionics will of course be able to use LNAV minima on existing RNAV (GPS) charts. The plan is to convert all the RNAV (GPS) charts to RNAV (GNSS) by adding LNAV/VNAV and LPV minima.

As was the case with GPS, avionics production has lagged behind. There is one panel-mount WAAS unit available in the USA, but it has not yet been approved in Canada. The first flight management system (FMS)-capable system will be available in the fall of 2006.

Since GPS was first approved for IFR flight in 1993, many operators have gained benefits from over 350 RNAV (GPS) approaches. At many small airports previously served by circling non-directional beacon (NDB) approaches, the improvement in both safety and efficiency has been dramatic. WAAS will clearly provide more safety and efficiency benefits, and support the realisation of a long-term goal to provide vertical guidance on all approaches. This not only enhances safety by reducing the probability of controlled flight into terrain (CFIT) accidents, but it also reduces training costs by standardizing on one procedure for all approaches.

1 The FAA previously defined LPV as "Lateral Precision, Vertical Guidance," as explained in ASL 1/2004. In the summer of 2005, they changed it to "Localizer Performance, Vertical Guidance." This change in definition has no operational significance.

The Canadian Business Aviation Association Column-Attitude and Behaviour


Aviators studying human factors may find the following Merriam-Webster definitions of the word "attitude" very useful:

a. the position of an aircraft determined by the relationship between its axes and a reference datum;

b. a mental position or feeling or emotion with regard to a fact or state;

c. a negative or hostile state of mind;

d. a cocky or arrogant manner.

When discussing attitude in flight training, we learn how "attitude and movements" determine the flight path of an aircraft, and when in trouble, a pilot reverts to these basics. Using our collective knowledge of human behaviour, we could apply a simple rule to create a fall-back position called "attitude and behaviour" to use in making decisions under stressful situations.

Many accident causal factors are attributed to poor human judgement. It is remarkable that, in a controlled scenario-based environment, we choose the appropriate action. However, when faced with real situations, our judgement becomes clouded by outside pressures. An example of an outside pressure applicable to many aviators is the perceived need to get the job done at all cost.

Corporate aviation in Canada has evolved into an efficient, global transportation service with well-defined protocols and standard operating procedures (SOP), and can boast one of the safest operational records. There are, however, situations where the system has failed. A safety management system (SMS) is integral to the Private Operator Certificate (POC) Program managed by the CBAA. SMS requires us to be proactive in identifying all hazards and mitigating the ensuing risks to our operation.

Poor judgement is one such hazard that creates risk requiring effective mitigation. The desire to please and to get the job done at all costs creates pressures; the resultant stress can cause a lapse in judgment by otherwise well-trained and experienced aviators, and can lead to accidents.

In one moment of misjudgement, such aviators react contrary to their training, the regulations, and their company SOPs, in favour of a misplaced belief that they could make it and beat the odds. This failure in judgement, or so-called "bad attitude," is not in keeping with the individual's contract, which requires one to be responsible and accountable to comply with well-defined protocols and SOPs. Inherent in the contract is the obligation to make appropriate decisions.

Our collective experience has shown that erring on the side of safety can easily be defended. Bad judgement, where negative indicators were present, cannot be defended.

Being a well-trained professional is important. Exercising good judgment is the minimum standard we must all strive for in order to maintain credibility and service excellence.

Remember the basics of attitude and behaviour to stay on the positive side of the definition of attitude.

COPA Corner-Managing Your Weather Risks
by Adam Hunt, Canadian Owners and Pilots Association (COPA)


On September 7, 2005, the U.S. National Transportation Safety Board (NTSB) issued a study that had some interesting things to say about general aviation (GA) weather accidents and who is most at risk for having them.

The NTSB report stated: "Even though weather-related accidents are not frequent, they account for a large number of aviation fatalities-only 6 percent of GA accidents are weather-related, but they account for more than one in four fatalities that occur in GA annually.

"For the study, NTSB investigators collected data from 72 GA accidents that occurred between August 2003 and April 2004. Information about these accidents was compared to a matching group of 135 non-accident flights operating under the same conditions.

"The study results suggest that a pilot's performance history, including previous aviation accidents or incidents, and Federal Aviation Administration (FAA) knowledge or practical test failures, are associated with an increased risk of being involved in weather-related GA accidents.

"The study also found that pilots who obtain their first pilot certificates earlier in life, or those who obtain higher levels of certifications or instrument ratings, are at reduced risk, compared to other pilots."

Some of the information here will not come as a surprise to many pilots in Canada. Most of us know that flying into bad weather-low ceilings, visibilities and thunderstorms-kills a high proportion of those who do it. While the overall number of accidents is relatively low, the fatality rate in these types of accidents is high. That is usually because the aircraft hits the ground at high speed.

Dealing with the risks of bad weather is the key issue here, and this is where the NTSB report is most interesting-it notes that the pilots who are at an increased risk for weather accidents are those who:

  • have had a previous accident or incident;
  • have failed written exams or flight tests in the past;
  • have learned to fly later in life;
  • hold only lower licences (i.e. private pilot);
  • do not hold an instrument rating.

So should pilots who meet this profile stop flying? Absolutely not! The key is "risk management"-identifying the risks in your flying and working to reduce them. If that profile describes you-even a little bit-then there are steps you can take. You know you are at an increased risk, so reduce it by doing the following:

  • Leave an extra margin when it comes to weather-don't push into marginal weather, or allow anyone else to pressure you into flying in marginal weather. Always leave yourself an "out."
  • If you have had previous accidents or incidents, then that is your "wake-up call." Seek out an instructor and get some dual training, focussing on the events and decision making that led up to the event. Train to prevent a recurrence.
  • If you have previously failed a written exam or flight test, then you know those are areas of weakness that you will continuously have to work on. Study those subjects until you become "an expert" in those areas.
  • Seek out extra training-upgrade your skills by taking extra ratings (night, instrument) or work on higher licences-the skills and judgement gained will help reduce your risks.

Judgement is a learned skill, just like crosswind landings. And, just like crosswind landings, judgement skills need constant practice if they are not to become rusty. Fly as often as you can. Be current and keep your judgement skills sharp-your life will depend on it.

You can find out more about COPA at

Flying in the Twilight Zone
by Garth Wallace

I watched as a ski-equipped Aeronca Champion, cocked in one almighty sideslip, came out of nowhere and slid down to the snow-covered airport infield. It was early on a Saturday morning. I was sipping coffee and looking out the window while waiting for my first student to arrive.

The Champ taxied over the lumpy, snow-covered grass toward the flying school. It had an original Aeronca paint scheme, cream with a big red teardrop on the bottom of the fuselage. The airplane stopped just short of the snow ridge at the edge of the ramp and shut down.

The arrival of a skiplane was an unusual event at this uncontrolled but medium-busy airport. I continued to watch as the Champ's door flopped forward against the wing strut. A short, stocky pilot climbed out. He was dressed in a black snowmobile suit, big, laced boots and one of those winter hats with earflaps. He was carrying two short pieces of wood in a heavy pair of leather gauntlets. He bent under the right wing strut, used his shoulder to rock the airplane and slid one of the sticks under the right ski. He tramped around to the left side and repeated the procedure. The pilot then scrambled over the low snow bank and waddled across the ramp to the office. I smiled and nodded to him as he came through the door.

"She's nippy out there, eh," he said with a friendly grin. His face was tanned, leathery and peppered with whiskers. As he spoke, the telephone rang.
"Yup, I guess it is," I replied, walking over to the counter. "Good morning, flying school."

It was the local flight service specialist calling. "Let me speak to the pilot of that rag wing that just landed on the infield," he said. The man in question was stamping his feet on the entrance mat and removing his gloves and hat.

"Flight service wants to talk to you," I said, holding the phone out to the newcomer.
"I don't know anyone in flight service," he replied cautiously.
"Maybe he has questions about your arrival," I suggested.

The visitor was not the first older pilot to apply his own interpretation of the airport's mandatory frequency (MF) designation. He ambled to the flight desk, unzipping his well-worn suit, and took the phone.

I could only hear the pilot's side of the conversation. It was interesting.
"O' course I landed without callin', I got no radio, eh," the pilot said.
He listened patiently for a minute.
"Well, there weren't nothin' like that 'ere last time, eh."
"Eight years? That's what I thought, it's somethin' new, eh."
He listened again for a while.
"Well, why would I be puttin' a radio in an airplane that's got no 'lectrics? It don't make sense, eh."
"Sure, whatever you say." He hung up.
He wrinkled his brow and looked at me. "He sounded a bit excited, eh."
"Did you talk to anyone on your way in?" I asked.
He gave me a questioning look. "Well, I'd be talkin' to myself, wouldn't I? I got no one with me, eh."

My student arrived so I didn't continue the conversation. I mentally named this character Grizzly Adams and went to work. During the pre-flight briefing with my student, I noticed that Grizzly bought a coffee from the machine and wandered around the lounge reading the bulletin board and looking at the pictures.

I was signing out for my flight when the visitor bid us a friendly "goodbye" and headed outside. My student and I followed him to our aircraft. I watched Grizzly pull the sticks out from under the Champ's skis while my student was doing a pre-flight inspection. He leaned into the cockpit and set the controls. Then he hand-propped the engine while standing behind the propeller. Two flips and it settled into an easy idle. With the engine running, he walked behind the tail, picked it up and turned the airplane into the wind. I scanned the sky. There was no traffic. Grizzly climbed into the airplane, closed the door and applied full power. In a hop, skip and a jump, the Champ was airborne.

The next Saturday morning I watched for Grizzly from the office window, my coffee in hand. He didn't disappoint me. The bright little Champ came curving toward the infield from over the hangar row. The pilot had the airplane turned sideways and dropping like a rock. At the last moment, he snapped it straight and raised the nose. It settled onto the snow in a three-point landing, then taxied toward me and stopped beside the ramp.

The telephone rang before Grizzly had cleared our door. It was the same flight service specialist as the previous week. He sounded a bit hot.
"Good morning," I said to my visitor. "The flight service specialist wants to speak to you."
"Boy, she's a bit nippy out there, eh," he said, stamping his feet.
"Yes, I guess it is," I replied.
He took the telephone receiver. "'ello?"
"Well, I didn't call 'cause I got no radio. I told you last week, eh."
"Well, o' course I started 'er by 'and. She's got no 'lectrics, eh. No 'lectrics, no starter."
Grizzly was frowning and shuffling his feet as he spoke. "Well, how do I start 'er with someone in the front if I'm outside flipping the prop?"
"Whatever you say, lad."
He hung up the phone and scratched his head. "That boy isn't makin' a lot of sense," he said to me.

I had a few minutes before my first student, so I drank my coffee with Grizzly. I found out he was from "up country a piece." He had spent the last 10-odd years rebuilding the Champ after flipping it over in soft snow.
"I re-did the engine while I was at it."

I tried to gently suggest that the flight service station (FSS) helped separate traffic, which made it necessary for pilots to make contact before flying in the area.
Grizzly leaned over to look out the window. At that time on a Saturday morning, there were no airplanes moving. "'e's got his work cut out for 'im, eh," he chuckled.

I couldn't help thinking that this rough-edged pilot was flying in a time warp. The airspace regulations he was breaking were designed for the orderly flow of high and low speed traffic flying visually or on instruments. Hand-starting the airplane by himself was a well-documented safety issue. Hopping to nearby airports for coffee on a sunny Saturday morning in an old, slow airplane was still an important part of pleasure flying. From across the ramp, Grizzly's Champ looked to be in good shape and he seemed to fly it well. With a little education and expense he could fit into this modern, safer era of recreational aviation, if he wanted to.

My student arrived and Grizzly left before I could pursue that suggestion. I saw him hand prop the engine, turn the tail, climb in and take off. Our telephone rang. I let someone else answer it.

The next Saturday he was back. This time, when the airplane stopped on the other side of the snow bank, Grizzly left the engine running while he put the sticks under the skis and walked to the office. Our phone was ringing when he was halfway across the ramp.
"Good morning, it's for you, again," I said when he came in the door.
"She's nippy out there, eh," he said.
"You can say that again," I replied.
He took the receiver. "'ello?"
"O' course I left 'er runnin'. Last week, you gave me the devil for 'and proppin' 'er, eh."
"My pilot licence number? I don't have no pilot licence. My dad taught me 'ow to fly. He didn't have one either."
"The airplane registration? I don't know 'bout that but she's all new since the crash, eh."
"Whatever you say."
He hung up and frowned. "'e wants to see some documents but I got not'in' to show, eh."
He scratched his head for a moment. "I t'ink I'll take the coffee to go."
He did.
As he was turning the airplane around by the tail, the telephone rang.
"Good morning, flying school."
"No, I can't see any registration on the airplane, either," I said. It was the truth.
The little airplane accelerated across the infield. "His name? I think he said that it's Grizzly Adams."

The next Saturday, Grizzly must have flown somewhere else for coffee.

Garth Wallace is an aviator, public speaker and freelance writer who lives near Ottawa, Ont. He has written nine aviation books published by Happy Landings ( The latest is You'd Fly Laughing Too. He can be contacted via e-mail:


Blackfly Air on SMS

Blackfly Air on Data Gathering

Blackfly Air managers are relentless in implementing their safety management system (SMS), and this time around they dig deep into data gathering, and are introduced to the Swiss Cheese Model of Accident Causation. The Swiss cheese model came from Dr. James Reason, Professor at the University of Manchester, who is internationally known as one of the leading experts on human and organizational factors in safety investigation and accident prevention. As per previous Blackfly Air episodes, we'll briefly discuss these topics here, and in the next article, we'll present a counterpoint on the Swiss cheese model.

Data gathering-the small stuff

Major events such as accidents and significant incidents draw attention in themselves, and certainly will not go unnoticed. However, it is a number of small risks or hazards that, occurring together, cause the series of failures that can lead to an accident. Figure 1 shows how these hazards or latent conditions that exist at the organizational level can contribute to an accident by allowing conditions to exist that make the unsafe acts or active failures possible and dangerous.

Figure 1: The Swiss Cheese Model - James Reason

The question is, "How do you identify these small risks that often go unreported or even unnoticed?" You need an effective data-gathering process, but most particularly you need a reporting culture within the organization, one in which people are actively looking for current and potential problems. The reporting, then, looks at two things-events that DID occur, and events that MIGHT occur. Gathering data on both is equally important.

A large helicopter operator in the US started a program where employees received a prize for identifying a hazard or developing a safety-related idea that was used in the company. In this case, employees were motivated to look for, and report, hazards. The program was so successful that the accident rate for this company fell to zero during the life of this program.

The secret to long-term success is to develop a simple reporting system appropriate to the size of the company, to encourage the free flow of safety information. This reflects three commitments already made by management in the company safety policy, namely that:

  • it supports the open sharing of information on all safety issues;
  • it encourages all employees to report significant safety hazards or concerns; and
  • it has pledged that no disciplinary action will be taken against any employee for reporting a safety hazard, concern or incident.

Successful reporting programs have these four qualities:

  • reports are easy to make;
  • no disciplinary action is taken as a result of submitted reports;
  • reports can be submitted in confidence and are de-identified; and
  • feedback to everyone is rapid, accessible and informative.

The reporting system has to have methods for doing four things:

  • reporting hazards, events or safety concerns;
  • collecting and storing the data;
  • analyzing reports; and
  • distributing the information gleaned from the analysis.

There are various options for gathering the data. Here are some:

  • confidential report forms deposited in a secure box;
  • suggestion box;
  • online computer reporting systems;
  • confidential staff questionnaires;
  • an "open-door" policy for informal communication;
  • brainstorming sessions;
  • organized study of work practices;
  • internal or external company safety assessment; and
  • simple forms to be included with regular documentation submitted by crews in the field.

In very small operations, reports can be verbal, but it is essential that the end result be in written format rather than verbal, to preclude any reports from "slipping through the cracks." Make sure that everyone knows exactly where, how and to whom reports are submitted.

Sample reporting forms are included in the toolkit found at
The simpler it is, the less time-consuming it will be to complete, and the more people will be encouraged to use it. Keep a supply of blank report forms beside the collection box, with aircraft spares packages, or with crew position reports, but also accept simple hand-written notes. After all, this is about looking for safety hazards and fixing them, not creating a bureaucracy.

Should it require the individual's name? No. The person reporting may add their name, which allows the company to advise promptly that the report has been received and what corrective action is planned, but anonymous reports must be allowed. In a small operation, the level of anonymity will probably be limited, but it then becomes all the more essential that everyone understands the company safety policy's guarantee of no reprisals. Management must make an extra effort to win the trust of employees when the level of anonymity is limited.

You will almost certainly get better response if you post some ideas about the sort of issues to report. In general, you are looking for hazards, risks, incidents and concerns-anything that has the potential to cause injury or damage. A system-wide application of this process will also include reports on recommendations to improve overall efficiency. Here are some examples you can suggest to get people thinking:

  • incorrect or inadequate procedures, a setup for error;
  • poor communication between different parts of the company;
  • out-of-date manuals;
  • inadequate training;
  • inadequate, incorrect or missing checklists;
  • excessively long working days;
  • missing or unsecured equipment;
  • obstacles and limited clearances for manoeuvring;
  • refuelling hazards;
  • flight preparation;
  • unreasonable customer expectations or unplanned requirements; and
  • near misses or almost "gotchas."

Encourage your company employees to brainstorm ways in which the system could fail, and to submit these ideas for review and correction. You might consider periodic informal staff discussions focusing on safety improvement, and then document the results. Larger operations may hold monthly safety meetings to review reports and encourage discussion on various safety issues. These meetings should be documented and any action required clearly recorded and followed up.

Whether you are a large or a small operator, you need to keep track of the data in these reports. You want to be able to monitor and analyze trends. Whether your safety database is in written or electronic form, when you receive a report, categorize the type of hazard it identifies, take down the date and any other pertinent facts, then document what gets done to correct the problem, and confirm that feedback was provided to all employees. Ensure that the data does not identify the reporter, and then destroy the original report to protect confidentiality.
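For an operator keeping an electronic database, the record-keeping step above amounts to a very small data structure. The sketch below is a minimal illustration, not a prescribed SMS format: the field names, categories and class names are all assumptions.

```python
# Minimal sketch of the SMS record-keeping step: categorize each
# de-identified report, log the corrective action and feedback, and
# count categories to monitor trends. All names are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class HazardReport:
    date: str
    category: str          # e.g. "procedures", "equipment", "training"
    description: str
    corrective_action: str = ""
    feedback_given: bool = False

class SafetyLog:
    def __init__(self):
        self._reports = []

    def submit(self, report):
        # Store only de-identified content; the original paper or e-mail
        # report would be destroyed to protect confidentiality.
        self._reports.append(report)

    def close_out(self, index, action):
        # Record the corrective action and confirm feedback was given.
        self._reports[index].corrective_action = action
        self._reports[index].feedback_given = True

    def trend(self):
        # Reports per hazard category: a simple trend monitor.
        return Counter(r.category for r in self._reports)

log = SafetyLog()
log.submit(HazardReport("2006-01-14", "procedures", "Out-of-date fuelling checklist"))
log.submit(HazardReport("2006-02-02", "procedures", "Conflicting SOP revisions"))
log.submit(HazardReport("2006-02-20", "equipment", "Unsecured cargo net"))
log.close_out(0, "Checklist revised and redistributed")
print(log.trend())  # Counter({'procedures': 2, 'equipment': 1})
```

Even at this size, the category counts make a recurring hazard (here, procedures) visible before any single report would have drawn attention on its own.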

Follow-up is vital, both to correct safety problems, and to show people that the program works. There are three parts to this:

  • decide who should be involved in ensuring prompt and effective corrective action;
  • publicize what has been done to address every concern raised, including decisions to accept certain risks and why; and
  • alert people to the safety issues involved so that everyone can learn from them.

Here are some ways to pass on company actions on safety issues to the staff:

  • bulletin board;
  • company safety newsletter;
  • company Web site;
  • e-mail to staff; and
  • staff meetings.

Finally, keep in mind that trust is the most important part of the reporting system, because people are being encouraged to describe, not only the hazards they see, but also the mistakes they themselves have committed.

Getting feedback on safety weaknesses in the operation has proven to be far more important than assigning blame. For this reason it is important to have a non-punitive or no-blame policy for reporting safety concerns.

For further information, refer to Chapter 3 of Safety Management Systems for Small Aviation Operations-A Practical Guide to Implementation (TP14135), at, and Safety Management Systems for Flight Operations and Aircraft Maintenance Organizations -A Guide to Implementation (TP 13881).

Seeking and Finding Organizational Accident Causes: Comments on the Swiss Cheese Model
The following is adapted from an online article found on the University of New South Wales' aviation Web site at, reprinted with permission.


When it comes to understanding incidents and accidents, James Reason's "Swiss cheese model" has become the de facto template. This has had a positive effect on aviation safety thinking and investigation, shifting the end-points of accident investigations from a "pilot error" explanation to organizational explanations. However, overzealous implementation of a theoretical framework has led to an illusion of management responsibility for all errors. The Swiss cheese model of accident causation is now adopted as the model for investigation in many industries. Indeed, in aviation it has become the accepted standard as endorsed by organizations such as the Australian Transport Safety Bureau (ATSB) and the International Civil Aviation Organization (ICAO). The Swiss cheese model shows several layers between management decision making and accidents and incidents. The layers are shown below:

Variation of Reason's Swiss Cheese Model

An accident or incident occurs where "holes" in these layers align. The holes themselves change over time.

Reason (1990, 1997) made a key distinction between the active, operational errors ("unsafe acts") and the latent (organizational) conditions. Reason (1990) stated that, "systems accidents have their primary origins in the fallible decisions made by designers and high-level (corporate or plant) managerial decision makers" (p. 203). Active errors were therefore seen as symptoms or tokens of a defective system. It became the duty of incident investigators and researchers to examine the psychopathology of organizations in the search for clues.

One implication of the organizational approach has been the tenacious search for latent conditions leading up to an accident. There are serious flaws in such prescriptive implementation. While the importance of analyzing human factors throughout the accident sequence is not in question, the dogmatic insistence on identifying the latent conditions could, and should, be challenged in cases where active errors played a major part.

From human factors to organizational factors, and back again!

Organizational accident theory and the Swiss cheese model occupy a curious position in accident research and commentary, in that they are never challenged. While these developments were clearly landmarks in accident investigation research, this uncritical stance is an unhealthy state of affairs in science. One of the few researchers to question the use of Reason's Swiss cheese model is Reason himself, who warned that, "the pendulum may have swung too far in our present attempts to track down possible errors and accident contributions that are widely separated in both time and place from the events themselves" (1997, p. 234), and that, "maybe we are reaching the point of diminishing returns with regard to prevention" (2003).

The human factors and accident investigation community should encourage a holistic view of errors and accidents, but one that does not necessarily lead deep into the roots of the organization. Here is why.

Issue 1: Active errors may be the dominant factors.
The Swiss cheese model can lead to the illusion that the root of all accidents or even errors stems from the organization's management. This is not the case. Many errors are simply a by-product of normal, adaptive cognitive processes. "Inadequate defences" would make the errors more dangerous, but even then some errors would overcome even well-planned and maintained defences.

Issue 2: The causal links between distant latent conditions and accidents are often tenuous.
The mapping between organizational factors and errors or outcomes, if any such mapping can be demonstrated with an appropriate degree of certainty, is complex and loosely coupled. However, the Swiss cheese model makes it tempting to draw a line back from an outcome to a set of "latent conditions." This invites "hindsight bias," where we overestimate what we knew or could have known before an event occurred. Many "latent conditions" would seem insignificant in the pre-event scenario.

Issue 3: Latent conditions can always be identified-with or without an accident.
An organization can identify its systemic weaknesses with or without an accident. Reason (1997) himself stated that distant factors do not discriminate between normal and abnormal states: "…only proximal events-unsafe acts and local triggers-will determine whether or not an accident occurs" (p. 236). He also argued that "the extent to which they are revealed will depend not so much upon the 'sickness' of the system, but on the resources available to the investigator" (p. 236). It seems that the harder you look, the more latent conditions you'll find.

Issue 4: Some latent conditions may be very difficult to control, or take many years to address.
The factors that can be most easily remedied are those closest to the task performer-the working environment and supporting processes. Latent or organizational factors are not so amenable to rapid correction. For instance, an organization's "safety culture"-much maligned in the Challenger accident report-cannot be manipulated easily or rapidly. Again, Reason (1997) declared that our main interest must be in the "changeable and controllable."

Issue 5: Misapplication of the model can shift the blame backwards.
Just as the focus of accident investigations has changed over the years, the focus of blame has also changed. The "blame-the-pilot" culture swung to a "no-blame" culture. This over-swing was corrected by the concept of a "just" culture. Somewhere in the midst of this, a "blame-the-management" culture blossomed. Paradoxically, the organizational approach has sometimes tended to focus on a single type of causal factor-"management incompetence" or "poor management decisions."

Finding the balance

Reason's Swiss cheese model revolutionized accident investigation worldwide. However, some industries, organizations and professions may have stretched the model too far. The "model" is really a theoretical framework, not a prescriptive investigation technique, and it may not be universally applicable. Investigations can turn into a search for latent offenders when, in some cases, the main contributory factors might well have been active errors with more direct implications for the outcome-in which case the priority should be strengthening defences to tolerate those errors. The search for latent conditions has produced recommendations that undoubtedly improve the safety health of the organizations concerned. In some cases, however, these conditions have arguably only tenuous connections to the actual event and should perhaps be reported separately.

Without wanting to return to the dark ages of "human error" being the company scapegoat for all accidents, there is a balance to be redressed in accounting for the role of active errors.

This article is based on Shorrock, Young and Faulkner (2005) and Young, Shorrock and Faulkner (2005).

Reason, J. (1990) Human Error. Cambridge: Cambridge University Press.
Reason, J. (1997) Managing the Risk of Organizational Accidents. Aldershot: Ashgate.
Reason, J. (2003) Keynote Address - Aviation Psychology in the Twentieth Century: Did we Really Make a Difference? 2003 Australian Aviation Psychology Symposium, 1-5 December 2003, Sydney.
Shorrock, S., Young, M. and Faulkner, J. (2005) "Who moved my (Swiss) cheese?" Aircraft and Aerospace, January/February 2005, 31-33.
Young, M.S., Shorrock, S.T. and Faulkner, J.P.E. (2005) "Taste preferences of transport safety investigators: Who doesn't like Swiss cheese?" In P.D. Bust and P.T. McCabe (Eds.), Contemporary Ergonomics 2005, London: Taylor and Francis.

Spring Review: Flying Passengers On Board Seaplanes? Prepare Them!

A review of past seaplane accidents on water indicates that the pilots and passengers in inverted aircraft often survived the impact, but were unable to evacuate under water, and subsequently drowned. In some cases, passengers were unable to release their seat belts, and their bodies were discovered with little or no impact injuries, still strapped to the seats. In other cases, passengers were able to release their seat belts, but were unable to find an exit and/or open it because of impact damage or ambient water pressure. Those who did survive spoke of extreme disorientation and said that they did not exit in what may be considered a normal procedure, i.e. they did whatever they had to in order to get out of the aircraft.

In some of the accidents where pilots survived and passengers did not, investigation revealed that pilots provided a pre-flight safety briefing, but did not discuss underwater egress. There were many accidents where the pilot was injured or killed and could not assist passengers in an underwater evacuation.

Seaplane pilots are therefore urged to include specific procedures for underwater egress as part of their comprehensive pre-flight safety briefing. This could make the difference between a successful evacuation, and being trapped inside a submerged seaplane. A thorough underwater egress briefing will provide critical information to passengers so that they can help themselves.

Situational awareness and exit operation

Prior to takeoff, advise passengers to locate the exit in relation to their left or right knee. If the exit is on their right while upright, it will still be on their right should the seaplane come to rest inverted. No matter how disorienting an accident, the passenger's relationship to the exit(s) remains the same, provided their seat belt remains fastened. Ensure passengers know the location of all exits and how to use them. The method of opening an exit may differ from one seaplane to another, and even within the same aircraft. Permit passengers to practise opening the exit(s) before engine start-up.

Underwater egress

In water accidents, seaplanes tend to come to rest inverted. The key to survival is to retain situational awareness and to expeditiously exit the aircraft.

The seven actions listed below are those found in the Transport Canada safety brochure for seaplane passengers, entitled Seaplanes: A Passenger's Guide (TP 12365). Pilots should read those seven steps out loud to all their passengers during the emergency egress portion of their pre-flight safety briefing, as follows:

If an emergency underwater egress is necessary, the following actions are recommended once the seaplane momentum subsides:

  1. Stay calm-Think about what you are going to do next. Wait for the significant accident motion to stop.
  2. Grab your life preserver/PFD-If time permits, put on, or at least grab, your life preserver or PFD. DO NOT INFLATE IT until after exiting. It is impossible to swim underwater with an inflated life preserver, and you may become trapped.
  3. Open the exit and grab hold-If sitting next to an exit, find and grab the exit handle in relation to your left or right knee as previously established. Open the exit. The exit may not open until the cabin is sufficiently flooded and the inside water pressure has equalized. DO NOT release your seat belt and shoulder harness until you are ready to exit. It is easy to become disoriented if you release your seat belt too early. The body's natural buoyancy will cause you to float upwards, making it more difficult to get to the exit.
  4. Release your seat belt/harness-Once the exit is open, and you know your exit path, keep a hold of a fixed part of the seaplane and release your belt with the other hand.
  5. Exit-Proceed in the direction of your nearest exit. If this exit is blocked or jammed, immediately go to the nearest alternate exit. Always exit by placing one hand on a fixed part of the aircraft, and not letting go before grabbing another fixed part (hand over hand). Pull yourself through the exit. Do not let go until you are out. Resist the urge to kick, as you may become entangled in loose wires or debris, or you might kick a person exiting right behind you. If you become stuck, back up to disengage, twist your body 90°, and then exit.
  6. Getting to the surface-Once you have exited the seaplane, follow the bubbles to the surface. If you cannot do so, as a last resort inflate your life preserver. Exhale slowly as you rise.
  7. Inflate your life preserver-Only inflate it when you are clear of the wreckage, since life preservers can easily get caught on wreckage, block an exit, or prevent another passenger from exiting.

Transport Canada updated its TP 12365 brochure in 2005, and also developed a bilingual poster for passengers, Flying On Board Seaplanes (TP 14346). Copies of those products were sent to all commercial seaplane operators in Canada, in order to put emphasis on this seasonal issue. For information, comments, or to obtain additional copies, please contact the Transport Canada Civil Aviation Communications Centre at 1 800 305-2059 or on the Web site at

Improving Air Operator and Airport Operations Using a "Code Grey" Fog Forecasting System
by Martin Babakhan, The University of Newcastle (Australia), Newcastle, New South Wales, Australia and John W. Dutcher, Dutcher Safety and Meteorology Services, Halifax, Nova Scotia, Canada

Given some recent fog events at the Halifax International Airport, a Transport Canada System Safety Specialist from the Atlantic Region suggested that the work of two researchers to develop a proactive fog forecasting system to improve flight dispatcher planning and flight crew decision making may be of interest to ASL readers. More about this topic can be found at -Ed.

Low ceilings and reduced visibilities impact departures and arrivals at airports worldwide. Beyond interrupting flight schedules and inconveniencing passengers, such conditions can be financially costly to both air operators and airports. Forecasting short-term variations in airport conditions, such as visibility and cloud base, is therefore important for the safe and economic operation of airlines. Airline dispatchers must account for the possibility of delays due to such impeding weather phenomena and decide whether extra fuel should be loaded onto an aircraft. This decision-about weather conditions two hours or more after departure-must be made one to two hours before the flight leaves.

These decisions require accurate and timely forecasts, which are made using standard aerodrome forecasts (TAF)-and TAFs have some limitations. A significant phenomenon (e.g. thunderstorm, fog) must have a probability of at least 30 percent before it can be placed in the TAF, and there are additional restrictions on the use of TEMPO and BECMG groups. The end result is that TAFs are typically conservative, even when forecasters feel that the phenomenon could occur within the forecast period. The stated reason is that forecasters must be mindful of the potential impact of their TAFs in driving operational decisions. TAFs do drive operational decisions in the aviation industry, but one can argue that if there is even a possibility (under 30 percent) of a significant weather phenomenon affecting operators and airport operations, it should be reported to them.

In Australia, Bureau of Meteorology forecasters face similar restrictions on what they may include in TAFs. Internally, however, they use a system called "Code Grey" for significant phenomena with a 10–20 percent probability. A Code Grey signals forecasters to continually monitor conditions to see whether they develop further and warrant an amendment to the TAF-which typically comes hours after the original TAF is issued. If an air operator or airport adopted a similar "Code Grey" system, it could begin its strategic operational planning hours earlier than usual, while continually monitoring the situation and updating its plans.
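The threshold logic described above can be sketched in a few lines. This is a minimal illustration, not anyone's operational software: the function name and the handling of the 20–30 percent band (the article specifies only "10–20 percent" for Code Grey and "30 percent or more" for TAF inclusion) are assumptions for this sketch.

```python
def classify_fog_risk(probability_percent: float) -> str:
    """Map a forecast fog probability (%) to a dispatch-planning category.

    Thresholds follow the article: >= 30 % is reportable in the TAF;
    10-20 % triggers an internal "Code Grey" watch. Treating the 20-30 %
    band as Code Grey is an assumption made for this illustration.
    """
    if probability_percent >= 30:
        return "TAF"        # significant enough to appear in the TAF itself
    if probability_percent >= 10:
        return "CODE_GREY"  # monitor; begin fuel/alternate planning early
    return "NO_ACTION"      # below the internal monitoring threshold
```

A dispatcher's planning tool could call this each time a new probability estimate arrives and escalate its alerting accordingly.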

We developed such a system for a large airline based in Australia. With the chief pilot and director of flight operations, a "Code Grey" system for use by their flight dispatchers and flight crews was developed. We also developed a "Fog Model" for Sydney's Kingsford Smith International Airport (YSSY), which determines the probability of fog events for each month under certain temperature conditions and wind profiles. By combining the "Code Grey" system with the Fog Model, flight dispatchers have improved their forecasting and operational performance.
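The article does not publish the Fog Model itself, but the kind of climatological lookup it describes-a monthly base fog probability adjusted for temperature and wind conditions-might look something like the following. Every number and modifier here is an invented placeholder, not the authors' actual model.

```python
def fog_probability(month: int, spread_c: float, wind_kt: float,
                    monthly_base: dict[int, float]) -> float:
    """Estimate a fog probability (%) from a monthly climatology.

    Hypothetical sketch of a Fog-Model-style lookup: `monthly_base` maps
    month number to a baseline fog probability, which is then scaled up
    when conditions favour fog. The modifiers are illustrative only.
    """
    p = monthly_base.get(month, 0.0)
    if spread_c <= 2.0:  # temperature close to dew point favours fog
        p *= 1.5
    if wind_kt <= 5.0:   # light winds favour radiation/advection fog
        p *= 1.3
    return min(p, 100.0)  # cap at 100 %
```

The resulting probability could then be fed into a Code Grey-style threshold check to decide whether dispatchers should begin early contingency planning.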

This has also allowed for improved decision making on the flight deck. The flight crews working with the flight dispatchers can continually monitor the weather conditions and make decisions on diverting to alternates, etc. In addition to enhanced decision making and safety, this program yielded significant savings in fuel costs. These systems are meant to supplement the total weather picture for crews and airlines operating in areas prone to fog and other low visibility phenomena. They are not meant to replace the traditional aerodrome forecasts.

Aircraft Safety Through Delegation - Regulation Oversight Accountability
REMINDER-2006 Delegates Conference

The Aircraft Certification Branch will host the 2006 Delegates Conference at the Ottawa Congress Centre, in Ottawa, Ont., from June 27 to 29. Any delegates who have not yet received an invitation can register electronically at, or by contacting Mr. G. Adams at 613 941-6257, or e-mail
