Flight Operations

Focus on CRM

The following two articles continue our campaign to promote Crew Resource Management (CRM) concepts and principles, in the overall context of extending contemporary CRM training to all commercial pilots. The first, entitled “Single-Pilot Resource Management (SRM) Competencies”, is from Dr. Suzanne Kearns, an assistant professor teaching Commercial Aviation Management at the University of Western Ontario. The second is a typical scenario-based airline case study for CRM training programs.

Single-Pilot Resource Management (SRM) Competencies

by Dr. Suzanne Kearns

As a follow-up to the article written by Alexander Burton, entitled “Single-Pilot Resource Management (SRM)” and published in Aviation Safety Letter (ASL) 3/2012, this article discusses SRM competencies. As mentioned in the previous article, aviation has entered a curious time when the aircraft we fly are statistically safer than the pilots who fly them. The reality is that, following a mechanically caused aviation accident, it is possible to identify a faulty component and then fix the flaw on all operational aircraft. Following an accident caused by pilot error, it is relatively complex to identify why the human being made the mistake and then “fix” the flaw in all operational pilots!

However, that is the ultimate goal of SRM—to understand the characteristics and limitations of the human mind and body and how these factors can lead to poor performance and, eventually, accidents. SRM training is based on a large body of work gathered by aviation safety researchers and accident investigators.

Although it can be hard on a pilot’s ego to be reminded of their limitations, it is not something to be embarrassed by. Think of it this way: everyone knows that it would be unreasonable if a chief pilot asked a new hire to go out to the ramp, pick up an aircraft, and carry it inside the hangar. Clearly, a pilot would not have enough physical strength to carry out this ridiculous request. It is easy to understand human physical limitations.

However, the same chief pilot may ask a new-hire pilot to complete a trip that would require the pilot to stay awake for a 24-hr period. This situation would cause fatigue, which could result in increases in pilot errors. Yet, when it comes to mental limitations such as fatigue, we often expect a pilot to tough it out. The reality is that human beings are limited in their ability to stay awake and alert for an extended period of time. This limitation is a natural part of being human. It is important to recognize and understand these limitations, just as we understand the limitations of our physical strength.

SRM competencies are elements of pilot performance that are impacted by natural human limitations. Broadly, some SRM competencies include situational awareness, workload management, fatigue management, and decision making. It is helpful to understand these factors, as they can help us make safer choices in the air and on the ground—which is the goal of SRM training.

Situational awareness

Situational awareness refers to a pilot’s ability to

  1. perceive things in their environment,
  2. understand their meaning, and
  3. predict how they will impact the flight.

For example, was the pilot aware of the high terrain? Did they understand how dangerous it was? Were they able to predict a collision with terrain?

Situational awareness can be thought of as a pilot’s mental picture of their environment. Researchers assess a pilot’s situational awareness in a simulator by pausing mid-flight, asking the pilot to close their eyes, and having them describe elements of their environment by memory. This can be replicated during your flights (with a co-pilot, of course) if you close your eyes and challenge yourself to recall as many details about your environment as possible.

Unfortunately, pilots sometimes lose situational awareness. This can lead to a specific type of accident called controlled flight into terrain (CFIT) (pronounced “see-fit”).



A CFIT accident occurs when a pilot unintentionally flies a perfectly good airplane into the ground. Unfortunately, CFIT accounts for nearly 20 percent of all general aviation accidents.

How can a pilot lose situational awareness? Unlike a computer, human memory has a limited capacity. You can only remember a certain number of things in your environment before items begin to slip by unnoticed. Generally, researchers suggest that humans can remember approximately seven, plus or minus two, chunks of information. However, under high-stress conditions, this capacity shrinks to about two or three chunks of information.

Everyone has experienced the frustration of forgetting something. Most of the time, the impact of this slip is minor. However, when a pilot is forced to make a snap decision and they don’t have an accurate mental picture of their environment, it can lead to an accident such as CFIT. When flying, remember that your memory is limited. If you encounter a stressful situation or get behind the aircraft, your ability to maintain a mental picture of your environment will be reduced. Don’t be shy about asking for help, informing air traffic control (ATC) about your situation, or climbing to a higher altitude until you mentally catch up with the aircraft.

Workload management

The amount of work a person can manage at a given time is influenced by another human characteristic that has a limited capacity: attention. Similar to memory, the number of things we can pay attention to is limited. You can think of workload capacity as a bucket—it can only hold so much before it begins to overflow. With your mental bucket, as you begin to take on more tasks, the bucket fills. Eventually your mental bucket will reach capacity and then spill over. When you have exceeded your capacity, you will begin to make mistakes. What is less understood is where you will make those mistakes—though it is expected that you will make mistakes on the tasks you place at the lowest priority.

For this reason, when teaching pilots how to manage their mental workload, we introduce prioritization strategies. The best known is ANCS, which stands for aviate, navigate, communicate, and manage systems. This strategy is meant to serve as a reminder, when your mental bucket is full, to focus your attention on flying the aircraft first. It doesn’t matter how precisely you manage the onboard systems if you fail to fly the aircraft safely!
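The shedding of low-priority tasks under a full mental bucket can be sketched as a simple ordering rule. The task names, the priority mapping, and the notion of a fixed numeric "capacity" below are illustrative assumptions, not part of any operational system.

```python
# Illustrative sketch of ANCS prioritization: when workload exceeds
# capacity, tasks are shed from the bottom of the priority order.
ANCS_PRIORITY = {"aviate": 0, "navigate": 1, "communicate": 2, "systems": 3}

def prioritize(tasks, capacity):
    """Keep only the highest-priority tasks that fit within capacity."""
    ordered = sorted(tasks, key=lambda t: ANCS_PRIORITY[t])
    return ordered[:capacity]

# With a full mental bucket (capacity for only two tasks), systems
# management and communication are the first to be shed.
print(prioritize(["systems", "communicate", "aviate", "navigate"], 2))
# → ['aviate', 'navigate']
```

The point of the sketch is simply that the mistakes land on whatever is lowest in the ordering, which is why flying the aircraft must sit at the top.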

Fatigue management

The nature of aviation is that pilots are prone to sleep disruptions from jet lag, long duty days, or a lack of quality sleep while on the road. However, only relatively recently has the industry come to appreciate the risk associated with fatigue. As an example, the following was reported anonymously to the U.S. National Aeronautics and Space Administration’s (NASA) Aviation Safety Reporting System (ASRS):

In March of 2004, the captain and first officer of an Airbus A319 headed to the Denver International Airport both fell asleep. The pilot reported that he flew a red-eye (overnight) flight, after two previous red-eye flights, and after only a one-hour break, immediately started the seven-hour flight back to Denver. In the last 45 min of the flight, he fell asleep and so did the first officer. He missed all of the calls from ATC, crossing a navigational intersection 16 000 ft too high and 350 kt too fast. The captain eventually woke up, although he wasn’t sure what awoke him, and heard frantic calls from ATC. He then woke up the first officer and they were able to land the aircraft without further incident.



Although falling asleep while flying is an extreme example, fatigue poses other threats as well. Researchers have identified that, when fatigued, people have a greater tolerance for risk. For example, fatigued drivers stop checking their blind spot—not because they have forgotten how to properly drive a car, but because fatigue is causing them to accept this risk. It is expected that fatigue will result in similar corner cutting when piloting an aircraft, which leads to increased errors.

Overall, humans are naturally effort conserving. This means that it is in our nature to try to accomplish a task with as little effort as possible. When we are fatigued, this tendency is more pronounced and can lead to very dangerous behaviour. In fact, fatigue can impair a person’s performance in a way similar to alcohol intoxication. Once a person has been awake for 18–24 hr, their performance may be impaired to a degree comparable to a blood alcohol concentration (BAC) of 0.1 percent. This BAC would be experienced if an average-sized man drank six beers in an hour.

Understanding that fatigue is directly linked to an increase in pilot error is important. This means that no one can choose to tough it out and avoid the fatigue-related dip in performance. It is important to be aware of your level of fatigue and appreciate that it increases risk, so that you can make informed decisions about when it is safe for you to fly.

Decision making

Even the best-trained pilots are prone to making poor decisions in the heat of the moment. This SRM skill focuses on strategies to help pilots make the most effective decisions in the cockpit. Everyone likes to think of themselves as a rational decision maker. However, this is not always true.

For example, consider being in a checkout line at a store, waiting to purchase a package of computer paper for $11. While in line, the person behind you says that there is a big sale at an office store 15 min away where you can get the same package of paper for only $4. Would you drive the 15 min to save the money? Most people in this situation would choose to leave and purchase the more affordable pack of paper.

In another situation, if you were lined up in a store to purchase a new suit for $590 and a fellow shopper said you could travel to another suit store 15 min away and buy the suit for $583, would you leave the store? Most people would choose to stay and buy the $590 suit.

Consider for a moment whether this decision is logical. In either case, you would be saving $7. Logically, you are giving up 15 min of your time for $7 in both situations. However, when $7 is just a small piece of a large purchase, people value the dollars less. Another example of this occurs during negotiations to purchase a house, where people who would otherwise pinch pennies are willing to haggle over thousands of dollars without batting an eye.
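The arithmetic behind the two purchases makes the bias explicit: the absolute saving is identical in both cases, and only the saving relative to the purchase price differs, which is what our intuition latches onto.

```python
# Absolute vs. relative savings for the two shopping scenarios above.
def savings(price, sale_price):
    absolute = price - sale_price
    relative = absolute / price  # fraction of the purchase price saved
    return absolute, relative

paper = savings(11, 4)    # computer paper: $11 here, $4 at the other store
suit = savings(590, 583)  # suit: $590 here, $583 at the other store

print(paper)  # absolute saving $7, about 64% of the price
print(suit)   # absolute saving $7, about 1% of the price
# The absolute saving ($7 for a 15-min drive) is identical; only the
# relative saving differs, yet most people decide on the relative figure.
```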

In an aviation context, it is important to understand human bias in decision making as most decisions are not based on a logical weighting of all options—as we would like to believe. Similar to the $7 example, the amount of risk a pilot is willing to accept varies depending on their situation and environment. For example, your decision of whether or not to report a fellow pilot skipping their walk-around may vary depending on whether you are used to a flight school with a perfect safety record and an open safety culture or a school that continually cuts safety corners. Human decisions are heavily influenced by environment and past experiences.

In addition, human beings anchor on previous decisions. This means that, after we have made one decision, it becomes much easier for us to make the same decision in the future. If a student pilot chose to complete training at the flight school with poor safety standards, it would be easier for them to accept low levels of safety throughout their entire professional career. Research suggests that a single decision can impact decision making years in the future. It is important that we critically evaluate our decisions and consider how our habits were formed in the first place—particularly in relation to safety.

The following is an example of bad decision making in action:

A story circulated in the media a while ago about a pilot who was flying his Piper Tri-Pacer from the Modesto Airport with a passenger who had never flown in an airplane before. Unfortunately, he had to make an emergency landing due to smoke coming from the engine.

After the pilot performed an inspection, he determined that a problematic hose clamp was to blame. He went to his local Wal-Mart to get a replacement, and then “fixed” the problem himself. No mechanic was called in to check the work.

The pilot took off again and, unfortunately, the cockpit began to fill with smoke a second time. An emergency was declared and the pilot executed a second landing. When the pilot investigated, he determined that the hose had a hole in it. He replaced the hose and then took off a third time.

Not surprisingly, this time the engine caught fire and the pilot was forced to make a third emergency landing. The aircraft was damaged by the hard landing and destroyed by the subsequent fire. The poor passenger was so frightened that, during the landing, she threw herself from the aircraft onto the runway and had to be taken to the hospital.

This example of bad decision making is rather ridiculous. It is easy for us to consider the pilot crazy and dismiss the implications of this event. However, these poor decisions were probably linked to the pilot’s past experiences. For example, if fellow pilots in his club exhibited similar decision making, or if he had acted in a similar way in the past without incident, those experiences could have led to this situation.

Ultimately, it is important to understand the biases that influence our decision making and to critically consider whether or not our choices are based on logic.

Improving SRM skills

After exploring examples of SRM, the question becomes how these skills can be improved. Traditionally, the aviation industry has relied on pilots naturally developing SRM skills by spending time building hours in the real world. The perspective is that, while building experience, pilots will be exposed to and manage enough threats and errors that they will naturally develop SRM skills.

However, there is a major challenge with the traditional approach to building SRM skills. With the predicted pilot shortage on the horizon, future pilots will begin to progress into senior positions more quickly, with less time building skills naturally during the hours-building phase of their career.

To compensate for this, the burden will fall on aviation training organizations to identify methods of improving SRM skills within a training environment. Many airlines around the world develop safety training by using an observational strategy, called a line operations safety audit (LOSA). Within a LOSA, a company gathers data by hiring a trained observer pilot to sit in the jump seat behind crews and write down all the threats and errors that are faced. However, it can be difficult or impossible to conduct a LOSA within general aviation operations as the cost would be prohibitive, operations vary significantly, and many aircraft lack jump seats altogether.

However, there is a more convenient option—hangar talk stories. Stories are a powerful medium for learning. Storytelling is the fundamental way knowledge has been passed down from generation to generation—far preceding humans’ ability to produce the written word. Research has demonstrated that stories are an extremely popular method of conveying information across all cultures.

We often take what people learn from stories for granted because it is something that happens naturally outside of a classroom. However, junior pilots can develop SRM skills by listening to the experiences of senior pilots. If you listen closely when a senior pilot describes a situation that challenged their skills and required creative thinking to maintain flight safety, you may realize that they were using SRM skills. In a recent study, we examined 130 hangar talk stories. Pilots were asked to describe a situation they encountered that challenged them and required them to think outside the box to maintain flight safety. After analysis, it was revealed that 39 percent of the stories involved decision-making skills, 26 percent communication skills, 20 percent situational awareness, and 15 percent task-management skills.

General aviation companies can gather this data in-house through simple “hangar talk surveys” that ask pilots to share their experiences. The results of hangar talk surveys can be used to create scenario-based exercises within a simulator that target specific SRM skills. Through this approach, it may be possible to accelerate the development of SRM skills without having to gather thousands of flight hours in the real world, and it could lead to SRM training becoming a standard component of initial and recurrent pilot training.

Dr. Suzanne Kearns is an assistant professor teaching Commercial Aviation Management at the University of Western Ontario. Dr. Kearns is also a commercial airplane and helicopter pilot and an aviation safety researcher. Her most recent project is the development of a pilot safety app called “m-Safety” which is available through iTunes. She can be reached at skearns4@uwo.ca.

Scenario-Based CRM Case Study: Stall Warning Device Event

The following event, which occurred in Australia in March 2011, was recommended as an excellent case study for scenario-based crew resource management (CRM) training programs. Operators are therefore encouraged to consider it for that purpose. The Aviation Safety Letter (ASL) will include more of these examples, as we strongly believe the discussions generated by this training method yield great benefits for the crews involved. This report has been slightly shortened and de-identified for use in the ASL. See Report AO-2011-036 to read it in full.

Summary

On March 1, 2011, a Bombardier DHC-8-315 was conducting a regular public transport flight from Tamworth Airport to Sydney Airport, New South Wales, Australia. The crew were conducting a Sydney Runway 16L area navigation global navigation satellite system [RNAV (GNSS)] approach in vertical speed (VS) mode. The aircraft’s stick shaker stall warning was activated at about the final approach fix (FAF). The crew continued the approach and landed on Runway 16L.

The stick shaker activated at a speed 10 kt higher than was normal for the conditions. The stall warning system had computed a potential stall on the incorrect basis that the aircraft was in icing conditions. The use of VS mode as part of a line-training exercise for the first officer meant that the crew had to make various changes to the aircraft’s rate of descent to maintain a normal approach profile.

On a number of occasions during the approach, the autopilot pitched the aircraft nose up to capture an assigned altitude set by the pilot flying. The last recorded altitude capture occurred at about the FAF, which coincided with the aircraft not being configured, the propeller control levers being at maximum RPM, and the power levers at a low power setting. This resulted in a continued speed reduction in the lead-up to the stick shaker activation.

Each factor that contributed to the occurrence resulted from individual actions or was specific to the occurrence. The Australian Transport Safety Bureau (ATSB) is satisfied that none of these safety factors indicate a need for systemic action to change existing risk controls. Nevertheless, the operator undertook a number of safety actions to minimize the risk of a recurrence.

In addition, the occurrence highlights the importance of effective CRM and of the option of conducting a go-around should there be any doubt as to the safety of the aircraft. Transport Canada, which regulates the aircraft manufacturer, advised that it would publish a summary of this occurrence and recommend that operators consider using it in their scenario-based CRM training programs.

Factual information

Sequence of events
At about 18:10 local time on March 1, 2011, the flight crew conducted the approach to land at Sydney using the Runway 16L RNAV (GNSS) approach. The instrument landing system (ILS) approach that was normally used for an approach and landing on this runway was not operative at the time. The captain was the pilot not flying (PNF), and the first officer was the pilot flying (PF).

Both pilots stated that an approach brief was completed and that it included an overview of the approach chart procedure, the missed approach procedure and the identification of any additional restrictions or requirements. The approach was conducted with the autopilot engaged and using the flight director in VS mode, rather than the vertical navigation (VNAV) mode. The VNAV mode uses a higher level of automation than the VS mode, which maintains a constant descent profile to an assigned altitude entered by the crew. When the assigned altitude is reached, the aircraft flight director and autopilot automatically level the aircraft off unless another, lower altitude has already been entered.

The flight crew reported that the approach was commenced in instrument meteorological conditions (IMC) but as it progressed, the conditions became visual and the approach was operated in visual meteorological conditions (VMC) until landing.

The captain stated that, approaching the initial approach fix (IAF), the crew had started to feel some time pressure to complete all of the necessary checklist items and actions for the approach. It was at this point that the captain identified that the aircraft was no longer in icing conditions and so turned off the ice protection switch, without informing the first officer. During this action, the captain did not turn off the increased reference speed switch. That switch is selected ON for flight in icing conditions and sets the stall warning to activate at a lower angle of attack (thus raising the speed at which the stall warning activates).

The captain reported initially being high on profile during the approach; however, by the ‘SYDLI’ intermediate fix (Figure 1), the aircraft was back on profile but, as a result, needed to slow down. In response, the captain selected the propeller control levers to maximum RPM, which changed the pitch of the propellers and effected a significant slowing of the aircraft. In addition, the first officer reported that the power levers were retarded to flight idle from about the SYDLI position fix until the FAF. The use of maximum RPM at this point in the approach, rather than at the FAF, was not considered normal practice by the operator.

The first officer reported that, despite approaching the FAF, they had not yet configured the aircraft for landing with flaps extended or the landing gear down. In contrast, the captain stated that the landing gear was down prior to the FAF but the flaps were not extended.

The first officer adjusted the assigned altitude in the flight director system during the approach; however, the captain indicated that these adjustments were not happening fast enough to allow a continuous descent, and that the autopilot kept capturing the assigned altitude and levelling off.

Prior to the FAF, the captain noticed the airspeed was decreasing through 130 kt and called “airspeed”. The recorded flight data showed that at about this time the autopilot commenced pitching the aircraft up in anticipation of capturing the preselected altitude set by the first officer. This further reduced the airspeed to around 114 kt, the stick shaker activated, and the autopilot disconnected.

The captain called “stick shaker”, took over as PF and momentarily advanced the power levers before continuing the descent. The first officer reported assuming the role of PNF and conducted the checklist items in preparation for landing, including selecting flaps to 15°.

The aircraft continued on the approach and the crew reported they were stable by 500 ft, in accordance with the operator’s stable approach procedure. They then conducted a landing on Runway 16L. After landing, the first officer noticed the increased reference speed switch was still in the ON position.

Figure 1.

Pilots

Both the captain and first officer were properly licensed and qualified for the flight. The captain had several thousand hours on type, while the first officer had a total of 3 250 hours, with about 26 on type. Those hours were line training and had occurred within the last two weeks. The training notes from the first officer’s endorsement indicated there had been a recurring issue with speed, descent and power management during approaches, and those exercises were successfully repeated before undertaking the next training session. The first officer satisfactorily completed the endorsement training program.

As part of the company’s training program, all pilots initially completed CRM as well as threat and error management (TEM) training as part of the induction program. CRM and TEM training then formed part of the flight crew’s annual recurrent training. CRM is a strategy for pilots to use all available resources effectively (including other crew, air traffic control [ATC], equipment and information).

Approach procedures

The operator’s standard operating procedures (SOPs) required that, when conducting an instrument approach, the relevant aircraft speed and configuration should be accomplished prior to a defined position in the approach. For an RNAV (GNSS) approach, the crew was required to achieve a speed reduction to 180 kt by the IAF. From the IAF, the aircraft was to be slowed further to a speed below 163 kt and then to 150 kt, with the PF expected to achieve a target speed of 120 to 130 kt by the FAF.

Before the aircraft passed the FAF, the operator’s SOPs required that the PF would request the PNF to select the gear down, set the flaps to 15° and initiate the landing checklist. The crew reported that the aircraft’s speed was not stable and the configuration was not finalized prior to reaching the FAF.

At the FAF, the propeller control levers were to be advanced to provide maximum RPM, the landing checklist was to be completed and the speed reduced to Vref+5 to Vref+20 kt by 500 ft above ground level (AGL). If these conditions were not met by 500 ft AGL, a go-around was to be conducted. Additionally, according to the operator’s flight administration manual (FAM):

Flight crew are encouraged to perform a missed approach whenever any doubt exists as to the safe continuation of an approach and landing.
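The operator’s stabilized-approach gate can be summarized as a simple conjunction of conditions. The speed window follows the SOP figures quoted above; the function name, parameter names, and the example Vref value are illustrative assumptions, not from the operator’s documentation.

```python
# Illustrative sketch of the operator's 500-ft AGL stabilized-approach
# gate for the RNAV (GNSS) approach, using the SOP targets quoted above.
def stable_by_500ft(gear_down, flaps_15, landing_checklist_done,
                    ias_kt, vref_kt):
    """Return True only if all stabilized-approach conditions are met
    by 500 ft AGL; otherwise a go-around is required."""
    speed_ok = vref_kt + 5 <= ias_kt <= vref_kt + 20  # Vref+5 to Vref+20 kt
    return gear_down and flaps_15 and landing_checklist_done and speed_ok

# Configured and on speed (Vref of 105 kt assumed for illustration):
print(stable_by_500ft(True, True, True, ias_kt=115, vref_kt=105))  # → True
# Gear not down by the gate: go around.
print(stable_by_500ft(False, True, True, ias_kt=115, vref_kt=105))  # → False
```

The value of framing the gate this way is that it is binary: if any single condition fails, the SOP answer is a go-around, not a judgment call.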

Previous stick shaker occurrences prompted the operator to issue a safety alert and safety investigation bulletins to all operating crew. These notices highlighted the importance of crews following SOPs and monitoring all stages of the approach. They also highlighted the need for crews to adhere to the SOPs for ceasing the use of all ice protection systems after exiting icing conditions. In addition, the safety alert detailed strategies for profile management and aids for maintaining situational awareness.

With regard to the use of automation, the company’s DHC-8 flight crew operating manual (FCOM) stated:

Use of the autopilot is encouraged for all RNAV (GNSS) approaches to reduce workload...

The autopilot can be used with the flight director in either VNAV or VS mode for an RNAV approach.

Stall warning system

Based on the aircraft’s weight, and using data available in the operator’s FCOM, the flap 15° stalling speed of the aircraft at the time of the occurrence was 81 kt. In contrast, the flap 0° stalling speed was 99 kt.

The aircraft’s stall warning system consisted of two stall warning computers, an angle of attack (AOA) vane on each side of the forward fuselage, a stick shaker on each control column, and a stick push actuator.

The aircraft’s two stall warning computers received AOA data from the respective AOA vanes, as well as true airspeed, flap angle and pitch rate information. The computers used that information to determine a compensated angle which, if greater than the stall warning threshold angle, would activate the stick shaker. That activation occurred at a speed of 6 to 8 kt above the computed stall speed.

If action was not taken by the flight crew in response to the stick shaker, and an aerodynamic stall was encountered, the stall warning computer would activate a stick push actuator to drive the control column forward. This would decrease the aircraft’s AOA to aid in the recovery from the stall.
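The stall warning computers’ decision described above reduces to a threshold comparison on the compensated angle, with the increased reference speed switch lowering that threshold so the shaker fires earlier (at a higher airspeed). The sketch below shows only that logic; the angle values and the icing offset are placeholder numbers for illustration, not the aircraft’s actual parameters.

```python
# Illustrative sketch of the stall warning decision: the compensated
# angle of attack is compared against a threshold, and selecting the
# increased reference speed switch ON lowers the threshold so the
# stick shaker activates at a lower angle (higher airspeed).
def stick_shaker_active(compensated_aoa_deg, threshold_deg,
                        increased_ref_speed_on, icing_offset_deg=2.0):
    # icing_offset_deg is a placeholder value, for illustration only
    if increased_ref_speed_on:
        threshold_deg -= icing_offset_deg
    return compensated_aoa_deg > threshold_deg

# Same angle of attack, different switch position:
print(stick_shaker_active(11.0, 12.0, increased_ref_speed_on=False))  # → False
print(stick_shaker_active(11.0, 12.0, increased_ref_speed_on=True))   # → True
```

This is why leaving the switch ON after exiting icing conditions matters: the aircraft’s actual stall margin is unchanged, but the warning fires about 10 kt early.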

According to the operator’s SOPs, the recovery action following a stick shaker was to simultaneously:

  • Call “stick shaker”;
  • Advance power levers to within 10 percent of maximum take-off power (MTOP), then adjust for maximum power;
  • Select flap 15 if flap 35 is extended;
  • Gear up with positive rate of climb;
  • Select flap zero when indicated airspeed (IAS) is above flap retraction speed.

The aircraft manufacturer advised that recent updates to the aircraft flight manual (AFM) include an immediate reduction in pitch attitude in response to a stick shaker activation, as well as stating that no configuration changes should be made.

Human factors

The first officer reported that, after passing the IAF, there was an increase in workload, predominantly due to conducting an unfamiliar approach as PF and commencing the approach in IMC. In addition, the approach was being conducted in VS mode, which the first officer had reportedly not used for an approach during line flying. Use of the VS mode required more mental calculations and data entry inputs by the PF to meet the descent profile targets than would be necessary using VNAV mode (where data entry is done before descent and the autopilot flies the required descent path).

The captain reported that the use of VS mode was to increase the first officer’s awareness of ground speed and vertical speed. The aim was to increase the first officer’s skill at maintaining a vertical profile without the use of VNAV mode.

The captain also stated that, as a result of previous flights with the first officer, he anticipated an increase in his own workload due to the need to monitor the approach and the actions of the first officer. Both flight crew reported that the clearance to conduct an RNAV (GNSS) approach caught them by surprise, as they were expecting another approach type. They both commented that this increased the time pressure, as they had to re-brief unexpectedly for the RNAV (GNSS) approach.

The first officer and captain both reported interpersonal communication issues with the other pilot prior to the commencement of the approach. The first officer reported not feeling comfortable speaking up in the line-training environment. As a result, the first officer had been scheduled to fly with another line-training captain, which was to take effect in the days following the occurrence.

Both of the flight crew also reported issues during the approach. The use of non-standard phraseologies by the first officer, and the fact that the captain was not aware the first officer was feeling overloaded, affected the conduct of the approach.

When learning a new skill, individuals move from what is known as knowledge-based performance to skill-based performance. Skill-based actions are possible once an individual is very familiar with a task and has repeated it to the extent that the actions become predominantly automatic and do not need conscious oversight. Knowledge-based performance is typical during unfamiliar or novel situations and, in contrast to skill-based performance, requires more conscious oversight and typically uses greater mental resources, increasing mental workload.

Analysis

The stick shaker activated because the aircraft’s speed had slowed to the computed stall reference speed. In this case, because the increased reference speed switch had been left on, the stick shaker activated 10 kt higher than normal for the aircraft’s configuration. The aircraft was not configured in accordance with the operator’s SOPs for the approach. This also contributed to the stick shaker activating at a higher reference speed than if the aircraft had been appropriately configured.

In addition, the target airspeed range of 120 to 130 kt for this stage of flight was not met and the action of the auto flight system’s altitude capture feature, which raised the aircraft’s nose to maintain altitude, resulted in a further decrease in airspeed. This speed reduction also contributed to the stick shaker activating.

Following the stick shaker activation at around the FAF, the aircraft was not configured for landing and the speed was not stable. According to the operator’s SOPs, if the safe continuation of the flight is in doubt, a go-around is to be conducted. Given that a stick shaker activation is an indicator of an impending stall, which could affect the safety of the flight, the lower risk option for the crew would have been to conduct a go-around.

The first officer’s training in the simulator had identified performance issues with speed, descent and power management during the approach and landing phase. While the first officer was successfully re-trained in the simulator during the endorsement, some of these issues reappeared during the approach.

The use of VS mode for the approach was a deliberate decision by the captain to make the first officer consider the vertical profile and power management. While this technique had reportedly helped other first officers in this situation, it would appear that for this first officer’s level of training and experience, the use of VS mode was not appropriate and unnecessarily increased the workload of both flight crew.

The flight crew reported feeling time pressured during the approach, which increased their workload. As a result, the captain turned off the ice protection system without informing the first officer. While the captain was identifying and completing a required action, doing so unannounced was not conducive to a shared understanding of the system state by both crew. There is a need for clarity in operating roles and close adherence to SOPs during normal operations, and this is particularly important in the line-training environment, given the first officer’s level of experience.

Despite the mismanagement of the speed and power during the approach by the first officer, which necessitated the selection of maximum RPM by the captain in order to slow down, the captain did not take over prior to the stick shaker activation, nor was a go-around initiated when the activation occurred. Although the decision of the captain to continue with the approach did not result in a further incident, the lower risk option is for flight crew to discontinue an approach or landing if at any stage there is any doubt as to the safe continuation of the flight.

The inter-personal communication issues reported by both crew appear to have affected their interactions and the learning opportunities for the first officer in the line-training environment. This was underlined by the fact that, despite having completed CRM training, the first officer felt unable to tell the captain of feeling overloaded at the beginning of the approach. The first officer’s performance during the approach may have affected the captain’s decision to continue following the stick shaker activation, as conducting a missed approach or go-around with the first officer overloaded may have further increased the workload of both flight crew.

The reported workload of the first officer during the approach, combined with the level of unfamiliarity of both the approach and the aircraft’s automation, is typical of knowledge-based performance. That is, the first officer’s performance was indicative of increased mental effort and workload, as opposed to the predominantly automatic actions used when conducting a highly familiar task.

As set out below, the investigation identified a number of factors that contributed to the occurrence. Each resulted from individual actions or was specific to the occurrence. The ATSB has assessed each of these safety factors and is satisfied that none of them indicated a need for organizational or systemic action to change existing risk controls. However, the investigation did highlight the importance of effective CRM and of the option of conducting a go-around should there be any doubt as to the safety of the aircraft.

Findings

The ATSB issued the following findings:

Contributing safety factors

  • The stick shaker system activated during the approach as a result of the increased reference speed switch being in the ON position, the associated computed reference speed being reached, and the aircraft not being configured in accordance with SOPs.
  • A lack of communication and ineffective CRM between the flight crew and non-adherence to the operator’s SOPs adversely affected crew actions and coordination.
  • Due to time pressure, inadequate CRM and the increased workload of both flight crew, the RNAV approach was not flown in accordance with SOPs.

Other safety factors

  • Despite being aware that the aircraft was not appropriately configured at the FAF, and despite the resulting stick shaker activation, the crew did not initiate a go-around/missed approach as recommended by the operator’s guidance material.
  • The conduct of the approach in VS mode rather than VNAV mode increased the workload of the first officer and captain.

Safety action

Operator

The operator has advised that, as a result of this incident, the following actions were taken:

  • Relevant sections of the training and checking manual have been reviewed and will, subject to Civil Aviation Safety Authority approval, be revised as a result of this incident.
  • The aircraft mechanical checklist was amended to include an item known as “ice protection” to confirm the status of the ice protection system.
  • A procedure was implemented to identify and heighten flight crew awareness of the minimum speed for the environmental and aircraft configuration state.
  • The Standards Department and Procedures Review Group conducted a review of approach workload and submitted the findings to the Flight Standards Review Group. These included better clarity and role definitions within documented procedures; and expanded timing and sequencing procedures to aid in management during high workload periods.
  • A group/industry workshop forum was organized to share experiences and best practices in regard to situational awareness on the flight deck. The workshop identified additional human factors competencies that the operator intends to incorporate into its training program.

Transport Canada

Transport Canada advised that given its current focus on contemporary CRM training for all commercial pilots, once the final report has been released, it will publish a summary of the occurrence in the ASL, with a recommendation for operators to consider using it in their scenario-based CRM training programs.


1 The VNAV mode utilizes the aircraft’s flight management system to fly a pre-determined profile which conforms to the published approach procedure, while the VS mode maintains a constant rate of descent to an assigned altitude entered by the crew.

2 Salas, E., Wilson, K.A., & Burke, C.S. (2006). “Does Crew Resource Management Training Work? An Update, an Extension, and Some Critical Needs”. Human Factors, 48(2), 392-412.

3 Vref: Reference speed that is commonly used to determine an aircraft’s approach speed. Vref is Vs multiplied by a factor of 1.3. Vs is the minimum indicated airspeed at which the airplane exhibits the characteristics of an aerodynamic stall.
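The Vref relationship defined in footnote 3 can be expressed as a simple calculation. The sketch below is illustrative only; the stall speed used is a hypothetical example, not a figure from the occurrence.

```python
# Illustrative sketch of the Vref definition in footnote 3:
# Vref is the stall speed (Vs) multiplied by a factor of 1.3.

def vref(vs_kt: float) -> float:
    """Return the reference approach speed, in knots, for a given
    stall speed Vs (knots) in the landing configuration."""
    return 1.3 * vs_kt

# Hypothetical example: a 90-kt stall speed gives a 117-kt reference speed.
print(vref(90.0))  # 117.0
```

Note that in this occurrence the computed reference speed was further raised, by 10 kt, because the increased reference speed switch had been left on.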

4 Rasmussen, J. (1983). “Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models”. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3).


The Flight Safety Foundation’s ALAR Tool Kit

The aim of this article is to build awareness of the Flight Safety Foundation (FSF) Approach and Landing Accident Reduction (ALAR) Task Force recommendations, and the associated FSF ALAR Tool Kit, and to encourage its use by Canadian operators and pilots.

Background on the FSF ALAR Task Force

The FSF created the “FSF ALAR Task Force” in 1996 as another phase of its controlled flight into terrain (CFIT) accident reduction initiatives, launched in the early 1990s. The task force’s final working group reports were presented in November 1998, in a 288-page special issue of the FSF Flight Safety Digest1, at the joint meeting of the FSF 51st International Air Safety Seminar, the International Federation of Airworthiness 25th International Conference and the International Air Transport Association (IATA), in Cape Town, South Africa. The task force issued detailed recommendations targeting the reduction and prevention of approach and landing accidents (ALAs). The FSF ALAR Task Force recommendations have been recognized internationally as practical tools for mitigating the risks of ALAs.

Further to those recommendations, the FSF ALAR Tool Kit was developed and distributed by the FSF as an aid to education and training, and as a resource for a variety of aviation professionals in company management, flight operations and air traffic control. The tool kit, which was updated in 2010, is a multimedia resource on a compact disc (CD). It contains the report of the FSF ALAR Task Force, with its conclusions and recommendations, the FSF ALAR Briefing Notes, videos, presentations, hazard checklists, and other documentary notes and products designed to prevent ALAs, the leading cause of fatalities in commercial aviation.

Fundamental components of the tool kit are the 33 FSF ALAR Briefing Notes. They were produced to help prevent ALAs, including those involving CFIT. The briefing notes are based on the data-driven conclusions and recommendations of the FSF ALAR Task Force, as well as data from the U.S. Commercial Aviation Safety Team (CAST), the Joint Safety Analysis Team (JSAT) and the European Joint Aviation Authorities Safety Strategy Initiative (JSSI).

Generally, each briefing note includes the following:

  • Statistical data related to the topic;
  • Recommended standard operating procedures;
  • Discussion of factors that contribute to excessive deviations that cause ALAs;
  • Suggested accident prevention strategies for companies and personal lines of defense for individuals;
  • Summary of facts;
  • Cross-references to other briefing notes;
  • Cross-references to selected FSF publications; and
  • References to relevant ICAO standards and recommended practices, U.S. Federal Aviation Regulations and European Joint Aviation Requirements.

The briefing notes cover key topics such as automation, approach briefings, human factors, crew resource management, altitude deviations and terrain avoidance manoeuvres, to name just a few. As examples, check out Briefing Note 2.1 on Human Factors, which expands on the human factors that could be involved in ALAs, and Briefing Note 2.2 on Crew Resource Management, which touches on critical aspects of crew coordination and cooperation. All 33 briefing notes are equally useful in preventing ALAs, and they are available for free download on the FSF Web site.

The IATA has endorsed the FSF ALAR Tool Kit and has recommended that its members use it. In 2001, ICAO stated that the FSF ALAR Tool Kit contained highly valuable accident prevention information and that member states should consider incorporating the material into their training programs. ICAO then purchased and distributed 10 000 copies of the tool kit at its 33rd Assembly in the fall of 2001. To date, approximately 40 000 copies of the tool kit have been distributed worldwide. The FSF ALAR Tool Kit on CD is available for online sale from FSF in English, Spanish and Russian, at a cost of $95.00 for FSF members or $200.00 for non-members.


1 Special Issue of the FSF Flight Safety Digest, “Killers in Aviation: FSF Task Force Presents Facts About Approach-and-landing and Controlled-flight-into-terrain (CFIT) Accidents”. November-December 1998 / January-February 1999.
