More FAA Statistical Manipulations

I hate to keep blogging about my annoyances with the Federal Aviation Administration (FAA), but my employer just keeps coming up with more stuff worth mentioning here, and it’s also my way of venting my frustration.

First, I will establish a little background for those not in the air traffic business so that my story will make more sense.

Air traffic control is highly compartmentalized.  That is, the system is built around the concept that a single controller is responsible for, and can freely control traffic within, only his own clearly defined section of airspace.  Entering another controller’s section of airspace requires permission, as does performing some control action on an aircraft before it is within the confines of a controller’s own airspace.

As an aircraft transits the sky it crosses through many different controllers’ airspaces.  Prior to entering the next controller’s airspace, a controller usually performs a “handoff”, which is an automated (computerized) method of asking permission to allow that aircraft to enter the next controller’s airspace.  Upon acceptance of that computer “message”, it is legal for one controller to allow the aircraft to enter the other controller’s airspace, and the aircraft is told to contact the next controller on his particular radio frequency.

That cycle is repeated over and over until an aircraft reaches its destination.

The compartmentalization is intended to ensure that only a single controller is separating aircraft within a specific area, simply to avoid confusion and possibly conflicting instructions.

(There are some nuances to that process as well but that’s enough of an explanation to continue.)
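For the programmers out there, the handoff cycle described above can be sketched as a small Python toy.  All the names here (sector identifiers, the `transit` function) are made up for illustration; real ATC automation is of course far more complex:

```python
# Illustrative sketch of the handoff cycle described above.
# All names are hypothetical; real ATC automation is far more involved.

def transit(aircraft, sectors):
    """Walk an aircraft through a chain of controllers' sectors."""
    log = []
    for current, nxt in zip(sectors, sectors[1:]):
        # The controller in `current` initiates an automated handoff request.
        log.append(f"{current}: handoff {aircraft} -> {nxt}")
        # The next controller must accept before the boundary is crossed.
        log.append(f"{nxt}: accept {aircraft}")
        # Only then is the aircraft told to switch radio frequencies.
        log.append(f"{aircraft}: contact {nxt} on new frequency")
    return log

for line in transit("NWA188", ["SECTOR-31", "SECTOR-32", "SECTOR-33"]):
    print(line)
```

The point of the sketch is just the ordering: request, then acceptance, then the frequency change — never the other way around.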

Now at the same “Quality Assurance” briefing where they covered the playback of Northwest Flight 188, our QA manager made some interesting comments.

First he noted that the number of deviations at our facility was up.  Deviations are violations of the air traffic rules in which an airplane illegally enters another controller’s or otherwise protected airspace without permission; they fall under the broader category of “operational errors”.

(The other type of operational error is a separation error, where two or more airplanes are allowed to get too close together.)

Facilities used to track their days since their last operational error and it was a “metric” the FAA loved to track.  Keeping operational errors in check was also a performance goal for the entire FAA system.

At our facility the number of days since our last operational error used to be displayed on a board at the front of the control room.

But a few years ago when the FAA decided to start aggressively tracking the air traffic system operational error metric they discovered a problem.

The problem ultimately is that air traffic control in the U.S. is still a system that places sole responsibility for keeping airplanes separated upon the air traffic controller who uses non-automated/manual methods (i.e. his eyes and brains) to keep airplanes apart.  There are some computer systems to help controllers make decisions, but all decisions are the responsibility of the controller who must multitask between all the different aircraft and job responsibilities he has while working.

It’s a system that is still prone to human error because of the high degree of manual involvement by the controller.

That’s why controllers are well paid: to perform a job that requires attention to detail, speed and accuracy with little margin for error.

But it’s an error-prone system because of the high level of human involvement, and over the years the FAA has failed to put systems and tools in place to take any significant part of that burden of responsibility off the air traffic controller.

At the same time, at some busier facilities, controllers were/are pressured to run airplanes as close together as legally possible to avoid delays, which in turn significantly reduces the margin for error.

But since air traffic controllers are human beings, they make mistakes.  They can either miss things altogether or get distracted.  When that happens sometimes operational errors occur.

Air traffic controllers realize this and do their best to not make mistakes, but mistakes happen nonetheless.  That’s part of being human.

However some FAA managers believe that controllers don’t try hard enough, and actually believe that within the current system all operational errors are preventable, completely discounting human error in the equation.

These are the same managers who claim they’re paid extra money because they have the responsibility of ensuring the safety of the entire air traffic system with their policies and procedures.  But their belief that errors are solely due to what amounts to negligence by air traffic controllers is wholesale denial of the root of the problem.  Their attitude is simply a “blame the other guy” approach: simple and easy.

So instead of trying to come up with better systems and tools to minimize human error, for years FAA managers chose to discipline controllers for having operational errors, and later to take away pay from controllers who made mistakes.  The pay penalty system was in place as recently as September of last year under the FAA’s imposed work rules.

Not that it should have been a surprise, but the FAA found that neither tactic worked.  They discovered that controllers were still making mistakes in spite of the threats the managers were using to “motivate” them.  So much for the managers ensuring or improving the safety of the air traffic system…

The situation made headlines in 2006 when it was found that controllers at the New York TRACON were having lots of unreported errors and getting airplanes too close together.

Last year the controllers, angry that the F.A.A. was reducing staffing levels at the Tracon, anonymously reported dozens of operational errors, mostly involving “compression” of planes lined up to land. F.A.A. officials, reviewing tapes of radar displays, were surprised to discover that it was apparently common practice for controllers to squeeze planes slightly closer than three miles apart. The F.A.A. then began random audits of the tapes, and discovered numerous other errors. The error rate was found to be six times higher than previously reported.

If you doubt that controllers were being punished for their mistakes, in the same article Bruce Johnson, the Vice President of the Air Traffic Organization (ATO) for Terminal Operations at the time, confirmed that fact:

<snip>…said in a telephone interview that controllers were being penalized for errors so small that it required intense analysis to find them.

Since FAA managers are tasked with “mitigation” schemes to eliminate problems (remember, they’re allegedly paid for and responsible for ensuring the entire air traffic system works), they finally gave up on their lame-brained, heavy-handed approach and tried a completely different tack.

If they couldn’t bully controllers into not making mistakes, they would simply reclassify operational errors to make them go away.  Situations that were previously considered an error would be downgraded to not being an error.

Their first step was to create a class of operational errors called “proximity events”.  This would make errors of the kind occurring at New York TRACON magically disappear.

Then they would separate airspace deviations from separation errors, making deviations an “insignificant” kind of error.

Then a year or so ago they started transitioning the entire organization into an alleged “safety culture” and “just culture”.  Along with it they started the ATSAP program, a safety program based on the airlines’ safety reporting systems.

So virtually overnight the FAA wanted controllers to believe that we went from a culture of blaming controllers, punishing them for errors, and failing to give them better tools to do their jobs, to a safety culture?!

This approach was really just another way for FAA managers to make the error numbers look better without actually doing anything to reduce the errors.  It was another aspect of their reclassification scheme that at the end of the day would make it look like the air traffic system was having fewer errors.

By no longer punishing controllers, and thereby removing their grounds for complaint, the FAA could quietly make more operational errors disappear.  They had essentially “bought” silence from the controller workforce by not prosecuting them for mistakes.

So at our Quality Assurance (QA) briefing only a week ago, the QA manager said that airspace deviation errors were up, but played down that fact, saying they thought the number was “artificially inflated” by the new ATSAP program, under which controllers were reporting many previously unreported errors.

He wanted to make sure that controllers understood that under ATSAP, errors can be “unknown” or “known” events.  In the case of a “known” event errors must be investigated and paperwork completed by the FAA, which would add to the known error tally.  “Unknown” ATSAP events would be simply (and quietly) added to the ATSAP database without any other processing (i.e., ignored).
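The “known”/“unknown” bookkeeping he described works out to something like this little Python sketch (purely illustrative; none of these names or structures reflect any actual FAA system):

```python
# Toy illustration of the "known" vs. "unknown" ATSAP accounting
# described above.  Entirely hypothetical -- no real FAA system here.

reports = [
    {"id": 1, "kind": "deviation", "known": True},   # investigated, paperwork filed
    {"id": 2, "kind": "deviation", "known": False},  # quietly filed, no processing
    {"id": 3, "kind": "deviation", "known": False},
]

# Every report lands in the ATSAP database...
atsap_database_size = len(reports)

# ...but only "known" events get investigated and added to the tally.
published_error_tally = sum(1 for r in reports if r["known"])

print(published_error_tally, "of", atsap_database_size, "errors show up in the numbers")
```

Same three deviations either way; only the “known” one ever raises the published error count.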

Our QA manager encouraged us to use the ATSAP program but reiterated that we didn’t need to make errors “known” events (which would raise the error numbers).  Instead he suggested we file our ATSAP reports without making the incident known to management (keeping them “unknown” events), which would mean management wouldn’t have to do normal error processing (and in turn raise the deviation error count).

What I heard was our QA manager essentially telling us to use the “no harm, no foul” approach to deviations and file ATSAP reports on the deviations as “unknown” events.

If the FAA was truly a “safety culture” they would encourage all error reporting so that they could examine the causes of errors and make improvements to the air traffic system.

Instead of using the metrics to criticize controller performance they could use them to determine the shortcomings of the system and try to correct them.

You would think that if anyone had received the “memo” on the “new FAA safety culture” approach, it would have been the Quality Assurance manager…

But the FAA has no real plan to actually reduce the number of operational errors within the system, other than reclassifying them.

This is obviously just another attempt by FAA management to make it look like they’re doing their jobs better (manipulating the numbers to fool the “performance based” system), when in fact it’s just business as usual and they’re doing nothing of note to improve safety within the air traffic system.

In fact if anything they’re doing exactly the opposite:  making deliberate efforts to hide the safety problems within the air traffic system.

Of course we have a running joke at our facility:  ERAM (the new computer system the FAA is developing with Lockheed Martin that has been delayed and delayed again by problems) will fix everything…
