One of the rookie air traffic controllers in my area had an operational error last week.
An operational error in air traffic control occurs when a controller fails to follow the separation standards and gets two or more aircraft too close together, or when he gets an aircraft into another controller’s airspace (or other protected airspace) without permission.
The rookie was transferring a flight to another controller, which usually involves taking the data block (the computer information tag on the radar scope associated with an airplane) and flashing it at the receiving controller. This is called an automated “handoff.” The receiving controller sees the new data block flashing on his radar display and makes a computer entry to accept the handoff.
In many cases these handoffs are started automatically by the computer when the aircraft gets close to a sector boundary.
When the next controller takes (accepts) the handoff, he has given the first controller permission to let that aircraft enter his airspace, and the first controller then transfers the aircraft to the next controller’s frequency. These handoffs are made all across the country as a flight transits from one sector and facility to another.
It is illegal to allow one’s aircraft to enter another controller’s airspace without a handoff or other coordination. Failing to hand off an aircraft, or to otherwise coordinate with the next controller, before it enters that controller’s airspace is an operational error (more specifically, an airspace deviation).
In the rookie’s case, after the automated handoff started, the aircraft requested a different altitude. This meant the controller had to take the handoff back (stop it flashing at the next controller) to put in the new altitude and then assign it to the aircraft. He then attempted to make the computer entry to re-initiate the automated handoff, and turned his attention to other aircraft and duties he needed to perform elsewhere in his sector.
Unfortunately he failed to notice that he had entered an improper computer message and the automated aircraft handoff never re-started. By the time he looked back, the aircraft was now outside of his sector in another controller’s airspace without a handoff; an operational error/airspace deviation had occurred.
This was a simple human error. The controller thought he had started an automated handoff to the next controller, but he hadn’t. By the time he noticed his mistake, it was too late.
Normally, as I mentioned, the computer starts these handoffs automatically to ensure that it gets done. But when the controller took the handoff back after it had already started, the computer “inhibited” the handoff; in other words, it would no longer flash automatically.
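The sequence that tripped up the rookie can be sketched as a toy state model. To be clear, the class and method names below are invented for illustration; the real FAA automation is far more complex. The sketch just shows the trap: taking a handoff back inhibits the automation, so a silently failed manual re-initiation leaves the data block not flashing at all.

```python
# Hypothetical sketch of the handoff automation behavior described above.
# All names are invented; this is not the actual FAA (HOST/ERAM) logic.

class Handoff:
    def __init__(self):
        self.flashing = False        # data block flashing at the next controller
        self.auto_inhibited = False  # computer will no longer auto-initiate

    def auto_initiate(self):
        """Computer starts the handoff as the aircraft nears the boundary."""
        if not self.auto_inhibited:
            self.flashing = True

    def take_back(self):
        """Controller retrieves the handoff (e.g. to amend the altitude).
        Side effect: the automation is inhibited from restarting it."""
        self.flashing = False
        self.auto_inhibited = True

    def manual_initiate(self, entry_ok):
        """Controller re-initiates by hand; an improper entry silently fails."""
        if entry_ok:
            self.flashing = True

h = Handoff()
h.auto_initiate()                   # computer flashes the data block
h.take_back()                       # controller pulls it back for the altitude change
h.manual_initiate(entry_ok=False)   # improper computer message: nothing happens
# The computer will not re-initiate (inhibited), the manual entry failed,
# and the aircraft crosses the boundary with h.flashing still False:
# an airspace deviation.
```

The point of the sketch is the asymmetry: the automation protects the controller right up until he touches the handoff, and then quietly stops protecting him.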
Undeniably the controller made a mistake in failing to complete a handoff or coordination prior to the aircraft entering the next controller’s airspace.
But it does raise the question: why does the computer automatically initiate handoffs, but then at some point prevent those automatic handoffs from occurring?
I hate to say errors happen all the time, but airspace deviations aren’t that uncommon. Most are non-events that don’t result in any actual degradation of safety, but they violate the rules regardless. And the air traffic rules are what keep the system safe.
We used to say, “No harm, no foul,” meaning that if nothing beyond the airspace deviation itself occurred, no one really cared. The FAA isn’t particularly interested in investigating or processing these errors or otherwise doing anything about them. But they are inevitable given the way controllers work air traffic.
However a few years ago the FAA got really interested in “metrics” and decided to take steps to reduce the number of operational errors that were occurring within the air traffic system.
You might think they would have come up with new procedures and/or equipment to accomplish this task. Either might have helped reduce human errors like the one I just described.
But the FAA doesn’t like to actually acknowledge human error, mostly because they have no solution for it. As controllers we’re constantly reminded by managers that if we just “try harder,” these errors will be prevented. It’s the old “Doctor, it hurts when I do this” joke (“Then stop doing that.”). It’s not a real solution, but it’s the best they can come up with.
Instead, in order to reduce the number of controller errors, the FAA simply redefined how operational errors were reported. They came up with an error category called “proximity events,” wherein even though the minimum separation standards weren’t met, the event wouldn’t have to be reported as an error. Magic: fewer errors!
The new proximity event (from the FAA 7210.663) is defined as (my emphasis):
A loss of separation minima where 90 percent or greater separation is retained in either the horizontal or vertical plane…<snip> A Proximity Event is not an operational error.
So overnight the FAA eliminated some of its operational errors simply by redefining the reporting standards for failures to meet the separation standards. They didn’t rewrite the separation rules for air traffic controllers; those stayed the same. All that changed was the way those events are reported by the FAA.
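The 7210.663 language quoted above can be sketched as a simple classification rule. This is a simplified, hypothetical reading: the 5 NM horizontal and 1,000 ft vertical figures are common en-route radar separation minima, but the actual order’s criteria are more detailed than the threshold check shown here.

```python
# Simplified, hypothetical sketch of the "proximity event" reclassification.
# Assumed minima (common en-route radar figures; not from the order itself):
H_MIN_NM = 5.0     # horizontal separation minimum, nautical miles
V_MIN_FT = 1000.0  # vertical separation minimum, feet

def classify(horizontal_nm, vertical_ft):
    """Classify an encounter between two aircraft (toy version)."""
    if horizontal_nm >= H_MIN_NM or vertical_ft >= V_MIN_FT:
        return "no loss of separation"
    # Separation was lost. Was 90% or more of the minimum retained
    # in either the horizontal or the vertical plane?
    if horizontal_nm >= 0.9 * H_MIN_NM or vertical_ft >= 0.9 * V_MIN_FT:
        return "proximity event"  # per 7210.663, not an operational error
    return "operational error"

print(classify(4.6, 400))  # 92% of 5 NM retained -> "proximity event"
print(classify(3.0, 500))  # under 90% in both planes -> "operational error"
```

Under this scheme the same two aircraft, the same miss distance, and the same separation rules can produce a different entry in the books, which is exactly the point being made above.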
Unfortunately for our young friend the redefinition of the reporting rules didn’t include airspace deviations. Those are still errors just like they used to be.
The FAA has no real or viable method of actually making the air traffic system safer. Instead they’ve simply changed the reporting system so that the metrics look better; and in bastardizing the reporting system they’re intentionally covering up the number of errors occurring within the air traffic system.
In like fashion, after the multiple bird strikes that caused the U.S. Airways Airbus to ditch in the Hudson River on January 15th, 2009, the FAA proposed legislation to keep bird strike data hidden from the general public (although that proposal was later rejected by the Department of Transportation Secretary).
Apparently the FAA would prefer to keep the flying public in the dark about error and accident statistics.
So not only do they conveniently reclassify operational errors to make them seem less prevalent, they want to cover up other safety related issues as well.
Disturbingly, in the fall of 2007, NASA also refused to disclose safety survey results from pilots gathered in an $8.5 million four year project.
At a briefing in April 2003, FAA officials expressed concerns about the high numbers of incidents being described by pilots because the NASA results were dramatically different from what FAA was getting from its own monitoring systems.
An FAA spokeswoman, Laura Brown, said the agency questioned NASA’s methodology. The FAA is confident it can identify safety problems before they lead to accidents, she said.
So the survey results highlight a lot of safety-related problems within the aviation industry, and both NASA and the FAA want them buried, in spite of the fact that:
“The data is strong,” said Robert Dodd, an aviation safety expert hired by NASA to manage the survey. “Our process was very meticulously designed and very thorough. It was very scientific.”
The FAA doesn’t have a consistent or reasoned approach to aviation safety. It’s evident they’re more concerned about the appearance of safety than the real thing. And if the data shows a problem, they’ll simply ignore, reclassify or hide the data.
That’s why the relatively new ATSAP program (a safety reporting system) the FAA is deploying is a hard sell to a lot of controllers. Ideally safety reporting systems free from reprisal or punishment for reporting safety violations are good for improving safety-related systems. But in order for those types of programs to work, the organizations using them have to want to actually improve and address those problems; not rationalize, explain, or hide them away (which is what the FAA is really good at).
Last month, a memo written by an FAA manager alleged safety concerns within the Denver TRACON: the inexperience of its controller workforce had forced the facility to issue additional miles-in-trail restrictions to adjacent facilities to “spread out” the aircraft landing at the Denver airport.
An FAA Director disputed that memo, claiming (my emphasis):
“As the letter is written, I would agree with you it sounds alarming,” said Kathryn Vernon, the FAA’s Director of Western Terminal Operations. “And I understand the letter makes it look like we had a situation we had to get under control. I would disagree with that,” said the FAA official. “There is not a safety issue in the Denver airspace and Colorado airspace.”
If FAA management won’t listen to its own managers, why would they listen to controllers?
There are no real safety concerns to FAA management; at least none that they’ll acknowledge.
It’s no wonder that the FAA has long been known as the “Tombstone Agency.”
These days they’re mostly concerned about satisfying their “customers” (the airlines) and making them look good, and not about the flying public who counts on the FAA to mandate and oversee safety concerns within the aviation industry (namely the airline industry).
Unfortunately none of it changes the fact that there are still a lot of safety problems within the aviation industry, including the possibility for human error, both in the cockpit and behind the radar scopes.
And until the FAA stops trying to sell the American flying public a bill of goods, those safety problems will never be seriously or earnestly acknowledged or fixed.