Testing safety-critical systems is not about dashboards, coverage percentages, or velocity.
It is about judgement, responsibility, and professional integrity.
I learned this the hard way while working on large-scale flight simulator verification, including qualification activities on Boeing 777 Level D full-flight simulators. These are not academic systems. They are used to train pilots for real-world emergencies. The fidelity has to be high. The evidence has to be credible. And when you sign test results and verification documents, your name carries real responsibility.
Here are some of the most important lessons I took from that work — lessons that have stayed with me across aerospace, defence, and other regulated environments.
When I started working on large simulator programs, I assumed the biggest challenges would be technical. In reality, many of the hardest problems were social and organizational.
These systems are built by many specialized teams: avionics engineers, software developers, motion specialists, visual system experts, sound engineers, integration teams, technicians, and more. Issues rarely sit neatly inside one domain.
At first, I behaved like many testers do: write a ticket, document the bug, wait. It didn’t work well. What worked far better was walking to people’s desks, discussing the issue face to face, and debugging together. Over time, I built relationships across the organization. People stopped seeing me as “the test pilot who writes tickets” and started seeing me as someone who genuinely helped solve problems.
By the time I left, I had shaken hands with hundreds of colleagues. That trust made everything faster, smoother, and more effective.
Lesson learned: relationships and trust move quality forward faster than tickets ever will.
One of the hardest skills I had to develop was learning when to say no. In regulated environments, the pressure to say yes is constant.
But when you sign your name on verification artefacts that may be reviewed by aviation authorities, "yes" has consequences. I learned to respond differently, with reasons and evidence rather than a flat yes or no.
Interestingly, in mature aerospace environments, this is not seen as obstruction. It is seen as professionalism.
Lesson learned: a well-reasoned no protects your signature, and mature organizations respect it.
I’ve seen organizations invest huge energy into KPIs, dashboards, maturity models, and reporting structures. Some of that is useful. But in my experience, those things do not create quality. The best managers I worked with focused on the engineers doing the work.
They didn’t ask for better charts. They asked what was slowing us down and helped fix it.
Lesson learned: dashboards report quality; people removing obstacles create it.
I’ve lost count of how many times I’ve seen this pattern:
Some requirements are ambiguous once you try to test them.
Some architectures are logically sound but practically untestable.
Some systems lack observability, making failures impossible to diagnose.
On projects where verification was involved early, during requirement writing, interface discussions, and architectural choices, downstream issues were dramatically reduced.
Lesson learned: involve verification early, while requirements and architecture can still change cheaply.
Early in my career, I invested a lot of energy in writing detailed test schedules. Over time, reality kept proving something uncomfortable: the schedule almost never survived intact. The test plan, the document describing tools, processes, responsibilities, and verification strategy, was stable. It was meant to be. But the schedule was a different story: it shifted every time a delivery slipped, a dependency moved, or a new defect surfaced.
Eventually, I changed approach. Instead of over-investing in static schedules, I focused more on continuously evaluating the current situation and adjusting as it changed.
That mindset led to better decisions than blindly following outdated plans.
Lesson learned: invest in a stable test plan and a flexible schedule, not the other way around.
Under time pressure, it’s easy to focus on fixing symptoms. But in safety-critical environments, that’s a dangerous habit. I learned the value of documenting defects precisely and classifying them meaningfully. Instead of writing “Software error”, we captured exactly what failed, under what conditions, and what we suspected was the cause.
That level of precision enabled real analysis. Patterns emerged. Root causes were addressed. Recurrence decreased.
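As a minimal sketch of what "classifying defects meaningfully" can look like in practice: the field names and example values below are purely illustrative (they are not the actual taxonomy used on the simulator program), but they show how structured records make recurring causes countable instead of anecdotal.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class DefectRecord:
    """One structured defect entry (illustrative fields, not a real program taxonomy)."""
    defect_id: str
    subsystem: str        # e.g. "motion", "visual", "avionics"
    phase_found: str      # e.g. "integration", "qualification"
    observed: str         # what actually happened
    expected: str         # what the requirement says should happen
    suspected_cause: str  # e.g. "interface timing", "requirement ambiguity"


# A few hypothetical records; with precise classification, patterns emerge.
defects = [
    DefectRecord("D-101", "motion", "integration",
                 "jolt on touchdown", "smooth transition", "interface timing"),
    DefectRecord("D-102", "visual", "qualification",
                 "runway lights flicker", "steady lights", "frame sync"),
    DefectRecord("D-103", "motion", "integration",
                 "platform drift in cruise", "stable platform", "interface timing"),
]

# Simple pattern analysis: which suspected causes recur most often?
cause_counts = Counter(d.suspected_cause for d in defects)
print(cause_counts.most_common(1))  # → [('interface timing', 2)]
```

The value is not in the code itself but in the discipline it encodes: once every defect carries the same structured fields, a recurring root cause stops being a hunch and becomes a number you can act on.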
Lesson learned: precise defect records turn firefighting into root-cause engineering.
Working on aircraft systems, simulators, and regulated environments changes you. It reshapes both what you will no longer tolerate and what you insist on.
That mindset stays with you. It becomes part of how you approach any complex system, regardless of domain.
Lesson learned: the rigour earned in safety-critical work transfers to every complex system.
Tools evolve. Technologies evolve. Methodologies evolve. But the core realities do not.

Whether it’s aerospace, defence, medical devices, or critical infrastructure, the underlying pattern is the same. The quality of the system ultimately reflects the maturity of the people who build and verify it.
Working on Boeing 777 simulators taught me the aircraft deeply. But more importantly, it taught me what engineering responsibility really means.
It does not live in tools.
It does not live in dashboards.
It does not live in process frameworks.
It lives in how engineers think, communicate, and take responsibility for what they sign.
That is what protects users in the end, whether they are pilots in a cockpit, operators in a control room, or anyone relying on safety-critical systems to behave correctly when it matters most.
We'd love to hear your thoughts! The easiest way to reach us is by emailing info@houseoftest.ch or contacting the author directly.