Testing safety-critical systems is not about dashboards, coverage percentages, or velocity.
It is about judgement, responsibility, and professional integrity.
I learned this the hard way while working on large-scale flight simulator verification, including qualification activities on Boeing 777 Level D full-flight simulators. These are not academic systems. They are used to train pilots for real-world emergencies. The fidelity has to be high. The evidence has to be credible. And when you put your signature on test results and verification documents, your name carries real responsibility.
Here are some of the most important lessons I took from that work — lessons that have stayed with me across aerospace, defence, and other regulated environments.
1. Technical skills are important, but human skills matter more than you think
When I started working on large simulator programs, I assumed the biggest challenges would be technical. In reality, many of the hardest problems were social and organizational.
These systems are built by many specialized teams: avionics engineers, software developers, motion specialists, visual system experts, sound engineers, integration teams, technicians, and more. Issues rarely sit neatly inside one domain.
At first, I behaved like many testers do: write a ticket, document the bug, wait. It didn’t work well. What worked far better was walking to people’s desks, discussing the issue face to face, and debugging together. Over time, I built relationships across the organization. People stopped seeing me as “the test pilot who writes tickets” and started seeing me as someone who genuinely helped solve problems.
By the time I left, I had shaken hands with hundreds of colleagues. That trust made everything faster, smoother, and more effective.
Lesson learned:
- Communication is part of the engineering system.
- Strong relationships improve system quality.
2. Saying “no” is part of the job, and part of its ethics
One of the hardest skills I had to develop was learning when to say no. In regulated environments, pressure is constant:
- “Can you execute all tests in three days?”
- “Can we sign this off now and fix issues later?”
- “Do we really need to rerun this campaign?”
But when you sign your name on verification artefacts that may be reviewed by aviation authorities, “yes” has consequences. I learned to respond differently:
- “No, I can’t safely execute all of that in three days. We need to discuss trade-offs.”
- “We can reduce scope here if we formally accept this risk.”
- “We need more resources, or we adjust expectations.”
Interestingly, in mature aerospace environments, this is not seen as obstruction. It is seen as professionalism.
Lesson learned:
- Independence in verification is not just a process requirement.
- It is a professional and ethical responsibility.
3. Good managers remove obstacles; they don’t obsess over dashboards
I’ve seen organizations invest huge energy into KPIs, dashboards, maturity models, and reporting structures. Some of that is useful. But in my experience, those things do not create quality. The best managers I worked with focused on:
- Removing blockers
- Clarifying priorities
- Solving cross-team conflicts
- Ensuring access to rigs and environments
- Protecting the team from noise
They didn’t ask for better charts. They asked what was slowing us down and helped fix it.
Lesson learned:
- In complex verification work, flow matters more than metrics.
- Leadership quality shows in obstacles removed, not reports produced.
4. Bringing test into requirements and design prevents huge pain later
I’ve lost count of how many times I’ve seen this pattern:
- Requirements look clear to their author
- Architecture looks elegant on paper
- Everything feels “done”
- Then verification begins… and problems explode
Some requirements are ambiguous once you try to test them.
Some architectures are logically sound but practically untestable.
Some systems lack observability, making failures impossible to diagnose.
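As a small illustration of what “designing for observability” can look like in code, here is a minimal Python sketch; the subsystem, names, and event format are entirely hypothetical, not taken from any real simulator.

```python
# Hypothetical sketch: a subsystem designed so tests can observe what it does.
# All names and values are illustrative; no real simulator code is implied.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class HydraulicsModel:
    """Toy model of a simulated subsystem with an observable event trail."""
    # Tests (or a data recorder) can inject their own sink for internal events.
    event_sink: Callable[[str], None] = lambda event: None
    pressure: float = 0.0

    def command_pump(self, on: bool) -> None:
        self.pressure = 3000.0 if on else 0.0
        # Every significant internal transition is reported, not hidden.
        self.event_sink(f"pump={'ON' if on else 'OFF'} pressure={self.pressure:.0f}")


# In a test, the injected sink turns "what happened in there?" into recorded evidence.
events: List[str] = []
model = HydraulicsModel(event_sink=events.append)
model.command_pump(on=True)
assert events == ["pump=ON pressure=3000"]
```

The point is not the specific mechanism. It is that the ability to see inside the system was granted at design time, not bolted on during verification.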
On projects where verification was involved early, in requirement writing, interface discussions, and architectural choices, downstream issues were dramatically reduced.
Lesson learned:
- Testability is not a testing concern.
- It is a design property.
5. At some point, I stopped putting too much trust in test schedules
Early in my career, I invested a lot of energy in writing detailed test schedules. Over time, reality kept proving something uncomfortable: the schedule almost never survived intact. The test plan (the document describing tools, processes, responsibilities, and verification strategy) was stable. It was meant to be. But the schedule was a different story:
- Hardware maturity changed.
- Integration order changed.
- Tools broke.
- Dependencies shifted.
- Unexpected behavior appeared.
Eventually, I changed my approach. Instead of over-investing in static schedules, I focused more on continuously evaluating the current situation:
- What is the highest risk right now?
- What has changed since last week?
- Where do we need depth, and where is shallow testing acceptable?
That mindset led to better decisions than blindly following outdated plans.
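To make that continuous evaluation concrete, here is a deliberately simple Python sketch of a weekly risk-based re-prioritization; the scoring model, weights, and area names are hypothetical and only illustrate the idea.

```python
# Hypothetical sketch of weekly, risk-based test prioritization.
# The scoring model and all data are illustrative, not a prescription.
from dataclasses import dataclass


@dataclass
class TestArea:
    name: str
    impact: int          # 1-5: consequence if this area misbehaves
    likelihood: int      # 1-5: how likely problems are right now
    changed_recently: bool

    @property
    def risk(self) -> int:
        score = self.impact * self.likelihood
        # Recent changes invalidate old confidence, so they raise the priority.
        return score * 2 if self.changed_recently else score


areas = [
    TestArea("motion cueing", impact=5, likelihood=2, changed_recently=False),
    TestArea("avionics bus interface", impact=4, likelihood=4, changed_recently=True),
    TestArea("instructor station UI", impact=2, likelihood=3, changed_recently=True),
]

# Revisit this ordering every week; depth goes where the risk is now.
for area in sorted(areas, key=lambda a: a.risk, reverse=True):
    print(f"{area.risk:3d}  {area.name}")
```

The numbers matter far less than the habit of re-asking the questions above every week.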
Lesson learned:
- Discipline is not following the schedule.
- Discipline is continuously adapting based on risk and reality.
6. Root cause analysis separates mature teams from busy teams
Under time pressure, it’s easy to focus on fixing symptoms. But in safety-critical environments, that’s a dangerous habit. I learned the value of documenting defects precisely and classifying them meaningfully. Instead of writing “Software error”, we captured:
- “Pointer out of bounds in loop”
- “Race condition during bus initialization”
- “Timeout on socket under load”
That level of precision enabled real analysis. Patterns emerged. Root causes were addressed. Recurrence decreased.
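As a hypothetical illustration of why that precision pays off, a few lines of Python are enough to surface patterns once defect records carry real classifications; the categories and records below are invented.

```python
# Hypothetical sketch: precise defect classification enables real analysis.
# Categories and records below are invented for illustration.
from collections import Counter

defects = [
    {"id": "D-101", "cause": "race condition", "subsystem": "bus initialization"},
    {"id": "D-114", "cause": "pointer out of bounds", "subsystem": "data loop"},
    {"id": "D-131", "cause": "race condition", "subsystem": "bus initialization"},
    {"id": "D-150", "cause": "timeout under load", "subsystem": "socket layer"},
]

# A blanket "Software error" label would hide this; precise causes expose the pattern.
for cause, count in Counter(d["cause"] for d in defects).most_common():
    print(f"{count}x  {cause}")
```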
Lesson learned:
- Good defect data is not bureaucracy.
- It is organizational learning.
7. Safety-critical work changes how you think — permanently
Working on aircraft systems, simulators, and regulated environments changes you. You become less tolerant of:
- Vague requirements
- Unverified assumptions
- Weak documentation
- Cosmetic compliance
And more focused on:
- Traceability
- Evidence
- Clarity
- Responsibility
- Long-term consequences
That mindset stays with you. It becomes part of how you approach any complex system, regardless of domain.
Lesson learned:
- Safety-critical experience is not just technical experience.
- It shapes professional judgement.
Why these lessons still matter
Tools evolve. Technologies evolve. Methodologies evolve. But the core realities do not.
- Complex systems are built by humans.
- Humans make assumptions.
- Assumptions must be challenged.
- Evidence must be maintained.
- Responsibility must be taken seriously.

Whether it’s aerospace, defence, medical devices, or critical infrastructure, the underlying pattern is the same. The quality of the system ultimately reflects the maturity of the people who build and verify it.
Final reflection
Working on Boeing 777 simulators taught me the aircraft deeply. But more importantly, it taught me what engineering responsibility really means.
It does not live in tools.
It does not live in dashboards.
It does not live in process frameworks.
It lives in how engineers:
- Communicate
- Push back
- Decide
- Document
- Collaborate
- Take ownership
That is what protects users in the end, whether they are pilots in a cockpit, operators in a control room, or anyone relying on safety-critical systems to behave correctly when it matters most.

