Notes on engineering progression
Progression in engineering is sometimes treated as a change in scope. More responsibility. Broader systems. Longer timelines. But in practice, it's not just scope that shifts - it's the type of thinking the role requires.
In the early stages of a project, most problems are bounded. You’re solving within constraints that are already defined - sensor limitations, protocol requirements, test plans. As roles change, those constraints become yours to define. And the risk isn't in making a mistake in implementation. It's in defining a boundary incorrectly, or missing the interaction that turns a minor issue into a systemic one.
That shift is difficult to notice while it's happening.
Failure as structure, not as event
In implementation-heavy work, failure shows up in logs or misbehaviour. In strategic work, failure is harder to detect. It appears as a missed conversation, a scope that drifts, or a system that is technically correct but operationally fragile.
One of the most important transitions for me was learning to detect that kind of failure earlier - not through outputs, but through process shape. Are failure modes being categorised in a way that can be tested later? Is the current design encouraging decisions that will age poorly? Are we making assumptions we’ll forget we've made?
You don’t prevent failure by trying to cover everything. You prevent it by making your blind spots visible.
Subsystems, interfaces, and long-term outcomes
Another shift has been in how I relate to code and, more broadly, to systems. Previously, I treated problems as something to solve locally: a function, a module, a logging condition. Now, I spend more time thinking about interfaces - not just technical, but operational. Who owns this decision six months from now? What happens when requirements change? What will make someone override this logic without telling us?
In other words: the work becomes less about control, and more about influence. Less about “how should this be implemented” and more about “how should this behave so it can be understood, maintained, and challenged later?”
This is especially true in embedded systems, where configuration logic, failure handling, and logging often outlive the original design team, even through dramatic changes to the application.
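One concrete way this plays out is in persisted configuration. The sketch below is a hypothetical illustration, not any real firmware: the struct, field names, and defaults are invented. The point it demonstrates is that a version field plus explicit defaults lets a later team add fields without misreading records written by the original firmware.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical persisted configuration record. The version field and the
 * explicit defaults are the point: a team that inherits this code later
 * can extend the schema without breaking records the original firmware
 * already wrote to flash. */
typedef struct {
    uint16_t version;   /* schema version of the stored record */
    uint16_t log_level; /* 0 = errors only .. 3 = verbose */
    uint32_t retry_ms;  /* added in schema version 2; older records lack it */
} config_t;

/* Load a stored record, filling any field the stored version predates
 * with a documented default rather than whatever bytes happen to be there. */
static config_t config_load(const config_t *stored) {
    config_t cfg = { .version = 2, .log_level = 1, .retry_ms = 5000 }; /* defaults */
    cfg.log_level = stored->log_level;
    if (stored->version >= 2) {
        cfg.retry_ms = stored->retry_ms; /* field is meaningful in this version */
    }
    return cfg;
}
```

The design choice worth noticing is that the defaults live in one place, next to the schema, so the "decision" about what an absent field means survives the people who made it.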
Depth as constraint awareness
Technical depth remains important - but it manifests differently. Rather than focusing on low-level optimisation, I find myself paying more attention to constraints:
- How tightly can this be defined without making it brittle?
- Where can variability be tolerated without loss of integrity?
- What needs to be centralised, and what shouldn’t be?
These questions aren’t always answerable upfront, but they shape the decisions that follow. They also shape how systems are tested, how they are versioned, and what happens when things go wrong.
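The trade-off in those questions can be made concrete. The following is a minimal sketch with an invented parameter (a sampling interval) and invented bounds: the integrity-critical limits are centralised and enforced, while any value inside them is tolerated rather than pinned to a fixed list.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sampling-interval setter. The hard bounds are centralised
 * because integrity depends on them; values inside the bounds are accepted
 * as-is, so the definition is tight where it must be and loose elsewhere. */
#define SAMPLE_MS_MIN 10u
#define SAMPLE_MS_MAX 60000u

static bool set_sample_interval(uint32_t ms, uint32_t *out) {
    if (ms < SAMPLE_MS_MIN || ms > SAMPLE_MS_MAX) {
        return false; /* reject: outside the defined boundary */
    }
    *out = ms;        /* accept: variability tolerated within it */
    return true;
}
```

Rejecting out-of-range values outright, rather than silently clamping them, is itself one of these decisions: it keeps the boundary visible to the caller instead of hiding it.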
Influence without ownership
Perhaps the most subtle shift is learning how to shape systems you don’t own directly. When working across firmware, infrastructure, GIS pipelines, and internal tools, it's rarely possible to hold all the pieces. You can’t design in isolation, but you still have to ensure consistency across interfaces and expectations.
That means your influence comes from clarity - not control. From communication that reduces ambiguity. From models that represent the real problem space. From defaults that encourage the right outcome. It's slower, and it often feels less visible. But it scales.
Progression in engineering isn’t just about complexity or scope. It’s about thinking further ahead, tolerating uncertainty for longer, and designing for change.
Most of the systems I care about now are shaped by decisions that won’t be obvious until much later. The job is to make those decisions visible, testable, and, when necessary, reversible.