About


Hey hey, I’m Eric – a lifelong engineer, leader, (published) author, military combat veteran, and martial artist who enjoys helping people and systems perform at their best. Across my work in technology, consulting, and team leadership, I’ve learned that strong organizations are built through communication, structure, and thoughtful habits. I share those lessons through my writing, management training videos, and my podcast, “Beyond the Belt”, where I talk with other BJJ Black Belts who’ve pursued excellence in their passions and lives.

Whether it’s building software, coaching managers, or exploring the mindset behind high-level performance, I care about helping people grow in a way that feels real, practical, and sustainable.

Reach out to me if you have questions, or come take a BJJ class with me at Fenriz Gym in Berlin, Germany.

Latest Writing


All Writing

Infrastructure by Adoption: An AI-Engineering First Principle

Infrastructure used to be only created by decision. Now it’s also created by adoption.

Useful tools are getting built everywhere now — a solutions architect writes an integration wrapper to unblock onboarding, a PM stands up a dashboard pulling from three APIs, an engineer builds a CLI to automate a migration. These things work. They get used. They get adopted. They become part of the workflow — other things depend on them. And that’s how infrastructure gets born without anyone recognizing it: not by decision, but by adoption, regardless of who built it or how quickly it was stood up.

The definition of application infrastructure changed. The rules for building it didn’t.

Change is hard

The Transition That Nobody Sees

Two forces make this more common than it used to be:

  1. AI collapsed build cost. People across the organization can stand up functional tools in hours, and the outputs are more capable and widely adopted than the quick fixes of five years ago.
  2. Composable systems are becoming the default: APIs as products, shared data layers, MCP servers, agent toolchains. This all means more of what gets built is designed to be built upon, or accidentally becomes something other things depend on.

The precarious moment is the transition: when a useful tool or workflow duct tape quietly becomes a system that other systems, teams, or agents depend on. In the old world, engineers built infrastructure, engineers recognized infrastructure, and engineers operationalized infrastructure. The people identifying the need and the people building the response were the same. Nobody needed a process for “who figures out if this is infrastructure” because the people building it already knew.

That’s no longer true. When everyone builds, the person who created the thing often doesn’t have the context to recognize that it became foundational. They are probably just solving their own problem. So if recognizing the transition isn’t anyone’s explicit job, it only surfaces when something breaks.

This is how it actually shows up. Someone builds an onboarding tool to unblock a workflow. It works, so more teams start using it. Now every new customer gets onboarded through it, internal teams rely on it to track progress, and it keeps evolving. At no point did anyone decide this was production infrastructure — but it is. And it never went through the processes that would normally apply to something carrying that weight.

The danger isn’t that people are building without permission; it’s that they are building dependencies without awareness. This creates an immediate tension. In an environment that has always prized velocity, any reversion to a manual check feels like a regression. The goal is not a return to centralized control but a trigger-based check that kicks in only when something starts carrying dependency weight. And we have to be clear about the trade-off: the friction of a brief engineering review is a small tax compared to the cost of discovering you’ve been running production infrastructure with prototype-level support.

The New First Principle

When other things depend on what you’ve built, it needs engineering judgment applied, regardless of who built it.

I think about engineering judgment as the accumulated context of knowing where things break and why. It’s the scar tissue from systems that failed at 2 AM, from fixing data migrations that corrupted silently, or from troubleshooting integrations that worked in dev/staging and broke under production load. It’s what tells an engineer whether to build an MCP server, an API, or a CLI tool, and what to expose through each one. It’s knowing what to monitor, how failure cascades, and where the trade-offs actually are. That context is the product of experience, and applying it is what distinguishes a tool that works from a tool that keeps working.

This underlying principle isn’t “engineers should build everything.” Non-engineers don’t build badly — they build without a specific type of context because they’ve never needed it. But the principle is also not “overengineer everything.” If someone builds a data sync service that processes forty records a day and an engineer tries to redesign it with queuing, retry logic, and idempotent writes, they’ve failed the assignment. What that tool likely needed was simple error logging and a notification for when it fails. Overbuilding is the same category of mistake as underbuilding; both reflect misclassified trade-offs.
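To make the right-sized response concrete: the fix that small sync tool likely needed is just a log line on success and a notification on failure. Here is a minimal Python sketch of that idea — `run_sync` and the webhook URL are hypothetical placeholders, not a real implementation:

```python
import json
import logging
import urllib.request

# Simple file logging: enough for a forty-records-a-day job.
logging.basicConfig(
    filename="sync.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

# Hypothetical chat webhook where failure alerts should land.
WEBHOOK_URL = "https://chat.example.com/hooks/sync-alerts"


def notify_failure(error: Exception) -> None:
    """Post a short alert so a human hears about the failure promptly."""
    payload = json.dumps({"text": f"Daily sync failed: {error}"}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)


def run_sync(records):
    """Hypothetical stand-in for the daily sync work itself."""
    synced = 0
    for record in records:
        # ... write the record to its destination ...
        synced += 1
    return synced


def main(records):
    try:
        count = run_sync(records)
        logging.info("Synced %d records", count)
    except Exception as exc:
        # Record the full traceback, tell someone, then fail loudly.
        logging.exception("Sync failed")
        notify_failure(exc)
        raise
```

That’s the whole upgrade: no queues, no retries, no idempotency layer — just visibility when the thing stops working.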

The principle is about applying that accumulated context — the scar tissue and the “why” of failure — at the moment a tool crosses from “just a tool” to building block: something other systems or workflows depend on. Engineering judgment isn’t about building things “the right way”; it’s about ensuring the rigor of the solution matches the weight of the dependency.

Who’s Responsible for Recognizing It

Everyone who builds is responsible for recognizing it, whether they know it or not. That’s the uncomfortable answer, and it’s also the only one that works when building is no longer confined to engineering. As a leader, your job is to make sure people know that this is now expected of them, give them the vocabulary to recognize it, and make clear what happens next when they do.

This doesn’t mean every builder needs to think like an engineer. It means every builder needs to be able to answer a short set of questions about what they’ve built. These questions help them determine whether other things now depend on it and whether it needs engineering involvement. This is not an engineering checklist; it’s a recognition checklist. And these are all ways of asking essentially the same question: has this become something other people or systems now depend on?

  • Are other people, teams, or systems now depending on this beyond what I originally built it for?
  • If this tool went down tomorrow, would it affect anyone other than me?
  • Is it handling data it wasn’t originally designed to handle, or data that has compliance implications?
  • Has it grown beyond my ability to support, maintain, or explain how it works?
  • Are agents, automations, or workflows depending on it?
  • Did I build this as a one-off, but it’s now part of how the organization operates?

If the answer to any of these is yes (or even “I think so”), this thing has probably become a building block. In more plain terms, other things now depend on it, and it needs more scrutiny than it’s currently getting. That recognition is the trigger. What happens next depends on the situation.

What To Do About It

The response should scale with the stakes.

At the lightest level, it’s a conversation. An engineer sits down with the builder, who walks them through what they’ve built, what workflows it’s a part of, and what decisions were made along the way. The engineer’s job in that conversation is to listen to what the builder describes and ask the questions the builder didn’t think to ask — reading between the lines on how a non-engineer explains what they built, and translating that into where the technical risks actually are.

The kinds of questions that matter are practical: How does it handle failure? What happens when the data it depends on is null or malformed? Is there any testing? Are there database indexes on the queries that matter? What monitoring exists and who should be alerted if this stops working off-hours? Many of these are questions the builder can turn around and ask the agent they used to build with. The answers won’t always be complete, but asking them surfaces gaps that would otherwise stay hidden until a production incident reveals them.
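The null-or-malformed-data question in particular has a cheap, concrete answer. As an illustration only (the field names and tool are hypothetical), a builder could add a small validation gate in front of the real work, so bad input produces a log line instead of a crash:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("onboarding-tool")

# Hypothetical schema: the fields the rest of the tool assumes exist.
REQUIRED_FIELDS = ("customer_id", "email")


def validate_record(record):
    """Return the record if usable, or None (with a log line) if not.

    Answers 'what happens when the data is null or malformed?'
    explicitly, instead of letting a KeyError answer it in production.
    """
    if record is None:
        logger.warning("Skipping null record")
        return None
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        logger.warning("Skipping record missing fields: %s", missing)
        return None
    return record


def process_batch(records):
    """Filter out unusable records before the real workflow sees them."""
    valid = [r for r in (validate_record(r) for r in records) if r is not None]
    # ... hand the valid records to the actual onboarding workflow ...
    return valid
```

The point isn’t this exact code; it’s that each review question maps to a small, checkable behavior the builder can add or ask their agent to add.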

This is intended more as coaching and education than a hand-off grilling session. The builder learns what to think about next time. The engineer learns what the tool does and where the risks are. The next thing that person builds will be stronger for it.

When the stakes are higher — when something needs to move up in polish level, or when it’s going to carry real operational weight — the response can be a pairing session or a proper hand-off. That might mean writing a PRD or an RFC that defines the expected capability set, the resiliency requirements, the documentation and monitoring standards. Engineers are used to receiving hand-offs; they’re just used to receiving them from other engineers. The process is the same. The source is different. This will add some time to the hand-off itself, but the mechanics are familiar.

The goal in both cases is the same: make sure that when something becomes foundational, someone with engineering judgment has looked at it and either blessed it, improved it, or flagged that it needs more work, either before too many other things depend on it or before the first client dependency creates an implicit SLA nobody agreed to.

The Principle in Practice

Without this first principle, your organization will discover that something is core to your operations only when it breaks. A prototyped tool quietly starts carrying production-level expectations even though nobody agreed to that transition. The recognition questions never got asked because it wasn’t anyone’s job to ask them. In the traditional workflow, PMs ask “is this ready for customers?” and QA tests it in ways the original developers hadn’t considered. Those functions act as backstops. But when something is built outside the product development workflow — when it was never on a sprint, never went through a product review, and never hit a QA cycle — those backstops aren’t in the loop. The first principle creates a trigger that works independently of that workflow, precisely because the things that most need those backstops are being created outside it.

This doesn’t happen on its own. Someone in leadership has to set the expectation that builders assess what they’ve built, provide the vocabulary and training to do it, and create a clear path for what happens when a tool starts carrying dependency weight. The recognition checklist is only useful if people know it exists and know what to do when one of those questions becomes a “yes.”

The first principle creates a simple responsibility: recognize when something has dependents, and act on it. That’s the trigger: reassess its classification, reassess its polish level, and get engineering judgment applied. This isn’t a tax on speed; it’s how you keep speed from outpacing your foundations. You can build as fast as you want. But if you don’t recognize what other things have started depending on, you won’t know what your infrastructure actually is until it breaks.

Latest Manager Training Videos


All Training Videos

Building a Strategy

In this episode, Eric discusses the importance of building a strategy, emphasizing that it is often overlooked but crucial for effective leadership. He provides a framework for differentiating between strategic and tactical thinking and focusing on outcome-oriented approaches. The episode highlights the need for collaboration between product management and engineering to align goals and create a unified vision. Eric also stresses the importance of understanding market trends, customer needs, and competitive analysis to inform strategic decisions. He introduces various analysis frameworks like SWOT, SOAR, and NOISE to help teams evaluate strengths, weaknesses, opportunities, and threats. The episode also covers the significance of setting clear KPIs and proxy metrics to measure success and guide strategic execution. Finally, Eric encourages transparency and frequent communication to build trust and ensure understanding and alignment across teams.

Delegation

In this episode, Eric discusses the importance of delegation. He emphasizes understanding delegation to use it effectively without falling into micromanagement. The session covers balancing authority and responsibility, empowering team members, and the role of delegation in employee development. Eric provides examples, such as delegating a roadmap task, to illustrate how authority can be assigned while maintaining accountability. He discusses the importance of clear communication, setting expectations, and avoiding pitfalls like over-delegation. The training also highlights the value of building trust, encouraging decision-making, and fostering a culture of psychological safety. Eric concludes by stressing the need for feedback and recognition to support team growth and development.

Latest Beyond the Belt Episodes


All Episodes

Don’t Buy My Book, It’s Old

Straight to Your Inbox

Videos

Manager Training

Beyond the Belt

Writing Archives

Contact