Hey hey, I’m Eric – a lifelong engineer, leader, (published) author, military combat veteran, and martial artist who enjoys helping people and systems perform at their best. Across my work in technology, consulting, and team leadership, I’ve learned that strong organizations are built through communication, structure, and thoughtful habits. I share those lessons through my writing, management training videos, and my podcast, “Beyond the Belt,” where I talk with other BJJ black belts who’ve pursued excellence in their passions and lives.
Whether it’s building software, coaching managers, or exploring the mindset behind high-level performance, I care about helping people grow in a way that feels real, practical, and sustainable.
Reach out to me if you have questions, or come take a BJJ class with me at Fenriz Gym in Berlin, Germany.
There is plenty of discussion about AI-enabled development, but very little of it deals with what actually happens inside an organization once the tools are in everyone’s hands. I’m interested in the process stuff—the “where the rubber meets the road” issues that show up in daily operations rather than demos.
These aren’t hypothetical risks; they are the friction points showing up as organizations begin to build development workflows at the speed of AI. Most of these problems will seem obvious once they are named. You might even think, “Yeah, we’ve already solved for that.” But depending on where you are in your adoption cycles, you might only be seeing symptoms.
These AI speed gains don’t come for free. This is an attempt to name the costs, the organizational friction, the workflow breakdowns, and the people problems, so we can move past reacting to symptoms and start designing from a position of understanding. That begins with acknowledging exactly what our newfound speed has actually changed and what it broke.
What Speed Actually Broke
One of the first things to break is the implicit contract between product and engineering. The familiar cycle of specs, handoffs, review processes, and coordination mechanisms was built around a simple assumption: writing code was the slow part of software development.
That assumption no longer holds. AI has moved the bottleneck. Writing code used to be the constraint; now the constraint is everything around it — reading, validating, coordinating, and maintaining shared understanding across teams. Most organizations haven’t noticed yet because they’re still measuring speed-to-build instead of the things that are quietly degrading around that speed.
Working through this at my own company, I’ve seen a few recurring patterns start to show up. Some of them are well understood. Some we’re still figuring out management strategies for. All of them will show up in any product and engineering organization that is seriously adopting AI as a first-class citizen in its workflows.
And most of them are people and organizational problems with process-shaped solutions. The underlying pattern is straightforward: AI has moved the bottleneck in software development from writing code to coordinating people. The intent here isn’t to dictate how every team should work. It’s to name these problems clearly enough so that when teams hit them, they have a starting point and some structure to work with instead of figuring it all out from scratch.
The People Problem
Your teams are getting faster and less informed at the same time. AI makes individuals more capable, but it also reduces the natural incentive to collaborate. When an engineer can get an answer from AI in 30 seconds, they stop asking colleagues. Instead of knowledge spreading through the team, it lives privately in individual prompt sessions that no one else sees. The real problem is that those conversations were never just about getting answers — they were about knowledge transfer, mentorship, shared context, trust and rapport building, and catching blind spots. Junior engineers who bypass learning conversations will ship code but won’t build the judgment and experience, often built through mistakes, to know when the AI is wrong, or not as right as it could be if it had more context. Institutional knowledge stops transferring between people and starts getting mediated through an AI that only knows what’s been documented, which is never the whole picture. If you’re not thinking about deliberately maintaining and reinforcing the human connections, you’ll end up with an org where everyone is individually productive and collectively disjointed.
The knowledge loss goes deeper than just some missed conversations. In the old cadence, knowledge got reinforced organically. Engineers heard things multiple times across stand-ups, spec reviews, planning sessions, and code reviews. The repetition wasn’t an inefficiency of operations (though it can be that too), it was how people built mental models of the system and each other’s thinking. When AI compresses the build cycle, those touch points shrink or disappear. The work moves too fast for understanding to keep up through osmosis. If knowledge sharing is going to survive, it has to become intentional and be prioritized rather than simply a byproduct of a slower process.
There is a morale risk here that’s easy to overlook because it looks like resistance to change. It isn’t resistance; it’s a loss of professional identity. Many of your best people became engineers because the act of solving problems in code is deeply satisfying. Some people just love to code. When the job shifts from writing code to prompting an AI and validating its output, that satisfaction can disappear. This is a real loss that requires acknowledgement and, frankly, grieving. If you don’t recognize that the craft itself is being hollowed out, you will lose the people who value that craft—the same people whose deep system knowledge you now need more than ever to catch the AI’s mistakes.
There’s also a capacity illusion that comes with AI-assisted work. When the AI can generate code quickly, it’s tempting to run multiple work streams in parallel. And why not? The AI is doing the heavy lifting. But the human still has to hold context for each one, validate each one, and make decisions on each one. The cognitive load doesn’t parallelize even if the code generation does. What looks like three projects running concurrently is really one person context switching between three projects at AI speed instead of human speed. The throughput might go up for a while, but the sustainable pace drops. People get tired faster and the quality of their judgment on each individual work stream suffers because none of them are getting full attention.
The Handoff Problems
The people problem above is about what’s happening inside teams and for individuals. The handoff problems are about what happens between teams — though they’re people problems too. Every one of the following involves getting work from one person or team to another. This could be from a PM’s prototype to an engineer’s implementation, from a lab or skunkworks experiment to a production system, from one document to the next review cycle. In the old world, these handoffs were mediated by specs and meetings that moved slowly enough for everyone to stay aligned. AI blew up the speed of building but didn’t change the speed of human understanding. The result is that handoffs are now a primary source of friction, miscommunication, and risk.
Greenfield is the easiest one. A new project built from scratch in a small team. This is ideally one PM and one engineer, scaling to no more than three to five, with a prototype as the communication artifact instead of a spec. The coordination mechanisms are simple: a Slack channel, small frequent PRs, and a clear owner for merge conflicts. This mostly works already if you keep teams small enough.
The harder question comes next, when the prototype needs to become a real product with monitoring, integrated auth and permissions, access to your data and pipelines, operational support, and everything else involved in productionization: how does it move from whichever corner of the org created it into engineering? Greenfield solves the building problem cleanly. It doesn’t solve the graduation problem. But these things won’t stall; they’ll make it to production. They just make it to production without being productionized. I’ve written before about why building isn’t the hard part; this is where that lands.
Brownfield is where it gets genuinely hard, and it’s where most organizations actually live. A PM can prototype a feature, but that prototype creates an asymmetry of knowledge: it effectively captures the ‘what’ while ignoring the ‘how’ of the system complexity underneath—the dependencies, the performance requirements, and the ways a 20-year-old monolith can turn a small change into a production incident three weeks later.
The old spec cycle (PM spec → technical spec → implementation spec → review → repeat) is too slow, but you can’t skip the planning either. In brownfield, upfront planning isn’t overhead, it’s the risk mitigation step. The engineering lead who catches a dangerous dependency during a complexity triage just saved you an SLA violation. The question isn’t whether to plan. It’s how to keep the risk mitigation value of planning while eliminating the document-heavy overhead that AI is actively making worse. Without this triage, the asymmetry of knowledge remains. It ends up forcing the person with the most context to spend their time reacting to what was built, rather than guiding how it’s built.
Lab-to-production graduation is closely related to greenfield, but still distinct as an ownership problem. People are building prototypes. Sometimes with real customer data. Sometimes already in front of customers. Sometimes before anyone in Product or Engineering hears about the idea and can ask the question of who maintains it, who’s on-call for it, how it fits into the strategy, and which team absorbs it. Even if these questions could be asked, they probably don’t have a clear answer when everyone is moving so quickly. The technical gap between prototype and production is shrinking (good template systems help), but the organizational gap is wide open. And this comes with the same handoff challenges as any production work — someone has to own it end to end.
The Documentation Trap
Spec cycle bloat deserves its own section because it compounds every handoff problem above. AI makes documentation cheap to produce, and the sheer volume makes it genuinely difficult to read. Every handoff generates longer, denser documents with even more detail. Every review cycle requires more reading to ensure all the nuance is captured and correct. The thing that was supposed to make you faster creates a heavier process, because the bottleneck shifted from writing to validating.
If you’re not careful, your engineers spend more time reading AI-generated specs than building. This becomes even harder when the people validating still carry undocumented knowledge in their heads — the gotchas that aren’t in any spec because they never needed to be until now. The AI doesn’t know about the database migration that failed silently in 2021, or the vendor integration that breaks if you send more than 500 records at once. The humans who carry that knowledge are now reading longer documents and catching fewer issues because the volume overwhelms their attention.
This is the feedback loop that isn’t getting enough attention: AI generates more documentation, which requires more human review, which takes longer, which slows down the very process AI was supposed to accelerate. The solution isn’t less documentation — it’s almost certainly going to require rethinking what documentation is for and who (or what) needs to read it at each stage. You can’t solve an understanding problem with a summarization tool.
The PM Throughput Bottleneck
The documentation trap slows down the work that’s already in flight. But there’s a related problem upstream: the pipeline feeding that work can’t keep pace either. There is a growing asymmetry in the contract between Product and Engineering. It’s not just that engineering velocity is increasing; it’s that the volume of work required to direct that velocity has expanded. In the old model, PMs had weeks or months of lead time while engineering built a feature. That buffer allowed for slower, more sequential discovery processes. Now, that buffer has been compressed. This isn’t a matter of PMs simply “moving faster” or lowering their standards to keep the team busy; it’s a structural requirement for Product to operate differently.
The bottleneck has moved upstream. When a team can ship in days what used to take months, the “thinking” work, like validating needs, defining logic, and synthesizing feedback, has to happen at a completely different cadence. But we can’t let this increased throughput trick us into building the entire backlog without the requisite consideration for each feature. Just because we can build everything doesn’t mean we should. If the PM operating model doesn’t evolve to match this implementation speed, the engineering team either sits idle or begins building from half-baked inputs. This happens not because of a lack of skill or talent, but because the old process for defining the “why” wasn’t built for an engineering team that no longer has a “slow” build phase.
The Code Quality Problems
Everything above is about how people coordinate around the code: the handoffs, the documentation, the upstream pipeline. But at some point the code itself becomes the problem.
Merge and duplication get worse at speed. When AI generates code fast, you can get multiple versions of the same utility function in multiple branches before anyone notices. Worse, teams might solve the same problem in different ways in different parts of the codebase, leading to inconsistent behavior across the application. In a monolith, you can’t contain this the way decomposed services can. Decomposition would help, but it’s not realistic on most roadmaps right now when everyone else is speeding up their delivery. The pragmatic answer might be accepting some duplication as the cost of speed and building detection tooling to catch it, rather than trying to prevent it through process.
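To make “detection tooling” concrete, here’s a minimal sketch of what duplicate detection could look like, assuming a Python codebase: it fingerprints each function’s AST structure so renamed copies of the same utility land on the same hash. It only catches near-verbatim copies, but that’s often exactly what parallel AI-generated branches produce.

```python
import ast
import hashlib
from collections import defaultdict
from pathlib import Path

def fingerprint(fn: ast.FunctionDef) -> str:
    # Blank out the name so identical bodies match even when one team
    # called it format_date() and another called it date_to_string().
    fn.name = "_"
    dump = ast.dump(fn, annotate_fields=False)  # structure only, no line numbers
    return hashlib.sha256(dump.encode()).hexdigest()[:12]

def find_duplicates(root: str) -> dict[str, list[str]]:
    # Map each structural fingerprint to every location it appears.
    seen: dict[str, list[str]] = defaultdict(list)
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text())
        except SyntaxError:
            continue  # skip files that don't parse (generated, templated, etc.)
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                seen[fingerprint(node)].append(f"{path}:{node.lineno}")
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

Run on a nightly schedule or in CI, the output is a report of “these N functions are structurally identical,” which is a conversation starter rather than a merge blocker — consistent with accepting some duplication as the cost of speed.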
PR review is the code-level version of spec cycle bloat. AI-generated code can produce hundreds of files and thousands of lines in a single pass. Traditional human review can’t assess that volume meaningfully. But skipping review in a production codebase isn’t an acceptable alternative. The instinct is to use AI to review AI-generated output, and that might end up being part of the answer. But it’s worth being honest about what that actually is — a recursive validation loop where you’re trusting one model to catch the mistakes of another when neither of them has all the context. That might be a reasonable tradeoff. It might also just push the problem one layer deeper.
And regardless of how the code was generated or how many AIs went over the PR, the engineer who submits it owns it. A large PR doesn’t absolve anyone of responsibility — if anything, it raises the bar on making sure you understand what you’re shipping. The discipline of keeping PRs small enough, structured, and reviewable matters more now than it did when humans were writing every line by hand.
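One way to make that discipline enforceable rather than aspirational is a size gate in CI. This is a sketch, not a recommendation: the thresholds and the `origin/main` base branch are assumptions you’d tune for your own repo, and a hard limit always needs an escape hatch for legitimate large changes like generated code or mass renames.

```python
import subprocess

# Illustrative budgets, not recommendations; tune for your own repo.
MAX_CHANGED_LINES = 800
MAX_CHANGED_FILES = 25

def parse_numstat(numstat: str) -> tuple[int, int]:
    # "git diff --numstat" prints "added<TAB>deleted<TAB>path" per file.
    lines = files = 0
    for row in numstat.splitlines():
        added, deleted, _path = row.split("\t", 2)
        files += 1
        # Binary files report "-" for both counts; count the file, not lines.
        if added != "-":
            lines += int(added) + int(deleted)
    return lines, files

def pr_size(base: str = "origin/main") -> tuple[int, int]:
    # "base...HEAD" diffs against the merge base, i.e. only this branch's work.
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_numstat(out)

def within_budget(base: str = "origin/main") -> bool:
    lines, files = pr_size(base)
    return lines <= MAX_CHANGED_LINES and files <= MAX_CHANGED_FILES
```

Wired into CI, a failing `within_budget()` check fails the job and forces the conversation about splitting the PR before review starts, rather than during it.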
And when something breaks, the problem compounds even further. The person on-call for troubleshooting often didn’t write the code — but in the old world, they usually would have. They’d have a mental model of why it was built that way, what the edge cases were, what they were worried about when they wrote it. Now they’re debugging AI-generated code they’ve never seen, trying to reconstruct intent from output. When they get stuck, they escalate to their engineering lead for support, who also didn’t write it and can only offer architectural intuition rather than implementation-level knowledge. The AI that generated the code has no memory of the conversation that produced it. Incident resolution takes longer on code that was faster to produce — another version of the pattern running through this entire piece.
Where the Guardrails Aren’t
The problems listed so far are pretty much all variations on the same theme: speed creating new forms of friction. These last two are different. They’re not about the work itself, but about the guardrails around the work, and the reality that AI is putting people into territory they weren’t operating in before.
GDPR, compliance, and data protection requirements still exist. The risk is different in three contexts: during development (lower concern since you’re working with code, not production data), during troubleshooting (high concern as you may be sending PII to external AI tools), and when building applications that handle PII or personal data (full compliance required before go-live). These often get conflated into a single “be careful with what you send to AI” warning when they all need distinct answers. And those answers can vary by customer and by region. On top of that, contractual obligations add another layer that a blanket policy can’t cover.
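For the troubleshooting context specifically, one cheap mitigation is scrubbing obvious identifiers from logs and stack traces before they leave your environment. This is only a sketch: the patterns below are illustrative, they will miss plenty, and nothing here substitutes for the per-region, per-contract analysis the paragraph above describes.

```python
import re

# Illustrative patterns only; real coverage needs legal and per-region review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scrub(text: str) -> str:
    # Replace each match with a labeled placeholder so the scrubbed log
    # stays readable enough for an external AI tool to help with.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```

Putting a function like this in the path between your logs and any external tool at least turns “be careful what you send to AI” into a default instead of a reminder.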
DevOps boundaries are inconsistent. Faster development expands the surface area where teams can operate independently. That includes infrastructure, which means mistakes also happen faster. Most organizations have no clear definition of what engineering teams can do independently versus what requires DevOps involvement. There used to be more time in the development lifecycle, and DevOps could plan and deliver accordingly. The result now is bottlenecks, inconsistency, and risk from engineers doing DevOps work without the full picture.
This gets particularly dangerous when an engineer can tell an AI to provision infrastructure without understanding whether the AI is following best practices or just following orders. An inexperienced engineer (or non-engineer) with an AI assistant can create a production-facing VM, a misconfigured database, or a security hole with a few prompts, and the AI won’t flag the risk because it wasn’t asked to. On top of that, who maintains this infrastructure and does things like OS upgrades when the engineers aren’t used to following standard DevOps practices on maintenance?
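One form a guardrail can take is a policy check that runs before anything is applied. As an illustrative sketch, assuming Terraform and the JSON form of its plan output (`terraform show -json plan.out`), this flags security groups that an AI, or a human, is about to open to the world. The specific rule is a stand-in for whatever your DevOps team actually cares about.

```python
import json

def load_plan(path: str) -> dict:
    # Expects the output of: terraform show -json plan.out
    with open(path) as f:
        return json.load(f)

def open_ingress_violations(plan: dict) -> list[str]:
    # Walk the planned resource changes and flag world-open ingress rules.
    violations = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_security_group":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        for rule in after.get("ingress") or []:
            if "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
                violations.append(
                    f"{rc['address']}: port {rule.get('from_port')} open to the world"
                )
    return violations
```

Wired into CI, a non-empty result fails the pipeline before `terraform apply` runs; the AI that wrote the config never has to be trusted to flag the risk itself.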
From Naming to Structure
All of these issues are fundamentally people and organizational challenges. Process can provide the scaffolding, but the real work is in the people and the culture.
I’ve written before about the leadership philosophy of context, not control — giving people the information they need to make good decisions rather than telling them what to do. That same principle applies here, this time to the process design itself. This piece is the context part. Naming the problems clearly enough that when your teams hit them, you’ve already been thinking about them. And when everyone is working from the same vocabulary, the org starts learning from itself — teams can share what’s working because they’re describing the same problems.
There aren’t yet clean answers for all of these. Some are further along than others and will come in future posts. But the pattern behind them is already clear: AI has moved the bottleneck in software development from writing code to coordinating people. The first step is naming these problems, because they’re already here whether you’ve articulated them or not.