Hackathons: A C-Suite Perspective

A two-day hackathon looks like a gift to your team. Two days off the roadmap. Two days of building whatever they want. If you’re a CRO wondering when the feature that will close the deal is going to ship, it looks like you just lit a match on a week’s worth of productivity (because you’re also counting the ramp-down and ramp-up on either side).

But if you’re a CTO or any senior leader trying to figure out how to actually execute an AI transformation, a hackathon is the cheapest, fastest way to simulate what your organization looks like operating at the speed of AI. In other words, your future operating model, pressure-tested without the consequences.

The output isn’t the projects people build (though those can be valuable). The output is what breaks, what surfaces, and how your organization actually behaves when you remove the usual constraints. You learn things about your people, your infrastructure, and your operating model that you won’t get from training, surveys, or vendor decks — things most teams only discover during an incident.

I’ve written before about why hackathons work as an adoption strategy: they create the kinds of motivation that training can’t, and letting people solve their own problems beats mandating a curriculum. This piece is the view from the other side of the mountain. It’s about what the hackathon gives you as a leader, and why every one of those outputs is worth the two days.

Know Where Your People Actually Are

Part of the AI transformation process is knowing where your people sit on the adoption spectrum. Not where they say they are in a survey. Where they actually are when you put tools in front of them and say, “build something.”

In practice, you see the spread immediately. A small group takes off and starts pushing the tools in ways you didn’t anticipate. A larger group realizes the barrier to entry is much lower than they thought and starts experimenting. And a third group keeps working the way they always have.

The categories themselves aren’t the interesting part. What matters is that you now have an accurate map of who needs what, who should be pulled into champion roles, where your biggest adoption leverage actually is, where you’re going to need patience instead of pressure, and who will move slower by design. That last group anchors you to stability; their instinct and pacing have historically kept systems from breaking. You still need to bring them along, just more deliberately. Writing them off is how you trade short-term speed for long-term fragility.

You won’t learn any of this from training completion metrics. A hackathon gives you an honest map of your organization in two days. That map determines where you invest your time, your budget, and your coaching energy.

But only if you treat it like a diagnostic, not an event. Right after the hackathon, sit down with your leadership team and walk through what you saw. Who took off. Who needs support. Who didn’t engage. Where AI was used as a sidecar versus a primary tool.

This is where the value compounds. It’s where the signal turns into decisions about where to invest and how to move.

Bring the Shadow AI Into the Light

If you’re early in your organizational transformation, there are already people in your organization using AI. But they’re doing it on personal accounts, outside your security perimeter, with no governance and no visibility. You can’t survey for this honestly. Nobody’s going to volunteer that they’ve been pasting customer data into ChatGPT on their personal login to bypass enterprise friction. And monitoring for it destroys the trust you need for adoption to work.

A hackathon solves this. When you create a sanctioned, celebrated space for AI experimentation, people show you what they’ve been doing, even if they pretend to rebuild it just for the hackathon. They aren’t doing it because you asked, but because the context makes it safe. The person who’s been quietly automating their reporting workflow will demo it. The PM who’s been using AI to draft specs will share their process.

That visibility is valuable. Some of what surfaces needs guardrails, like a neat new UI for searching log data that turns out to contain PII. Some of it should be formalized and shared across teams: the PM’s tool for drafting PRDs, for example, could be adapted for engineers writing RFCs. Some of it is exactly the kind of grassroots problem-solving you want more of. But you can’t make any of those calls if you don’t know it’s happening.

Stress-Test Your Infrastructure at Low Stakes

When two hundred people need AI tool access at the same time, you find out very quickly what your organization isn’t ready for. Most systems fail on concurrency, not capability. Things that seemed fine at 20 or 50 users start breaking in ways you didn’t anticipate at 200 or 300.

Policies are usually the first gap. Data classification rules that made sense six months ago don’t cover the gray areas that show up when someone wants to feed customer feedback or internal data into Claude to find patterns.

Then the infrastructure gaps show up. SSO configurations that don’t include the tools people actually want to use. API gateway limits, logging gaps, missing audit trails, token tracking that works at low volume but breaks at scale. Some of this you’ve already built. Some of it behaves very differently once usage spikes.

Then the support model gets tested. Someone hits a firewall rule that blocks an API call. Someone else needs a permission escalated. Someone discovers the VPN drops their connection to a tool that works fine off-network. Individually, these are small issues. At scale, they stack. The hackathon tells you whether your IT team can actually respond at the speed broad adoption requires.

And then there’s cost. Are people burning through tokens when a subscription would be cheaper? Or sitting on subscriptions when API usage would be a fraction of the cost? Most people default to whatever they signed up for first. Multiply that across a few hundred people and the difference between intentional and accidental cost management becomes material.
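To make that concrete, here’s a minimal back-of-the-envelope sketch of the break-even math between per-token API billing and a flat per-seat subscription. Every price in it is a hypothetical placeholder, not a quote from any vendor; substitute your actual rates.

```python
# Hypothetical break-even: per-token API pricing vs. a flat per-seat plan.
# All prices below are assumed placeholders -- plug in your vendor's real rates.

SUBSCRIPTION_PER_SEAT = 30.00   # USD/month, assumed flat-rate plan
INPUT_PRICE = 3.00              # USD per million input tokens (assumed)
OUTPUT_PRICE = 15.00            # USD per million output tokens (assumed)

def monthly_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated monthly API spend for one user's token volume."""
    return (input_tokens / 1e6) * INPUT_PRICE + (output_tokens / 1e6) * OUTPUT_PRICE

# Compare a light user and a heavy user against the flat subscription.
for label, tokens_in, tokens_out in [
    ("light user", 2_000_000, 400_000),
    ("heavy user", 40_000_000, 8_000_000),
]:
    api = monthly_api_cost(tokens_in, tokens_out)
    better = "API" if api < SUBSCRIPTION_PER_SEAT else "subscription"
    print(f"{label}: API ~${api:.2f}/mo vs ${SUBSCRIPTION_PER_SEAT:.2f} seat -> {better}")
```

Under these made-up rates the light user is cheaper on the API ($12 vs. $30) and the heavy user is dramatically cheaper on the subscription ($240 vs. $30). The hackathon gives you the usage data to place each person on the right side of that line.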

If you’ve built internal templates or starter infrastructure — standardized app skeletons with SSO, monitoring, data access baked in — this is where you find out if they actually work. Can non-engineers use them? Do they hold up when thirty people spin up projects at once? Are the guardrails around customer data tight enough while still being usable? How good is the documentation — for humans and the AIs? The hackathon pressure-tests your build infrastructure the same way it pressure-tests everything else.

AI doesn’t just make individuals faster. It creates bursts of simultaneous demand on systems and processes that were never designed for that load.

A hackathon surfaces all of this at low stakes.

Engineering Isn’t the Whole Picture

If your hackathon only includes engineers, you’re only diagnosing part of the organization and leaving most of the value on the table.

Include product managers, designers, solutions consultants, marketers, ops people, CSMs: anyone whose work could be changed by AI. You discover where AI adoption hits organizational boundaries that engineering alone can’t see. The solutions team’s proposal process assumes a turnaround time that AI just compressed by 80%. The marketing team discovers they can automate content approval workflows between systems, cutting out manual copy-paste steps and the errors that come with them. These bottlenecks only surface when those teams are in the room using the tools.

You also see cross-functional connections form that didn’t exist before. A solutions consultant and an engineer end up working on similar problems and start comparing notes. A CSM and a product manager realize they’ve been solving the same workflow problem from different ends. Sometimes this reveals a missing dependency in your org. Sometimes it just means two people who had no reason to talk before now have a direct line to each other. Either way, those connections persist after the hackathon ends.

And you find champions in places you weren’t looking. Your best AI advocate might not be an engineer. It might be someone in pre-sales who realizes they can build onboarding tools, or a PM who discovers they can prototype wireframes in an afternoon. These are the people who’ll drive adoption in their own teams — and you’d never have identified them from inside engineering.

Side Quests That Power the Main Quest

Karri Saarinen at Linear talks about main quests and side quests — the idea that side quests feel productive but only the main quest advances the company’s mission. In normal operations, he’s right. But a hackathon is the one time you want people on side quests.

When someone builds a meal planner, a home renovation estimator, or a fantasy football optimizer, they’re not wasting time. They’re learning prompting patterns, discovering tool limitations, and building intuition for what AI is and isn’t good at. They’re failing in low-stakes ways that build the judgment they’ll need in higher-stakes situations. All of that transfers directly when they sit back down at their day job. The side quest and the main quest teach the same skills.

This is why the first learning hackathon should be open-ended: build whatever you want, and it doesn’t have to be work-related. When people choose their own problem, the “I don’t know where to start” barrier disappears. They already understand the domain, so the only new variable is the tooling. That’s one thing to learn instead of two. Once your team has that foundation, future hackathons can get more directed — give them themes, point them at specific organizational problems, ask them to build on what came out of the first round. But the first time, let them pick.

Work-constrained hackathons tell you who follows instructions. Open ones tell you who has curiosity, initiative, and creative instincts. Side quests are the training that sticks, because the work is self-directed and personally motivated.

Real Data for Real Decisions

Before the hackathon, your AI transformation budget is projections and vendor promises. After, it’s based on how your organization actually behaved.

How many seats do you really need? Which tools/vendors did people gravitate toward? What does cost-per-user look like across different platforms? Where did the productivity gains show up, and where didn’t they? Do you have the right protections and limits in place?

AI comes at a real cost, and you have to justify it — to the CFO, to the board, and to the rest of the company. But you also have to be able to justify it to yourself. Your job isn’t just to fund technology. It’s to make sure the spend translates into real outcomes.

“Here’s what three hundred people built in two days” is a fundamentally different conversation than “here’s what a vendor, a consultant, or AI thinks we should budget.” You have demo-able outputs. You have usage data. You have a map of organizational readiness.

That converts a meaningful portion of your AI investment from a speculative line item into something grounded in observed behavior.

Two Days of Recon

Every one of these — adoption patterns, shadow AI visibility, infrastructure limits, cost behavior, cross-functional connections, real budget data — is something you need to understand, and something most organizations only discover when it’s already causing problems.

The two days aren’t only a gift to your team; they’re reconnaissance. The projects people build, and the relationships they form along the way, are a bonus. The real output is how your organization actually behaves when it’s running at AI speed.

Most organizations don’t choose when they learn this. They just delay it. In other words, you can learn this safely, or you can learn it when it matters.
