AI broke the model compliance was built around. The issue is no longer preventing people from building software. The issue is now knowing what already exists. When anyone in the company can ship working tools, governance stops being primarily a process problem and becomes a visibility problem. That visibility comes from culture, not tools or process.
Someone in your sales org vibe-coded an internal dashboard over the weekend with all of your sales numbers in it. They deployed it on Vercel and left it publicly accessible, having forgotten to put a password on it. If you’re a CTO, this sounds like a nightmare you hope is happening to someone else.
That’s much less of a hypothetical than I want it to be. The only reason it didn’t turn into something worse is that the person who built it also excitedly shared what they’d built with a larger group. During the demo, the missing password came up when others tried to access it. Thankfully, it was fixed in just a few minutes.
Imagine if that same person hadn’t shared it publicly. Maybe no one would have noticed that it was accessible without a password. In that case, you would probably find out in one of three ways: 1) someone tells them in private, they quietly patch it, and the rumor mill carries it to your doorstep, 2) an external auditor finds it during your next ISO (or SOC, or whatever) review, or 3) someone outside the company finds it first. All three come with their own challenges. The best version of this requires a culture where people openly share their work.
A few weeks ago, Francesco Panina, CTO at Trustfull, shared a story in a LinkedIn thread that reinforces how these things are happening everywhere, right under our noses:
An internal CSV dedupe tool someone wrote over a weekend ended up wired into three automated compliance reports. Nobody labeled it infrastructure, it stayed unversioned for months, and the team didn’t notice until it broke during an audit. Building was trivial, finding out it existed in that role was the expensive conversation.
That last line is the actual shadow IT problem in 2026. Building is cheap. Recognition — knowing a thing exists and what role it’s playing — is expensive. Most of the writing about shadow IT focuses on the building side, and the lens (unfortunately) often paints those builders in a negative light, as rule breakers. Building in the age of AI has gotten so easy that the difficulty for compliance has migrated almost entirely to the discovery side: knowing what people are building, what data they’re using, and where they’re deploying it. This is where most leaders are flying blind. Visibility is the precondition for governance. You can’t assess or remediate what you can’t see.
Culture is what determines if that visibility is proactive or reactive. If sharing what you’ve built is the default, recognition happens through enthusiasm. This can be as easy as somebody posting in Slack, somebody else seeing it and simply asking, “are you using that for the compliance reports?” The recognition happens proactively because many people see the work while it’s still evolving. If sharing isn’t the default, recognition happens reactively through audits, outages, or customer complaints.
When the Pipeline Isn’t Enough
For anyone accountable for ISO 27001, SOC 2, or any other compliance system, that asymmetry isn’t an abstract idea. When building lived inside engineering, the development pipeline was the visibility layer. Code review, change management, deployment pipelines — governance had a place to apply itself, because the people working inside that pipeline were specifically trained on compliance, privacy, and security. AI expanded the builder population faster than governance could adapt. The population of people who can ship working software is now roughly the whole company, and on top of that, they make their own pipelines. Without a different visibility layer, the compliance work has nothing to attach to.
People aren’t building in the shadows because they’re sneaky. There are many reasons these things get done behind closed doors: asking takes too long, the official path is full of judgment, peer reactions to unpolished work are unpredictable, and most orgs (often through fear-based compliance) make hiding easier than showing. These reasons aren’t character flaws; they’re how humans behave when the environment makes openness feel costly. Leaders who forget this end up writing policies for the people they want to have, not the people they actually have. The policies don’t work because the environment is the problem, not the humans. Fixing the environment starts with making it safe to be visible.
Visibility Becomes Culture
Psychological safety gets talked about so broadly that it often stops meaning anything operational. Here, it’s concrete: people have to believe that showing unfinished work, asking naive questions, or exposing rough edges won’t be punished. The moment sharing feels risky, visibility collapses and the work goes private. In an AI-enabled organization, that’s first a culture problem. When you don’t know what people are building and deploying, it becomes a governance problem too.
The mistake most orgs make when they try to “share more” is treating it as a single mechanism. The mechanisms themselves are not the point; it’s the behavior. Each one lowers a different social cost of visibility: sharing interest, admitting confusion, showing rough work, announcing shipped work, and building in public with permission. Compliance benefits because the organization sees more of the work earlier, before it hardens into accidental infrastructure.
(Slack) #inspirations. I saw something cool out there. This lowers the first social hurdle by saying “this looks useful” before anyone has committed to building anything. Here, visibility starts before execution. People are more likely to show what they’re experimenting with later if they’ve already practiced sharing what caught their attention.
(Slack) #ai-help. I’m stuck. This lowers the cost of admitting confusion before people give up or create unsafe workarounds in private. Questions like “why is Claude hallucinating this API endpoint?” or “how should I structure this workflow?” surface early while the work is still malleable. The point is not just education. The real value is visibility into where people are experimenting, struggling, and improvising. This channel dies the moment people get embarrassed or put down for asking basic questions. If confusion becomes private, the resulting systems do too.
(Meeting) Builders Unscripted. I want to show this live (or we want it demoed live). A weekly demo session open to anyone building things, regardless of role or team. No slides, no scripts, no polish requirements. The demos create motivation, sharing of ideas, and ambient awareness across the org. The important part from a product or compliance perspective is that work becomes visible while it’s still rough enough to change. Someone notices a potential security concern. Someone recognizes duplicated effort. Someone else realizes they already solved part of the problem and communicates it. Visibility this early is dramatically cheaper than discovering the same thing during an audit or outage review. This is one of those meetings that I ran initially and have since handed off. These things tend to work better when they belong to the builders rather than feeling imposed by the top of the org chart.
(Slack) #releases. This is live. Anyone can post shipped work here: internal tools, automations, experiments, fixes, dashboards, workflows. This skews more towards lightweight organizational visibility than formal documentation. Before AI, engineering systems naturally created inventories through pull requests, deployment pipelines, and change management. Now that the builder population includes everyone, that visibility layer has to emerge socially instead. #releases acts as a low-friction inventory of what actually exists inside the company. The mechanism only works if posting stays cheap. The moment sharing requires specific templates, approvals, or process overhead, people stop doing it and the work disappears back into DMs and side projects. At Mapp, we also have a Notion agent that, in just a few seconds, turns a ramble of “here’s what I shipped in this release” into a clean, paste-able post. The structural work is automated; humans only have to provide the substance.
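As a rough sketch of the structuring half of such an agent: in practice an LLM extracts the fields from the ramble, and this only shows the templating step. The field names (`title`, `what_changed`, `who_its_for`) are invented for illustration, not Mapp’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Release:
    """Fields an agent might extract from a builder's ramble.
    Field names are illustrative, not an actual schema."""
    title: str
    what_changed: str
    who_its_for: str
    link: str = ""

def format_release_post(r: Release) -> str:
    """Turn extracted fields into a consistent, paste-able #releases post.
    The human provides the substance; the structure is automated."""
    lines = [
        f"🚀 {r.title}",
        f"What changed: {r.what_changed}",
        f"Who it's for: {r.who_its_for}",
    ]
    if r.link:
        lines.append(f"Try it: {r.link}")
    return "\n".join(lines)
```

The design point is that the cheap path (ramble in Slack) and the structured path (consistent post) are the same action for the human.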
(Event) Hackathons. Temporary permission structures for public experimentation. For a fixed period, building becomes the assignment rather than a distraction, and unfinished work stops carrying the same social risk. People who would never demo during the rest of the year suddenly prototype openly because everyone else is doing it too. This matters more than most leaders realize, both for the people building and for the people watching. A surprising amount of “serious infrastructure” starts life as something someone only felt comfortable showing because the environment temporarily suspended judgment. Many of the things that later appear in #releases first surfaced as a hackathon project.
Not every org needs all of these specific channels and meetings. The point is that any one alone would likely fail. Someone who’d never demo at Builders Unscripted will post in #releases after the fact. Someone who’d never post in any channel will hear about it at a town hall when something that got shipped gets referenced. The redundancy is the design.
When Visibility Compounds
Visibility doesn’t just help mitigate shadow IT. Over time, it produces something much more valuable on the path to becoming AI-native. Being an AI-native company has multiple facets; one of the core ones is whether your organization can learn from its own activity at the speed AI enables. Most companies think “AI-native” means employees using Claude Code/Co-work, Cursor, or Codex. That’s AI-enabled, not AI-native. Moving from AI-enabled to AI-native is a much harder shift, and it’s a cultural one. AI-native means using AI to build the visibility your organization needs to learn from its own work fast enough that the learning compounds.
The mechanisms above create different ways for people to make work visible. There’s a complement to that: a way for people to surface what they tried to build and couldn’t, or what they shipped and immediately wished was different.
(Slack) #product-feedback. Here’s what’s missing or broken. The intake mirror of #releases. When someone is building and runs into a missing API endpoint, a dataset that lacks enrichment, an integration gap, or a capability they expected to exist and didn’t, they now have a place to put it that doesn’t immediately get lost in Jira bureaucracy. Customer feedback from the commercial teams (CSMs, account managers) goes here too. An agent reads each post, asks follow-up questions if needed, and neatly files everything into a Notion database that we use during roadmap planning, requirements writing, and figuring out who has the right expertise in that specific corner of the product. Overall, it functions primarily as a signal channel rather than a structured feature-request queue, though it captures both.
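To make the filing step concrete, here is a toy sketch of the routing an agent might do. The category names and trigger phrases are invented for the example; a real agent would classify with an LLM rather than substring matching.

```python
# Hypothetical gap categories and trigger phrases for routing
# #product-feedback posts into a planning database.
CATEGORIES = {
    "api_gap": ("api", "endpoint"),
    "data_gap": ("enrichment", "missing field", "missing data"),
    "integration_gap": ("integration", "connector", "webhook"),
}

def triage(post: str) -> list[str]:
    """Tag a feedback post with every gap category it mentions,
    so it lands in the right bucket instead of a generic backlog."""
    text = post.lower()
    return [cat for cat, phrases in CATEGORIES.items()
            if any(p in text for p in phrases)]
```

A post can carry multiple tags at once, which matches the channel’s role as a signal feed rather than a one-ticket-per-post queue.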
Once the organization develops enough visibility, the mechanisms start reinforcing each other. Someone sees something interesting in #inspirations, gets unstuck to build it through #ai-help, demos a rough version at Builders Unscripted, or skips that and posts the finished version in #releases, then leaves behind feedback about the missing API endpoints, integrations, or data gaps they hit along the way in #product-feedback. The next person starts with a lot more knowledge and insight at their fingertips instead of thinking through everything from scratch.
Visibility compounds when the organization’s experiments, releases, questions, and failures become searchable instead of disappearing into chat history and DMs. Once people can query what already exists inside the company (prior experiments, integrations, workflows, APIs, failed attempts), each round of building gets cheaper. Teams stop rediscovering the same things over and over because organizational memory becomes accessible at the moment someone is building. At Mapp, that includes searchable Slack history; recorded, transcribed, and stored demos and meetings; structured feedback capture; and agents that help organize the information into something searchable and reusable.
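The “searchable” part doesn’t need to be exotic. A minimal sketch of the idea, using a simple inverted index over whatever artifacts exist (doc ids, tokenization, and content here are simplified assumptions, not the actual Mapp setup):

```python
from collections import defaultdict

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercase token to the set of artifact ids containing it.
    Artifacts could be demo transcripts, #releases posts, feedback entries."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return artifact ids containing every token in the query."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = set(index.get(tokens[0], set()))
    for t in tokens[1:]:
        results &= index.get(t, set())
    return results
```

In practice this is what embedding search and agents do at higher fidelity; the point is only that memory becomes queryable at build time instead of being buried in chat scrollback.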
When the loop is closed, people aren’t just seeing what got shipped — they’re seeing the lifecycle, from idea to deployment to iteration. This includes things like which internal APIs were involved, which Notion patterns were used, or which vendor integrations were leveraged. That’s how good internal patterns spread through the company instead of every team rebuilding the same workflows independently.
When the Mechanisms Work
The Vercel deployment from the opening is what this looks like when it works. The builder isn’t a software engineer, and “lock down the deployment” is not a step that lives in the muscle memory of someone whose job is selling. There was no malice or negligence involved. He just built a useful tool to solve his own problem, deployed it quickly, and missed a step that experienced builders would have caught reflexively. He shared what he’d built in a forum where someone would see it. Someone clicked through to the URL, realized they were looking at the full sales dashboard without being asked for a password, and replied with some version of “uh, is this supposed to be open to the world?” Fixed in minutes, with limited exposure.
This would play out differently in most organizations. Same person, same tool, same missing password; but he doesn’t share broadly because he’s worried about having his hand slapped for not going through the right channels. Now you find out one of three ways: he tells someone in private and they patch it (best case), an external auditor finds it during your next ISO review (Francesco’s case), or someone outside the company finds it first (worst case) and you really hope they tell you. All three can be expensive in their own way: in audit findings, in customer trust, or in the time it takes to clean up. When sharing is the default, catching things like a missing password becomes part of the normal back-and-forth of work rather than a fire drill.
What This Doesn’t Mean
One natural reaction to all of this is concern that low-friction sharing means low-quality output — that you’re trading process for visibility. That’s not the trade-off. The more common objection is about workload: you’re adding more work for people, and they’ll either do it sloppily or have AI do it for them and create slop.
Posting in #releases doesn’t trigger documentation, and it doesn’t raise the bar for productized delivery. The act of sharing is intentionally cheap: no template enforcement, no mandatory cross-posting, no approval workflow, no required PRD. Friction at the point of sharing is what creates the shadows in the first place; the whole mechanism collapses if that friction comes back. Resist the temptation to add structure. Let an agent support adding that structure post-hoc.
Your Head of Compliance might hear “celebration” and envision a breakdown of discipline. The reality is the opposite. Formal compliance (the SOC 2 reports, the ISO evidence folders, the access reviews, the actual proof) is a downstream consumer of visibility. It can only govern what it can ingest. By the time a “weekend project” becomes accidental infrastructure, it’s usually too late to retroactively apply controls without a massive tax. When the social layer captures the birth of a tool in #releases or a demo, you get visibility at the start of the lifecycle instead of at the point of failure. At Mapp, we use agents to bridge this gap: an agent watches the “celebration” channels, identifies anything that crosses a risk threshold (like handling PII or hitting a production database), and flags it to compliance for review.
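A sketch of that thresholding shape: the risk categories and trigger phrases below are invented for illustration, and real detection would combine an LLM with data classification rather than phrase matching.

```python
# Hypothetical risk signals a watcher agent might look for
# in posts from the "celebration" channels.
RISK_SIGNALS = {
    "pii": ("customer email", "personal data", "pii"),
    "production_data": ("production database", "prod db"),
    "public_exposure": ("publicly accessible", "no password"),
}

def flag_for_compliance(post: str) -> list[str]:
    """Return every risk signal a post trips, in declaration order."""
    text = post.lower()
    return [risk for risk, phrases in RISK_SIGNALS.items()
            if any(p in text for p in phrases)]

def needs_review(post: str) -> bool:
    """True if the post crosses any threshold and should be routed
    to compliance; everything else stays pure celebration."""
    return bool(flag_for_compliance(post))
```

The key property is asymmetry: posting stays free for the builder, and the cost of triage lands on the agent and the compliance reviewer only when a threshold is crossed.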
That is the point where the social convention becomes formal process. Documentation, compliance review, and privacy review are real and they kick in when the situation calls for it: when something is used by multiple teams, when it touches customer data or PII, or when other systems start depending on it. That’s the same threshold logic from the polish levels framework; the lens applies based on what the thing has become, not based on who built it. Visibility is the cheap part. Governance scales with the stakes of what got built. The compliance side of this gets its own future post.
Same Mechanism, Different Population
The #releases channel isn’t new. I ran a version of it years ago at a 2,000+ person company. It worked there for the same reasons it works now: visibility across teams, lightweight discovery, recognition of useful work, and occasionally catching duplication before it hardened into infrastructure.
What’s changed isn’t the mechanism. It’s the population it serves.
At 2,000+ people, the “tech team” was a (large, but) bounded group, and #releases tracked what they were shipping. Other functions watched, but they were rarely building software themselves. In a smaller AI-enabled company, the builder population expands far beyond engineering: sales, finance, operations, and customer-facing teams are all capable of shipping internal tools and automations now, and they all do. The same mechanism suddenly matters to the whole company because building is no longer isolated to engineering. The channel itself was never the important part. The important part was making visibility cheap enough that people actually participate. Smaller orgs get more out of the same mechanism. At 2,000 people, a #releases post might reach a few hundred relevant readers. At 200, it reaches everyone who could possibly care. The same effort, several times the return.
The Overlap is Intentional
I know that this sounds like a lot of Slack channels and meetings layered on top of an already noisy workday. Most leaders who consolidate visibility into fewer mechanisms are doing it in good faith, usually to reduce distractions, context-switching, and information overload. The intention is right; the trade-off is wrong.
The problem is that cultural visibility doesn’t spread reliably through a single channel or meeting. Different people participate through different mechanisms. Someone who never demos live will still post after the fact in #releases. Someone who ignores every Slack channel might still hear about a project when it gets referenced in a town hall or leadership update. The overlap is intentional.
Most of these mechanisms are also opt-in attention rather than mandatory process. Mute the channels. Skip the meeting. Watch the recording later (at 1.5x/2x) or ignore it entirely. The important thing is not that everyone participates everywhere. The important thing is that the work becomes visible somewhere before it turns into accidental infrastructure.
The visibility layer only works if people encounter it through multiple paths. The cost of a few extra channels is much lower than the cost of any one of them being the single point of failure for organizational visibility.
The New Visibility Layer
AI expanded the population of people capable of shipping working software far beyond engineering, but the visibility systems around that work didn’t expand with it. That’s why so many organizations now feel simultaneously more productive and less governable.
Most companies are still trying to solve this bureaucratically with tighter process and more approvals. I think that’s backwards. The organizations adapting best are not the ones trying hardest to stop people from building. They’re the ones enabling building while making it visible early enough that governance can keep up.
That visibility layer is cultural before it’s technical. The moment visibility feels risky, the work goes private and governance becomes reactive again.
Celebration sounds culturally soft until you realize it’s functioning as governance infrastructure. In AI-enabled organizations, the same culture that accelerates experimentation is also the thing that makes the organization governable again.