The Platform Nobody Uses

You can build the most impressive platform in the world. But if developers don't want to use it, the platform has failed. Developer experience (DevEx) is not a nice-to-have: it is the metric that determines whether a platform survives or dies.

What DevEx Actually Looks Like

Consider what good DevEx feels like. You push code, CI runs automatically, the service is deployed within five minutes, and logs and metrics are available immediately.

Now consider the alternative. You push code and wait 40 minutes for CI. You manually edit a configuration file. You message someone on Slack to request deployment approval. You SSH into a server to check whether the service is actually running.

Both scenarios produce the same end result. But the experience along the way is entirely different, and it is this difference that determines developer productivity and satisfaction.

The SPACE Framework

Microsoft Research proposed the SPACE framework as a systematic way to understand developer productivity. It serves as a useful lens for platform teams evaluating their own effectiveness.

| Dimension | What It Measures | Platform Example |
|---|---|---|
| Satisfaction | How developers feel about tools | Survey: "How easy is it to deploy?" |
| Performance | Outcome quality | Change failure rate, rollback frequency |
| Activity | Volume of actions | Deploys per day, PRs merged |
| Communication | Collaboration quality | Time to get help, doc quality |
| Efficiency | Friction to complete tasks | Time from commit to production |

The trap to watch for is optimizing a single dimension in isolation. A platform that is fast but frustrating to use does not constitute good DevEx. One that is easy but unreliable doesn't either. Only when multiple dimensions are considered in balance can the platform deliver a meaningful developer experience.

What to Measure

Time-based metrics can be measured directly. These include the time from commit to production, the speed of provisioning a new environment, how long it takes a new developer to reach their first deployment, and CI duration. Because they produce clean numbers, these are the easiest indicators to track.
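As an illustration, commit-to-production lead time reduces to two timestamps per deploy. The records below are invented; in practice they would come from your CI system's API or deploy event log.

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy records: (commit_time, deployed_time) pairs.
# In a real setup these would be pulled from CI/CD events, not hardcoded.
deploys = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 12)),
    (datetime(2024, 5, 1, 14, 0), datetime(2024, 5, 1, 15, 5)),
    (datetime(2024, 5, 2, 10, 30), datetime(2024, 5, 2, 10, 41)),
]

# Lead time per deploy, in minutes.
lead_times = [(done - commit).total_seconds() / 60 for commit, done in deploys]

print(f"median lead time: {median(lead_times):.0f} min")
```

Reporting a median (or a p90) rather than an average keeps one pathological deploy from distorting the trend line.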

Adoption metrics reveal whether developers are actually using the platform. This means tracking the percentage of teams following the golden path, the ratio of self-service usage to ticket requests, and whether developers return after their first use. If adoption is high but satisfaction is low, the likely explanation is that usage was forced rather than chosen.
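These adoption ratios are simple to compute once the counts exist. A minimal sketch with made-up numbers (the variable names and figures are illustrative, not from any real platform):

```python
# Hypothetical monthly counts, e.g. from platform telemetry and the ticket queue.
teams_total = 40
teams_on_golden_path = 26
self_service_actions = 300   # provisioning/deploys done via the platform
ticket_requests = 100        # the same kinds of requests filed as tickets

golden_path_pct = teams_on_golden_path / teams_total * 100
self_service_ratio = self_service_actions / (self_service_actions + ticket_requests)

print(f"golden path adoption: {golden_path_pct:.0f}%")
print(f"self-service ratio: {self_service_ratio:.0%}")
```

The hard part is not the arithmetic but instrumenting both sides consistently, so that a ticket and a self-service action for the same task are counted against the same denominator.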

Satisfaction metrics capture how developers feel. The approach involves running short quarterly developer surveys, tracking NPS for internal tools, and analyzing support tickets for recurring themes.
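NPS for an internal tool uses the standard formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A sketch with hypothetical survey responses:

```python
# Hypothetical answers to "How likely are you to recommend the platform
# to a colleague?" on a 0-10 scale.
scores = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]

promoters = sum(s >= 9 for s in scores)   # scores of 9 or 10
detractors = sum(s <= 6 for s in scores)  # scores of 0 through 6
nps = (promoters - detractors) / len(scores) * 100

print(f"NPS: {nps:+.0f}")  # 5 promoters, 2 detractors -> +30
```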

Feedback Loops

It is important to maintain both short and long feedback loops. In the short loop, a developer reports friction and the platform team ships a fix within days. This builds trust. In the long loop, quarterly surveys reveal trends and the platform team adjusts the roadmap accordingly. This builds strategy.

The worst thing a platform team can do is ask for feedback and then take no action. When this happens repeatedly, developers stop reporting what is broken and start building workarounds on their own. By that point, trust in the platform has already been lost.

Anti-Patterns That Kill Platforms

The "build it and they will come" mentality leads to skipping onboarding, documentation, and migration support entirely. The result is a platform that nobody adopts.

The opposite extreme is equally dangerous: forcing teams onto a platform before it is ready. Mandating adoption is the fastest way to burn through the trust that a platform team depends on.

Then there is the trap of celebrating "1,000 deploys this month" while overlooking the fact that half of them failed. Tracking activity metrics without measuring outcomes allows the numbers to obscure reality.
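The gap between activity and outcomes is easy to make concrete. With invented numbers matching the scenario above:

```python
# Hypothetical monthly deploy log:
# True = succeeded, False = caused an incident or rollback.
deploys = [True] * 520 + [False] * 480

activity = len(deploys)                                 # what gets celebrated
change_failure_rate = deploys.count(False) / activity   # what gets overlooked

print(f"{activity} deploys, {change_failure_rate:.0%} change failure rate")
```

Pairing every activity metric with at least one outcome metric is a cheap guard against this trap.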

The golden path works smoothly under normal conditions, but the real test comes when an error occurs. If all the developer sees is a cryptic message with no guidance, they will route around the platform entirely.

Over-abstraction follows the same pattern. When too much complexity is hidden, nobody can debug problems when they arise. Abstraction is valuable, but the moment it becomes a wall rather than a window, it turns into an obstacle.

The Simple Test

A platform is a product, and developers are its users. No one would ship a customer-facing product without caring about UX. An internal platform deserves the same consideration.

Every platform decision should pass one test: does this make developers' lives easier, or harder?

In the next post, we'll look at golden paths in practice: how to design default routes that make the right thing the easy thing.