Why Great Ideas Get Watered Down (and How to Stop It)
The Art of Keeping MVPs Bold When Everyone Wants to Play It Safe
Most products don’t die from bad execution—they die from starting too safe.
There’s a painful irony in how we build MVPs today: we water down bold ideas in the name of “starting small,” only to miss the very insights these experiments are meant to uncover.
Conventional wisdom tells us to be conservative:
"Let's be realistic here..."
"We can innovate after we get traction..."
"Users expect certain features..."
But this approach creates a dangerous paradox:
By trying to de-risk our experiments, we actually increase the risk of building something nobody wants.
Think about it—if your MVP is meant to test your boldest assumptions and learn quickly, shouldn’t it lean into what makes your idea different rather than hide it?
The MVP Dilution Pattern
Here’s a controversial take:
We’re far too obsessed with competitor analysis when building MVPs.
Early in my career, I worked in an emerging product category. It quickly became clear that in a new market, everyone is just guessing.
Copying competitors is like mimicking the person next to you during a test neither of you understands.
You might get lucky—but you’re just as likely to repeat their mistakes.
How Ideas Get Watered Down
Time and again, I’ve seen teams fall into this trap. They start with bold visions, rooted in genuine insights. But then something dangerous happens:
Ideas get watered down through a series of seemingly reasonable decisions:
“If they’re doing it, maybe we should too...”
“Nobody else does it that way...”
“Start with the proven approach...”
Each choice seems sensible in isolation. But together, they create three fatal flaws that kill most products before they have a chance to matter.
1. You Blind Yourself to Reality
When teams copy competitors, they miss critical truths:
You don’t know their context or what’s really working for them.
You don’t know which features they regret building.
You don’t know the problems they’re actually failing to solve.
Building based on assumptions about others’ decisions makes you a fool twice:
First, for guessing their context wrong.
Second, for abandoning your own unique insights.
2. You Lose What Makes You Different
Safe MVPs default to validating the obvious. You might confirm that users like slightly better notifications, but you’ll miss whether your bold, radical idea solves their deeper, more critical problem.
By prioritizing industry norms and competitor features, you dilute the very vision that set you apart.
What’s left?
A weaker, less compelling version of what’s already out there.
3. You Get Useless Feedback
Safe MVPs rarely succeed or fail in meaningful ways. Instead, they drift into a dangerous middle ground where neither outcome tells you if your core idea was right:
Success feels tepid: “Users kind of like it.”
Failure feels unsurprising: “We didn’t execute well enough.”
The result?
Neither outcome generates actionable insights to guide your next steps.
Your Edge: Your Unique Insights
Your greatest strength lies in your unique insights—the accumulated experiences and observations that shaped your hypothesis.
This is what sets truly innovative products apart. When you dilute it with competitor features, you don’t just weaken your product—you miss the chance to test what truly matters.
Here’s the paradox:
The safer your MVP feels, the greater the risk that it becomes irrelevant.
Instead of learning from bold experiments, you’re left with a product that offers no clear path forward.
A Framework for Bold MVPs
Building a bold MVP doesn’t mean ignoring practical constraints. It means focusing on what matters most: testing the assumptions that could make or break your vision.
Here’s how to do it:
1. Find Your Sacred Hypothesis
Every product has assumptions, but not all assumptions are equal. Some are important—others are sacred. Sacred hypotheses are the beliefs that everything hinges on.
Your sacred hypothesis is the one assumption that, if wrong, invalidates your vision entirely.
Most teams spread their MVP too thin, trying to validate everything at once. Instead, focus on this one core belief—because if this is wrong, nothing else will matter.
Think about it this way:
Even if your UI, performance, and features are flawless, would your product still matter if this assumption were wrong? That’s the question your sacred hypothesis answers.
Examples:
Figma’s Hypothesis Might Have Been:
“Will designers embrace real-time collaboration as a fundamental part of their workflow?”
It wasn’t just about building a “better design tool.”
The hypothesis likely went deeper: if designers weren’t ready to shift from static to dynamic workflows, Figma’s entire vision of real-time collaboration would fail.
Slack’s Hypothesis Might Have Been:
“Will teams shift away from email and adopt channel-based communication as their primary mode of interaction?”
It wasn’t just about building a “better chat app.”
Slack likely needed to validate whether teams could break deeply ingrained habits tied to email, or if they’d only use Slack as a supplemental tool.
These examples highlight how sacred hypotheses focus on behavioral shifts, not just product features or market size.
2. Design for Strong Reactions
Your MVP shouldn’t aim for broad appeal—it should seek polarizing responses.
The goal isn’t to make something everyone likes a little—it’s to find out if anyone loves it a lot.
Safe ideas often lead to vague, lukewarm feedback. What you need is clarity, and strong reactions—both positive and negative—are the best teachers.
Case Study: Notion
When Notion launched, it ignored traditional document organization norms like folder structures and separate apps for notes, tasks, or databases. Many enterprise users found it confusing, but knowledge workers who craved flexibility became its most vocal advocates.
The takeaway?
Polarizing responses validated Notion’s core assumption: some teams will trade familiarity for flexibility.
Focus your MVP design on features that explicitly test your sacred hypothesis, even if they challenge user expectations or norms.
3. Set Clear Learning Thresholds
Most MVPs fail not because they don’t gather data, but because teams don’t know what data would change their minds.
Before writing a single line of code, you need to define clear thresholds:
What evidence would validate your sacred hypothesis?
What would invalidate it?
What would tell you you’re asking the wrong question entirely?
Examples:
Figma’s Learning Threshold Might Have Been:
“Do design teams stop exporting files for collaboration within their first week?”
If teams still exported files after using Figma’s collaborative tools, it would have signaled resistance to real-time workflows.
Slack’s Learning Threshold Might Have Been:
“Does internal email usage drop by 80% within one month of team adoption?”
If email usage stayed the same, Slack would have known its hypothesis about replacing email as the primary communication mode needed to be revisited.
These examples highlight how learning thresholds force binary clarity—metrics so specific that you can’t rationalize ambiguous results.
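To make the idea concrete, a learning threshold can be written down as an explicit pass/fail check before the experiment runs, so the result can’t be rationalized afterward. Here’s a minimal sketch in Python using the Slack-style email example above; the function names, the 500/75 email counts, and the 80% target are all illustrative assumptions, not anything from an actual Slack experiment.

```python
# Hypothetical sketch: pre-registering a learning threshold as code,
# so the verdict is binary rather than a post-hoc judgment call.
# All names and numbers below are illustrative assumptions.

def email_reduction(baseline_emails: int, current_emails: int) -> float:
    """Fraction by which internal email volume dropped since adoption."""
    if baseline_emails <= 0:
        raise ValueError("baseline must be positive")
    return (baseline_emails - current_emails) / baseline_emails

def evaluate_threshold(baseline: int, current: int, target: float = 0.80) -> str:
    """Return a binary verdict: 'validated' or 'invalidated'."""
    return "validated" if email_reduction(baseline, current) >= target else "invalidated"

# A team sending 500 internal emails/week before adoption:
print(evaluate_threshold(500, 75))   # 85% drop -> "validated"
print(evaluate_threshold(500, 400))  # 20% drop -> "invalidated"
```

The point isn’t the code itself—it’s that the threshold (80%) and the measurement (email volume) are fixed before you see the data, which is exactly the “binary clarity” the examples describe.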
4. Embrace Strategic Ignorance
The hardest part of building a bold MVP isn’t deciding what to test—it’s deciding what to ignore.
Every user interview will surface “critical” features. Every stakeholder will have their own “essential” priorities. The art lies in filtering out what doesn’t directly serve your sacred hypothesis—at least for now.
How to Focus:
Be transparent about what you’re choosing not to test and why.
Align with stakeholders early to avoid misaligned expectations.
Remind yourself: You can always add features later, but you can’t retroactively test your sacred hypothesis.
Mindset Shift:
If you bury your sacred hypothesis under “must-have” features, you’ll never learn if your core assumption was right. You’re not saying no—you’re saying “not yet” to ideas that aren’t critical to your current learning goals.
The Bottom Line
Building MVPs isn’t about creating something safe enough to launch—it’s about creating something bold enough to learn from.
Your MVP doesn’t need to be complete. It doesn’t need to be polished. It doesn’t even need to succeed.
But it must test your sacred hypothesis.
Here’s what most teams forget: Early adopters don’t want a watered-down version of what already exists. They want a glimpse of what could be.
They’re not looking for a marginally better version of today—they’re looking for different.
So, give them different.
→ The risk isn’t in being wrong—it’s in being so careful that you never learn if you’re right.
Three ways to help support my work:
Like, Comment, and Share—Help increase the reach of these ideas.
Subscribe for free—Get future posts and join our community.
Become a paid subscriber—Support this creative journey.
Keep Iterating,
—Rohan
→ Connect with me on LinkedIn, Bluesky, Threads, or X.
→ I’m always open to interesting conversations and collaborations! Reach out at rohan@productartistry.com