What is a minimum viable product (MVP)?
A minimum viable product, or MVP, is the simplest version of a product that allows a team to start learning with real users. The term was popularized by Eric Ries in The Lean Startup, where he defined it as "that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort."
The key word is learning. An MVP is not a stripped-down version of your full product vision. It's a learning vehicle — a deliberately minimal artifact designed to test a specific hypothesis about your users, your market, or your solution.
What an MVP is not
The concept of the MVP is widely misused, which has diluted its meaning.
An MVP is not a rough version of your whole product. Building every feature but with low quality is not an MVP — it's a bad product. An MVP is narrow in scope, not sloppy in execution.
An MVP is not a prototype. Prototypes are for testing usability and design. An MVP is meant to create real value for real users and generate genuine behavioral data. Users interact with an MVP in their actual workflow, not in a controlled test.
An MVP is not a one-time event. The MVP is the start of a build-measure-learn cycle, not an end point. After launching and learning, you update your assumptions and ship the next iteration.
An MVP is not the smallest possible thing you can build. It's the smallest thing that can teach you what you need to know. Sometimes that requires more functionality than teams expect; sometimes far less.
The origin of the MVP concept
Eric Ries introduced the MVP as part of the Lean Startup methodology, which applies lean manufacturing principles to product development. The core insight is that most startups fail not because they can't build their product, but because they build the wrong product.
The antidote is to shorten the feedback loop between building and learning. Instead of spending a year developing a fully featured product only to discover it doesn't resonate, you test your riskiest assumptions early with the minimum investment required.
Ries drew on the earlier work of Steve Blank, who championed customer development — the idea that founders should spend as much time learning about customers as they do building products. The MVP is the artifact that operationalizes that learning.
How to define the right MVP scope
The most common mistake teams make with MVPs is not making them minimal enough.
Start with your riskiest assumption. Which single assumption, if wrong, would invalidate your entire approach? That is what your MVP should test. Everything else is scope creep.
Define success criteria before you build. What would you need to observe to call this MVP a success? Be specific: a conversion rate, a retention threshold, a target number of return visits. Without a clear success criterion, any result can be rationalized.
Consider non-software MVPs. Some of the most effective MVPs involve no code at all. A landing page, a manual process run by a human, or even a video can test demand before any product is built.
Resist the pull toward completeness. Stakeholders and engineers often push for "just one more" feature before launch. Each addition delays learning. If the feature doesn't directly test your core hypothesis, cut it.
Famous examples of MVPs
Dropbox launched with a three-minute demo video explaining a product that didn't fully exist yet. The video drove sign-ups from 5,000 to 75,000 overnight, validating massive demand before significant engineering investment.
Airbnb started as a website where the founders rented out air mattresses in their own apartment during a conference. They handled everything manually — photos, messaging, check-in — to test whether strangers would pay to stay in someone else's home.
Zappos founder Nick Swinmurn tested the idea of online shoe retail by photographing shoes at local stores, posting them online, and buying each pair from the store only after a customer ordered it. There was no inventory, no warehouse, no logistics infrastructure, just a test of whether people would buy shoes they couldn't try on first.
Each of these MVPs was designed to answer a single critical question with the least possible overhead.
How to learn from an MVP
Launching an MVP is not the end of the work — it's the beginning. The value of an MVP comes from what you do with what you observe.
Measure behavior, not opinions. What users do is more reliable than what they say they'll do. Track retention, activation, and task completion rather than relying solely on survey responses.
Talk to your early users. Quantitative data tells you what is happening. Qualitative conversations tell you why. A drop-off in your activation flow needs both — the analytics to spot it, and an interview to understand it.
Update your assumptions explicitly. After each learning cycle, write down what you assumed, what you observed, and what you now believe. This creates a record of how your thinking evolves and keeps the team honest.
Decide: pivot, persevere, or kill. Based on what you learn, you have three choices. Persevere if the data supports your hypothesis. Pivot if a different approach looks more promising. Kill it if the evidence is clearly against you. Most teams are too slow to pivot or kill — speed here is a competitive advantage.
How MVPs relate to product discovery
The MVP sits at the intersection of product discovery and delivery. Discovery work — user interviews, problem framing, prototype testing — informs what the MVP should test and how to scope it. The MVP is then built and released as a delivery artifact, but with the explicit goal of generating discovery insights.
Thinking of MVPs as part of continuous discovery, rather than a one-time launch event, keeps teams in a productive learning loop. Each iteration of the product is an opportunity to test new assumptions and get closer to a solution that genuinely works for customers.