    Finding your MVP

    Hey fellow Testers!
Do you know what an MVP is?

If you are a sports fan, you will probably say:

    “Sure I know, it is the Most Valuable Player!
    But what the heck does this have to do with testing?!“

    But I am not referring to “that” MVP…

    We are not looking for the Most, we are looking for the Minimal!!!

    In this context, MVP stands for something completely different:

    Minimum Viable Product

An MVP, then, is a product or feature that has just enough functionality to be released to users and still provide them value (that is why some people mistakenly call it the Minimal VALUABLE Product – not correct, but close enough to the meaning).

The idea is to release the product quickly, and then closely evaluate user feedback (both direct feedback and indirect feedback via usage monitoring) so that we continue developing the feature or the product in the direction users actually want.

The objective is to take the “guessing part” out of Product Management & Design, by letting users tell us directly what advanced functionality they want in our products.

    This way we waste less time adding features that people do not need or will not use.


How do we know how little the Minimal should be?

    This is a tricky question, as it will still be a matter of discussion between all parties involved. Believe me, it always is!

Developers will want to do as little as possible, since this means less work for them and less risk.

    Product Owners will want to add more functionality, as they already know (or at least so they think!) what is needed based on the countless users they have talked to already.

Sales and Support will prefer to release something “complete” and not something that looks half developed. After all, they are the ones who will need to explain to users why we deliberately chose to release something even though we know it still requires more functionality.

Quality…? Well, this is where we can provide valuable input. The idea is for us to help reach the delicate balance between releasing JUST ENOUGH functionality to give users value, while leaving room to learn what additional functionality users want us to add.

    It is not trivial, but as internal customer advocates, and by understanding the usage patterns together with the functionality of the system, we testers have the knowledge needed in order to help the team make this complex decision.

    Preparing the feature to capture the feedback from our users

    OK, so you managed to define with your team what the Minimum Viable Product is, and to release it.

    You are Done! Right?

The whole idea of this exercise, or this type of release, is to enable the team to gather feedback on how to continue developing the product, and so you need to make sure the product is “ready” to provide that feedback and the team is ready to collect it.

Always remember that feedback can come in a number of forms and through different channels. The simplest and most immediate feedback you can capture is whether people are using the new feature. Make sure to separate the users who checked the feature out once (click-click and done!) from those who adopted it as part of their work.

Feedback can be direct, when you ask customers to tell you what they think, or indirect, when you measure the operations and actions they perform in your application using approaches like telemetry and log analysis.

    Most times you will want to get both.
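To make the “tried it once vs. adopted it” distinction concrete, here is a minimal sketch of how that split might be computed from raw telemetry events. Everything here is a hypothetical assumption for illustration: the event schema, the user names, and the criterion of “active on at least two distinct days” as a stand-in for adoption. A real pipeline would have its own schema and its own definition of adoption.

```python
from datetime import date

# Hypothetical telemetry events for the new feature: (user_id, event_date).
# In a real system these would come from your log/telemetry pipeline.
events = [
    ("alice", date(2024, 5, 1)),
    ("alice", date(2024, 5, 2)),
    ("alice", date(2024, 5, 9)),
    ("bob",   date(2024, 5, 1)),   # tried it once and never came back
    ("carol", date(2024, 5, 3)),
    ("carol", date(2024, 5, 15)),
]

def classify_users(events, adoption_threshold=2):
    """Split users into one-time triers vs. adopters, based on the
    number of distinct days on which they used the feature."""
    days_per_user = {}
    for user, day in events:
        days_per_user.setdefault(user, set()).add(day)
    tried_once = {u for u, d in days_per_user.items() if len(d) < adoption_threshold}
    adopted = {u for u, d in days_per_user.items() if len(d) >= adoption_threshold}
    return tried_once, adopted

tried_once, adopted = classify_users(events)
print("Tried once:", sorted(tried_once))  # ['bob']
print("Adopted:", sorted(adopted))        # ['alice', 'carol']
```

The useful signal is usually not the raw event count but repeat usage over time, which is why this sketch counts distinct active days rather than total clicks.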

Preparing the users to give feedback

    An important point to remember is that customers do not work for you – it is usually the other way around!

    And so you cannot expect them to simply go ahead and answer your emails when you ask them to provide feedback on the feature you just released.

Don’t get me wrong, some will be more than happy to provide you with feedback, comments, and even short novels explaining what they felt and thought about the feature (and when they do, you had better honor their time and commitment by responding to their points and questions), but most users will simply disregard your emails asking for feedback – after all, they have real work to get done.

A great way to get this feedback is by letting users know why their input is important, and by explaining that this is the best time to request things that can still be implemented as part of the feature. Later on, when they finally get around to telling you what they want, it may be too late to include their wants and needs.

Also, make the request as personal as you can, so that they understand why you are asking them and not just any of your many other users.

    Expanding Quality to cover more aspects of customer happiness

This topic is another example of a larger theme I have been talking about lately: how we can expand our job to cover more areas and ensure the happiness of our customers even more.

    I hope it is already clear by now to most of you that Quality is not (only) the lack of bugs or the response time of your web app. These are important elements of your user experience, but not the only ones.

    Quality is Customer Happiness.

And by helping shape features with the functionality users actually want to use, we are helping our team expand the Quality of our products.

About Joel Montvelisky
    Joel Montvelisky is a Co-Founder and Chief Solution Architect at PractiTest. He has been in testing and QA since 1997, working as a tester, QA Manager and Director, and Consultant for companies in Israel, the US, and the EU.
    Joel is a Forbes council member, and a blogger. In addition, he's the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report, and a Director at the Association of Software Testing. Joel is a conference speaker, presenting in various conferences and forums worldwide, among them the STAR Conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.
