Category: Product

I write about product – subjectively, honestly, and based on my own experience.

  • Everything’s a Gamble: Validating Your Backlog with Experiments

    Introduction

    Validating product ideas through experimentation is a crucial practice in product management. Rather than making assumptions about what users want, product managers must treat everything in the backlog as a hypothesis that needs to be tested.

    The lean startup methodology emphasizes the importance of getting out of the building and testing your ideas with real users. As Steve Blank says, “No facts exist inside the building, only opinions.” Rather than developing products based on hunches and internal discussions, we need to verify our assumptions by running experiments that involve target users.

    This validation mindset is key because we often think we understand our users, but we can be wrong in our assumptions about their problems, needs, and behaviors. Running quick experiments allows us to collect real data on how users respond to potential solutions. This reduces risk and ensures we build products that effectively serve user needs.

    Validating ideas through experimentation is not just about avoiding failure – it helps companies pivot faster and identify winning products sooner. By testing product concepts early and often, we can focus energy on the ideas that have the most potential to delight users and achieve business goals.

    Everything in the Backlog is a Hypothesis

    Product managers often make the mistake of treating everything in their backlog as facts and certainties, rather than assumptions and hypotheses that need validation. The truth is, every new product idea, every feature on your roadmap represents a hypothesis about what will bring value to users. You believe that building that feature will drive business outcomes like increased engagement, retention or revenue. But until you test that assumption with real users, it remains just that – an unproven hypothesis.

    Approaching your backlog with the mindset that “everything is a bet” is immensely powerful. It forces you to question your assumptions and prevents you from wasting time building features users don’t want. The core principle is that you should test your hypotheses early and often through experiments, not build out your roadmap based on hunches. Prototyping and releasing minimum viable versions allows you to validate what resonates with users, so you can double down on what delivers results. With an experimentation mindset, you turn ideas into facts.

    Validating Through Experimentation

    Every product hypothesis needs validation through experimentation. Rather than guessing what users want, product managers should design and run experiments to test assumptions.

    There are several types of experiments that can validate hypotheses:

    • Prototype testing: Create a prototype of a feature or product and get feedback from target users. This can range from low-fidelity sketches to clickable prototypes. Observe how users interact with it and incorporate feedback into the next iteration.
    • Landing page tests: Build a landing page describing the product and drive traffic to it from target customer segments. Measure conversion rates, clickthroughs, signups, etc. to gauge interest.
    • A/B testing: Release variant versions of a product or feature to subsets of users. Analyze the usage data to identify which variant better achieves the desired metric.
    • Email/ad campaigns: Run focused email campaigns or online ads for the product concept and track engagement. Are people clicking through or signing up?
    • Exploratory user research: Interview or survey potential users about the product concept. Gauge their enthusiasm, understand pain points, and clarify the target market.
    • Beta tests: Release an early product version to a limited set of users. Collect feedback, monitor usage metrics, and gain insights to improve the product before a full launch.

    The key is to identify your biggest assumptions and focus experimentation efforts on validating those product hypotheses first. Using data to make decisions builds confidence in product direction and improves the chances of success.

    Determining Key Metrics

    Choosing the right metrics to measure experiments is critical for understanding if a feature or change had the intended impact. Rather than relying on vanity metrics like clicks or downloads, focus on metrics tied to core business or user goals.

    For example, if the experiment involves a new sign-up flow, measure metrics like sign-up conversion rate, drop-off at each step, and the quality of new users. If testing a new recommendation algorithm, measure engagement, clicks/orders per user, and revenue per user.

    Ideally, have a small set of quantitative metrics that map to overall objectives. Be specific in defining each metric and how it will be calculated prior to running tests. Avoid vanity metrics that seem positive but don’t actually indicate performance. Track metrics over both the short and long-term to account for changes over time.

    Set clear hypotheses and target metric thresholds for each experiment. For example, aim to increase the landing-page conversion rate by 10%, or to get 5% more users to regularly engage with a new feature. This helps you interpret results and separate meaningful changes from statistical noise.
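    To make such a threshold check concrete, here is a minimal Python sketch. The experiment numbers are hypothetical, and the `two_proportion_z` helper is my own illustration, not a library function:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts of control (A) and variant (B)
    with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical landing-page test: 200/4000 conversions on control,
# 260/4000 on the new variant
p_a, p_b, z, p_value = two_proportion_z(200, 4000, 260, 4000)
relative_uplift = (p_b - p_a) / p_a  # compare against the 10% target
```

    A small p-value combined with an uplift at or above the target suggests the change is meaningful rather than noise.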

    Prioritizing Experiments

    When it comes to experimentation, you can’t test everything at once. You’ll need to prioritize which hypotheses to validate first. Focus your experiments on the biggest risks and assumptions in your product roadmap.

    For example, if you’re planning a major new feature but aren’t sure how customers will respond, test demand for that feature before fully building it out. Or if you’re redesigning your signup flow, test the new flow against the old one before rolling it out completely.

    Prioritize experiments that have the potential to make the biggest impact. Look for assumptions that, if proven wrong, would significantly influence your roadmap and strategy. Test those risky hypotheses first to avoid wasted effort and build confidence as you move forward.

    Some key areas to focus experiment prioritization:

    • New features with high dev investment
    • Significant changes to core flows
    • Redesigns of critical pages
    • Major marketing and go-to-market initiatives
    • Pricing changes or new business models

    By validating the biggest assumptions early, you can refine your roadmap, focus engineering capacity on proven solutions, and avoid costly false directions. Move fast by testing your biggest risks before you build.

    Running Effective Experiments

    When running experiments, it’s important to follow best practices to get valid, reliable results. Here are some tips:

    • Have a clear hypothesis. What do you think will happen and why? Spell out your assumptions. This focuses the experiment and helps interpret results.
    • Isolate variables. Change only one factor at a time so you know what caused the effect. If you change multiple things, you won’t know which impacted the outcome.
    • Use A/B testing. Split your audience into two groups – the control gets the current version, the experiment gets the change. This isolates the variable.
    • Choose relevant metrics. Pick metrics that will validate or invalidate your hypothesis. Focus on the key outcomes that matter.
    • Run enough iterations. Test until statistical significance is reached. For web experiments, often hundreds or thousands of users are needed.
    • Randomize users. Assign users randomly to groups to avoid sampling bias. Randomization ensures fairness.
    • Analyze results correctly. Use statistics, not gut feelings. Beware of things like novelty effects wearing off.
    • Learn and improve. No experiment is a complete failure if you learn something. Iteratively improve based on insights gained.

    Following structured best practices for setting up and analyzing experiments makes it more likely you’ll get valid results and actionable insights. With the right approach, experiments can inform smart product development.
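    "Run enough iterations" can also be estimated before the test starts. Below is a rough sketch using the standard two-sample approximation; the `sample_size_per_variant` helper and the baseline numbers are my own assumptions:

```python
import math

def sample_size_per_variant(p_base, mde, z_alpha=1.96, z_power=0.84):
    """Approximate users needed per variant to detect an absolute
    lift of `mde` over baseline rate `p_base`
    (two-sided 5% significance, 80% power)."""
    p_avg = p_base + mde / 2
    variance = 2 * p_avg * (1 - p_avg)
    n = variance * (z_alpha + z_power) ** 2 / mde ** 2
    return math.ceil(n)

# Hypothetical: 5% baseline conversion, detect a lift to 6%
n_per_variant = sample_size_per_variant(p_base=0.05, mde=0.01)
```

    With a 5% baseline and a one-point lift to detect, this lands in the thousands of users per variant, matching the rule of thumb above.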

    Analyzing and Learning

    Once an experiment is complete, it’s critical to thoroughly analyze the results and extract key learnings. This is the most important part of the process.

    Evaluate whether your hypothesis was proven or disproven based on the metrics you defined upfront. Dig into the data and try to understand why users responded the way they did. Look for any surprising or unexpected results.

    Some key questions to ask:

    • Did we observe the desired behavior change in our target segment? Why or why not?
    • How did the key metrics we defined compare to our hypothesis?
    • Are there differences we should analyze by segment, cohort, or attribute?
    • What user feedback or qualitative data did we gather from the experiment?
    • What worked well that we should amplify going forward?
    • What didn’t work that we should revise or remove?

    The learnings from each experiment build on top of each other, so make sure to document the results thoroughly. Look for patterns and insights that apply more broadly beyond the specific experiment. Track key learnings over time to continuously improve.

    Be sure to share results across your team and organization. Experiments are wasted if the lessons don’t lead to changes in strategy, priorities, and execution.

    Iterating Quickly

    A crucial advantage of validating hypotheses through experiments is the ability to learn and iterate quickly. Each experiment provides an opportunity to gain insights into what resonates with users and what doesn’t. As you run experiments, pay close attention to the results and feedback. Look for patterns and key learnings that can inform future iterations.

    Resist the urge to theorize and make assumptions. Instead, let the data guide you. If a hypothesis is invalidated, use that learning to update your thinking. If an experiment shows positive results, double down and expand on what’s working. Small tweaks and adjustments add up over time.

    Move fast, leverage learnings, and continually refine based on real user data. The faster you iterate, the quicker you home in on product solutions users want. Be nimble and flexible, evolving the product as you go. Don’t get stuck on a predetermined path; be open to pivoting based on new insights. Iterating quickly allows you to stay aligned with user needs even as they change over time.

    The key is to establish a rapid cycle of ideation, experimentation, learning and iteration. By implementing this build-measure-learn loop, you can iterate your way to product-market fit faster than the competition. Speed matters when it comes to innovation, so focus on quick experiments that drive continuous improvement. The faster you iterate, the faster you win.

    Avoiding Common Mistakes

    Conducting experiments effectively requires avoiding some common pitfalls that can undermine results:

    • Confirmation bias – Looking only for data that confirms your hypothesis, and ignoring contradictory data. Remain objective and acknowledge all results.
    • Small sample sizes – Testing with too few users leads to variability and inaccurate conclusions. Determine minimum sample sizes upfront for statistical significance.
    • Changing multiple variables – Altering more than one thing at once makes it impossible to know which change impacted the metrics. Isolate each variable and test them independently.
    • No control group – Having a baseline to compare against is crucial. Run A/B tests or keep part of your product unchanged as a control.
    • Stopping too soon – Ending an experiment prematurely, before collecting enough data, can miss long-term effects or trends. Run tests long enough to achieve statistical confidence.
    • No actionable metrics – Focusing on vanity metrics that don’t directly measure outcomes. Define quantifiable, meaningful metrics aligned to key goals.
    • Not testing repeatedly – One-off tests in artificial environments provide limited value. Build a culture of continuous experimentation.

    Proactively avoiding these missteps leads to higher-quality results from experiments that validate product hypotheses. Failing fast and turning failures into proven learning is the desired outcome.

    Conclusion

    Taking an experimental approach to product development is critical for product managers. Rather than assuming that every idea and feature will be successful, product managers should view their backlogs as a series of hypotheses that need validation.

    By designing and running experiments, product managers can test key assumptions and gain valuable insights into what resonates with users. This enables more informed product decisions, reducing waste and increasing the chances of shipping something customers truly want.

    A validation mindset also encourages rapid iteration. Failures become learning opportunities rather than setbacks, as experiments reveal areas for improvement. Product managers can quickly pivot based on user feedback, optimizing the product experience over time.

    In today’s competitive landscape, winning products come from validating ideas early and often. Product managers who embrace experimentation are better equipped to identify and double down on what delivers real value. While experimentation takes work, the payoff is immense. Validated learning leads to customer-informed products that solve real problems and satisfy market needs.

    By treating everything as a testable hypothesis, product managers can focus their efforts on creating products users love. And they can avoid wasted time and resources building features no one wants. Experimentation transforms product discovery from guesswork to a scientific, evidence-based process. For any product manager seeking innovation and growth, it is an indispensable approach.

  • Product Development Philosophy: Stoicism

    To paraphrase Epictetus: The chief task in product development is to identify and separate matters so that I can clearly say which are externals not under my control, and which have to do with the choices I actually control. Where then do I look for focus? Not to uncontrollable externals, but within myself to the choices that are my own…

    Focus

    Be aware of external factors and monitor them. However, focus your attention in product development on the metrics you have control over. Find the relationships between these metrics, but keep working on the internal ones. That is where the impact-to-effort ratio is most significant.

    Hypotheses or wishes

    It’s hard to build hypotheses based on external factors. How would they be verified? Non-verifiable assumptions are wishes rather than hypotheses. There is no point in conjuring up reality.

    Purity of product metrics

    We should select metrics for our assumptions in such a way that we can be sure our assumptions are verifiable. The more metrics, the better. I realize that sometimes there is not enough data. That is when it makes sense to move your focus to where the data will be. If there is no data, why are we focusing on this hypothesis or assumption? Isn’t it better to move to where the greatest impact will be?

    Say “No” and gain more focus

    If you agree to everything, you will be distracted from the main path. Gain more focus: say “No” to stakeholders and feature ideas. Say “No” to your own gut feeling. Write everything down and prioritize it in its own time. Then focus on what is really needed.

    To Summarize

    Focus on what you can influence. Execute your assumptions slowly and without panic. Take small enough steps to avoid tipping over. Granularize your hypotheses and expectations. Step by step. This brings us seamlessly to the Kaizen philosophy in product development. There will be more about that some other time.

  • Product vs. 100 duck-sized horses

    One of my favorite questions in online AMAs is: would you rather fight one horse-sized duck, or a hundred duck-sized horses? As flippant as the question may be, I also ask it in my job interviews – as a follow-up question. What kind of answer do I expect? A hundred duck-sized horses, of course. Why?

    Iteration, iteration, iteration

    It’s better to face smaller challenges than one big one. If you have read my previous blog posts, you know I have already explained the advantages of this approach. You have more flexibility. Here, the problems are also repeatable, and as a result we can approach each subsequent clash smarter – and it’s all thanks to:

    Learning and drawing conclusions

    There is nothing wrong with being wrong. The problem begins when mistakes are repeated. Therefore, it is essential to draw lessons and action points for the future, so as not to make the same mistakes again. If the source of the mistake is external – as with those proverbial duck-sized horses – then we simply approach each battle wiser.

    Estimating

    How long will it take to fill an Olympic-sized pool with a garden hose at constant pressure? If you think about the challenge as a whole, it’s hard to say. Thanks to iteration, we can determine this accurately: the Olympic pool holds X liters of water, and a garden hose expels 10 liters of water in Y seconds. The rest is math.

    Of course, not every effort can be broken down mathematically. Sometimes we just need to use our imagination, and sometimes we must try something first to build an idea of the resources it will consume. There is nothing wrong with that – quite the opposite. Build a hypothesis and validate it, but keep metrics in mind.
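    The pool arithmetic can be written out with assumed numbers. The 2,500,000 L volume and the 10 L per 30 s flow below are illustrative guesses, not measurements:

```python
# Illustrative guesses: an Olympic pool of ~2,500,000 L
# (50 m x 25 m x 2 m) and a hose expelling 10 L in 30 s.
POOL_LITERS = 2_500_000
LITERS_PER_STEP = 10
SECONDS_PER_STEP = 30

flow_rate = LITERS_PER_STEP / SECONDS_PER_STEP  # liters per second
total_seconds = POOL_LITERS / flow_rate
total_days = total_seconds / 86_400  # seconds in a day
```

    Swap in your own measured flow rate and the same three lines give a defensible estimate.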

    Experiments

    They are advisable in product development, but it’s important to approach experiments in a structured way. Define appropriate success metrics, timeframes, etc., and don’t be afraid to say “I was wrong.” If, on top of that, you prioritize your experiments and make sure they don’t distort each other’s metrics – you have a ready-made approach to Idea Backlog validation.

    Summary

    Break problems down into smaller ones until you can quantify and prioritize them. Draw conclusions and build experiments, and in this way you will be able to defeat any stakeholder. I mean horse 🙂

  • RICEH prioritization model

    One product, many ideas – sound familiar? Fortunately, there are many prioritization models. You can read about the five basic ones in the nngroup article. One of them is the RICE model, which scares many people off with its math and fractions. Completely unnecessarily.

    Classic RICE Model

    There are plenty of articles that explain this in detail, but for the record: you evaluate all your features in the following categories:

    • Reach – the assumed number of users affected in a specific time, e.g. 10k users per month. If all features concern the same cohort (e.g. new users), you can ignore the Reach aspect.
    • Impact – what impact this will have on our product (metric). There are many scales; I suggest 1–5.
    • Confidence – how certain you are that this feature will help achieve that impact, based on data, research, reports, and experience. E.g. 80%.
    • Effort – how much time we need to implement this feature, including business, design, and development. Depending on the level of detail, you can estimate in man-days, -weeks, or -months, or on a scale of 1–5.

    (Reach x Impact x Confidence) / Effort = RICE Score

    Multiply the first three numbers together, then divide the result by Effort. The number obtained this way gives you the value of your feature. If you do this for each feature, you can prioritize them all.
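    As a minimal sketch (the feature names and all scores below are hypothetical), the calculation and ranking look like this in Python:

```python
def rice_score(reach, impact, confidence, effort):
    """Classic RICE: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical backlog items
scores = {
    "new onboarding": rice_score(reach=10_000, impact=3, confidence=0.8, effort=4),
    "dark mode": rice_score(reach=6_000, impact=2, confidence=0.5, effort=2),
}
ranked = sorted(scores, key=scores.get, reverse=True)
```

    Sorting by score gives the prioritized order of the backlog items.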

    Everything is fine, but what if it fails?

    A hypothesis can be a failure

    Not every hypothesis is correct. Being wrong is part of product work. It is important to document your metrics well and draw conclusions for the future. The key here is to understand that rolling back a hypothesis-driven change is not always a simple process.

    The cost of rolling back a hypothesis has several components: user experience, business (operational), and development. Depending on these components, the cost can be high, and it should also be included when prioritizing hypotheses.

    The rollback cost of a Hypothesis lowers its value

    A well-implemented hypothesis should have its user cohort and experiment selected so that the cost of a rollback is minimal. That is the perfect-world case. In reality, we know how it goes ;)

    Where does the cost of a hypothesis rollback fit in the RICE model? Like Effort, it lands on the negative side (in the denominator). However, it is not the same as Effort. Moving on to the point:

    (Reach x Impact x Confidence) / (Effort x Hypothesis Rollback) = RICEH Score
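    A minimal sketch of the adjusted formula (the `riceh_score` name and the numbers are mine, for illustration only):

```python
def riceh_score(reach, impact, confidence, effort, rollback):
    """RICEH: the rollback cost joins Effort in the denominator."""
    return reach * impact * confidence / (effort * rollback)

# The same hypothetical feature, scored with a trivial vs. a hard rollback
easy_undo = riceh_score(10_000, 3, 0.8, effort=4, rollback=1)
hard_undo = riceh_score(10_000, 3, 0.8, effort=4, rollback=3)
```

    The same feature’s score drops threefold once a hard rollback is priced in, which is exactly the intended effect.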

    In visual form, it looks like the picture below. See also the FigJam template.

    RICEH prioritization template

    How to conduct a prioritization workshop?

    Remotely and efficiently 🙂 Just like the classic RICE. I don’t want to reinvent the wheel here. In brief:

    Participants: the workshop should be attended by people who are familiar with the topic and have the competences to estimate all the items, at least roughly. Asynchronous work with the features before the workshop helps.

    Workshops can be divided into the business part (the first columns) and the rest, but that is optional. It is better to do it holistically, in three steps.

    1. Alignment of knowledge
    about the features and determination of Reach. If there is space, you can discuss here, but it is also worth saving the discussion for the estimation step. See below.

    2. Voting, Scrum-style
    Everyone estimates in secret and reveals their estimate on a given signal. Those with extreme estimates should share their rationale.

    3. Adding Up Points
    Here’s where math comes in, but luckily we have calculators.

    RICEH is not a perfect method for prioritizing hypotheses

    This variant of RICE has its drawbacks, but in my opinion it is much more tangible than the classic impact/effort matrix. What matters is to use prioritization methods and to discuss. Discussion in product work should be structured and focused on outcomes and development.

    If you use the RICEH method – please give me honest feedback. Preferably on Twitter.

  • Decision matrix template

    Summary: you will find the template in Figma. The decision register can be a simple table. You will definitely need to make some changes over time. Go ahead – act, experiment. Do whatever you can to improve product development.

    Decisions in the product lifecycle

    Sooner or later, developing the product will put us at a crossroads. In a perfect world, we would have a complete set of data to make the right decision. In reality, we have to work with what we have. Then it is worth gathering the resources we do have in one place. This can be done in a meeting or asynchronously.

    Types of decisions

    There are many methodologies for classifying decisions, but I do not want to split hairs here; for the purposes of this article, I will choose the simplest one. Put simply, we have two types of decisions: developmental and safety-related.

    Development decisions are those in which we choose the direction of further development. They should follow strictly from the strategy and be based on data and research.

    Safety decisions are made more often “ad hoc”, as a result of unforeseen events. Their task is often to minimize losses and risks. These decisions are most often made based on experience and intuition, but the process still needs structure.

    Decision Matrix

    Based on the discussions I have participated in and on other workshop models, I developed a decision matrix. Nothing revolutionary, but in my opinion a good starting point. In brief:

    • List all the options.
    • Check whether we have data we can refer to.
    • Define the advantages of each solution.
    • Define the negative consequences.
    • Ask whether we can minimize those consequences.

    … and that’s it.
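    A minimal sketch of the matrix as plain data (every option, note, and field name here is hypothetical), which can double as rows in the decision register:

```python
# Every option, note, and field name here is hypothetical.
decision_matrix = [
    {
        "option": "build the integration in-house",
        "data": "support tickets: 30% of churn comments mention this gap",
        "pros": ["full control", "fits the roadmap"],
        "cons": ["three months of development capacity"],
        "mitigation": "ship a minimal scope first",
    },
    {
        "option": "buy a vendor solution",
        "data": "no usage data yet",
        "pros": ["live in two weeks"],
        "cons": ["recurring license cost"],
        "mitigation": "negotiate a trial period",
    },
]

# Once decided, the row becomes an entry in the decision register.
decision_log_entry = {
    "decision": "buy a vendor solution",
    "decided_by": "PM, after a workshop with the team",
    "consequences_reviewed": True,
}
```

    Whether this lives on a whiteboard or in a script matters less than keeping all five columns answered for every option.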

    Decision Matrix Template

    We have all the arguments in black and white, or in color on a whiteboard.

    The decision and its consequences

    The decision is made in the awareness that all topics have been raised. In meetings, one person makes the decision; in asynchronous work, it is sometimes made collectively. It is important to note who made which decision and put such a note in the decision log.

    A decision register is essential

    The register may seem like redundant documentation, but it’s better to spend a few minutes archiving your decisions than to come back to the same topic every six months. This is especially important in asynchronous work, where over-communication is crucial.

  • Timebox your designers’ work. Always

    This quite controversial title is not clickbait. I know and understand what the design process is, and I am aware that the more time a designer has, the better the solution they can provide. Nevertheless, I tend to impose a time frame on the design team.

    The product must move forward

    Every design can be polished endlessly. The sooner we validate our assumptions, the better. Continuous delivery is product development in line with the Kaizen philosophy: minimal but steady progress. The sooner we start relying on quantitative data from production and feedback from real users, the better.

    Designing a continuous process?

    Yes, I fully agree with that – but designing without granulation and delivery is a process that does not translate in any way into an improved user experience. Know when to say “hands off” and leave some questions and hypotheses for the next iteration. Mockups and prototypes have no value for users. Good user experience happens only when the user encounters the design in real use.

    Documentation and structure of the design process

    Does this mean YOLO design and the so-called pixel monkey? NO!!! A well-documented design process and a permanent repository of test reports help us with the quality of the solutions we deliver. This way you reduce the time needed to redo discovery and to ask questions that have already been answered. Such documents are a great starting point for further work.

    How much to limit the time of the UX team?

    Here is your favorite answer: “It depends…” There is no universal rule. The timebox should be worked out jointly by the PM and the design team, with full understanding that the goal is to increase the value of the product as quickly as possible while setting up further work. That’s why we employ smart people – to ask them for their opinion 🙂

    Summary

    The “product guy” and the UX team should build a shared understanding of users’ current problems and pain points together. Find quick fixes and break large chunks of work into granules, so that the user iteratively gets a better and better product. Step by step…

  • Don’t be a product monk

    In the 16th century, an anecdote was popular about monks debating the number of teeth in a horse’s mouth. The discussion was supposed to have taken place two centuries earlier.

    And so a group of monks argued about how many teeth a horse had. The discussion lasted several days, and three factions emerged. Citing Aristotle and the “ancient texts”, they put the number of teeth at 30, 45, or 50. After several days of fierce debate, a young monk suggested going outside to check how many teeth the horse actually had.

    The proposal to verify the assumptions sparked outrage among the elders. They beat the young monk and threw him out the door and, after several more days of theological debate, came to the same conclusions as before.

    I could add a few paragraphs about validating assumptions and using data. But why? This story needs no commentary. Don’t be an old monk.

  • Product philosophy – decision theory

    Product strategy should be developed on the basis of many factors, but it sometimes requires a flexible approach. At some level of granularity, you have to make smaller or larger decisions.

    According to Aristotle, to make a correct decision one should pay attention to three aspects. Twenty-three centuries later, this is still relevant; in product terms, it is downright advisable. And so, to make a good decision, we need time, data, and experts.

    Time is an ally of good decisions

    As bizarre as it may sound, the more important the decision, the more it should be delayed. The classic “sleep on it” can really help. Procrastination may seem like a bad strategy, but only if you do nothing with the time. How should we use it? According to Aristotle, we should do at least two things:

    Always verify information

    Data. Important decisions should not be based on guesswork. A decision is a reaction to events and facts in the direct or indirect environment of our product. We should be sure that the data forcing our reaction is true.

    Verification should also mean reaching for your own information resources and your own research: open reports, quantitative data, insights from qualitative studies. It should always be at your fingertips.

    Consult with experts

    Data over-interpretation, cherry-picking… There are a lot of cognitive and decision-making errors, and to stay aware of them you need an objective, broad perspective. Deciding alone should be the last resort.

    A guy in a turtleneck once said that he didn’t hire smart people to tell them what to do. Let’s go a step further: you employ smart people, among other things, so that they advise you and tell you what to do. If you trust your own decisions so much, trust yourself to have hired smart people who also make good decisions.

    Summary

    Product strategy is not built “ad hoc”, and neither should the roadmap be changed that way. The decision-making burden should be spread over the time we spend on data verification and consultation with experts. It seems obvious, but practical application can be another matter.

  • Automotive Product Experience – Seat Leon 2021

    I’ve been a fan of the Seat brand for a very long time. I recently switched from a 2018 Seat Leon to a 2021, and I haven’t been this disappointed in a long time. To be clear, the car is great – the new-car smell and all. I’ve had it for less than 3 weeks / 2,000 km. But this isn’t about the novelty effect; it’s about the functional solutions and the experience they evoke. Below are my subjective thoughts and frustrations about switching to a model 3 years younger. I will try not to bore you.

    Everything tactile

    Everything, including the heating and volume controls, is touch-operated and supposed to create a wow effect. In my opinion, this is a very misguided idea. It’s certainly comfortable in sterile conditions, but not on the road. The most bizarre part is that these touchpads are completely unlit, so night driving becomes a game of hide and seek. Downright dangerous. The interior lights are touchpads too. So are the fog lamps. The whole touch-only idea works great when stationary. On the road – very much not.

    Operating system

    Let’s not kid ourselves: the built-in e-SIM and touch panel were supposed to be among the assets of the new car. Unfortunately, that is not the case. The operating system is… very slow. I feel like I’m using an Android phone from a few years ago. While we’re on the system: the keyboard has a German layout (z/y). Beautiful.

    The built-in navigation is a small tragedy. It’s time to admit my mistake and go the Google/Apple Maps route. Seriously. Plus the popups I can’t dismiss… etc., etc.

    Information Architecture

    A lack of structure and hierarchy. An example – the gear shift assistant signals shifting up or down not with the font size (as it did in the 2018 vintage), but with the direction of an arrow, which is pure filigree. To preempt questions – I know when to change gear, but if the assistant relies on a pixel-sized arrow, it might as well not be there. Another example? Horizontal navigation in the touchpad is indicated with arrows, vertical navigation with dots.

    Qualitative errors

    QR codes redirecting to a 404 page. Internet radio that doesn’t work, and neither do the predefined applications (such as Tidal). Someone might say these are details, and quality errors can happen to anyone. Probably so, but they also translate into the overall perception of the product. That’s why I mention them.

    Seat Connect application

    Here it is bad on many levels. It’s a kind of patchwork monstrosity. An example? When registering, I have to declare a form of address (in 2021? – really?). The various screens are extremely inconsistent (checkbox on the left or on the right – carpe diem!). There are many examples, but I was most amused by the fact that with localization: Poland and language: English, the application panicked and started working in Spanish 🙂 Oh, and the email communication is a shot to the back of the head of decency.

    Android Auto works on the cable

    A little frustrating. There are plenty of workarounds online to get past this; I don’t understand why one of them can’t be factory-installed. But fine, I would have swallowed it if someone had thought about where I should put the connected phone so that it would not interfere with the gear shift. On a more interesting note, this Android Auto connection informs me of the phone limit. A bug turned into a feature?

    Car key

    The car key is the size of a caster. In the age of minimalism, this is a slight travesty. All the buttons have the same tactile design, so from a distance I am unable to open the car without taking the keys out of my pocket. It is also worth mentioning that the key fob mount is based on the emergency part of the key…

    Offline also to be improved

    The cupholders are the size of a small espresso – two medium coffees no longer fit. Oh, and forget about sliding the armrest forward if you have large drinks. There is no such option. You have to choose: coffee or driving comfort…

    To sum up – it’s bad

    If this direction in the product approach is maintained, my love for Seat will smoothly turn into Stockholm syndrome. I will never again take a car without a test drive 🙂 And lest it seem that I’m only complaining: the ergonomics of the steering wheel have strongly improved. The thicker rim, the navigation controls, the flat bottom – love it. On top of that, the side pockets in the doors finally accommodate liter bottles.

    I will probably discover more pluses. But it was the downsides (minor or critical) that made me write this post instead of enjoying the new car 🙂

  • Prioritize your product. One priority

    In this post, I touch on the problem of diluting a product by assigning too many priorities at once. The lack of a strategy and a consistently executed plan ends in excessive fragmentation of the product. How we approach metrics also matters. First things first.

    What is priority

    The word priority entered English in the 15th century. For several hundred years it functioned only in the singular form, meaning the first, most important, or prior thing. It wasn’t until the 1900s that we pluralized the term and started talking about priorities.

    Since the beginning of this century, the singular priority has been used less and less. After all, we so badly want to achieve multi-level success, to do several things at the same time – both in the personal sphere and with our product. But multi-level success cannot be pursued in several directions at once. It is also worth noting that the word success occurs mostly in the singular.

    Kanban has a limit of “In progress”

    I don’t know if you still remember, but kanban boards have a feature called the “In progress” limit – a cap on the number of tickets in that column. Yes – a long time ago it was enabled by default in task management tools. Now it’s optional. Multitasking rules. But does it really?

    There’s a hell of a lot of research on how multitasking kills productivity. It applies to the work of an individual, but it also carries over to the shape of the product. The product should implement a correctly defined strategy.
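    The WIP limit described above is mechanically very simple – it is just a guard on moving a card into a column. A minimal sketch, not tied to any real tool (the column names and the limit value are illustrative):

    ```python
    # Minimal sketch of a kanban column with an "In progress" (WIP) limit.
    # All names and values are illustrative, not taken from any real board tool.

    class Column:
        def __init__(self, name, wip_limit=None):
            self.name = name
            self.wip_limit = wip_limit  # None means no limit
            self.cards = []

        def add(self, card):
            # Refuse the move once the limit is reached: finish something first.
            if self.wip_limit is not None and len(self.cards) >= self.wip_limit:
                raise RuntimeError(
                    f"WIP limit of {self.wip_limit} reached in '{self.name}'"
                )
            self.cards.append(card)

    in_progress = Column("In progress", wip_limit=2)
    in_progress.add("Feature A")
    in_progress.add("Feature B")
    try:
        in_progress.add("Feature C")  # a third card exceeds the limit
    except RuntimeError as e:
        print(e)  # the board pushes back instead of letting work pile up
    ```

    The point of the guard is exactly the point of the post: the board forces you to choose what matters now instead of starting everything at once.
    
    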

    One priority is not stagnation

    For clarity: the product can be developed in several directions at the same time. Of course it can – but sooner or later these directions or features will begin to intersect. That’s when priority should ride in on a white horse.

    Priority should be engraved in the minds of those responsible for the product. If you’ve read my previous entries, you know that everyone is responsible for the product. So everyone should know what we are doing, where we are going, and when a given stage of the journey will be completed. Like the three wise men traveling to Bethlehem. With this lame metaphor, we move on to the next aspect:

    North Star Metric – one metric to rule them all.

    The product must be measured, from beginning to end. How else would you define success? There can be dozens of metrics, but it’s worth finding the one. One metric to rule them all, one metric to find them. Such a metric exists, and there is a simple framework for arriving at it: the North Star Metric.

    I don’t want to elaborate on this topic – people smarter than me have done it much better, for example here. I mention this metric because it is the fucking compass for your product, and it should always point to the priority. One priority.

    Essentialism at the end

    Develop a strategy. Set a metric. Set a priority – one priority – and stick to it. Sure, you can charge at everything at once, but how long will you ride that cart before it crushes you? Essentialism isn’t just a brilliant way to arrange your life; it is also a great framework for a product strategy. But more on that later. 🙂