Category: AI

  • AI in Product Management: Q1 2026 Quick Rundown

    The first three months of 2026 confirmed what many PMs had already felt at the end of last year: AI has stopped being an add-on to work and become its scaffolding. The question is no longer whether you use AI in your day-to-day. It’s how deeply and how deliberately. Here’s what happened in Q1 and what it means for product managers.

    Numbers that define the moment

    94% of product professionals use AI regularly, with nearly half embedding it deeply into their workflows, saving 1-2 hours per day (Product School, 2026).

    ~25% of PM tools have meaningful agentic capabilities today. The rest are still copilots: they answer questions but don’t act autonomously (AI PM Tools Directory, February 2026).

    40% of enterprise applications will include task-specific AI agents this year, a Gartner forecast that is materialising faster than expected.

    Theme #1: The copilot era is ending, the agent era is beginning

    For the past two years, AI in a PM’s work functioned like a very good assistant: it answered when you asked, completed when you started, generated when you prompted. The human was always in the driver’s seat.

    Q1 2026 brings a clear signal that this model is ending. The industry is shifting hard toward agentic AI, systems that execute multi-step tasks across multiple tools at once (email, CRM, code repositories, documents) with minimal human intervention between steps.

    Chatbots answer. Agents act. That’s the line. And every time you cross it, the first question isn’t “what should the agent do?” It’s “where does a human stay in the loop, and what happens when the agent gets it wrong?”

    aipmguru.substack.com, March 2026

    Concrete moves from Q1: Microsoft in February released new Copilot Studio features enabling businesses to build and deploy autonomous agents across enterprise applications. OpenAI in March launched advanced agentic capabilities allowing agents to plan, reason, and act with minimal human oversight. The agentic AI market was valued at $4.54B in 2025 and is projected to reach $98B by 2033.

    The practical consequence for PMs is straightforward and uncomfortable at the same time: designing agentic products is an entirely different craft from designing classic features. You need to decide where the agent stops and asks. What counts as a critical failure versus something fixable after the fact. How a user understands what the agent just did on their behalf.
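To make that concrete, here is a minimal sketch of such an approval gate in Python. The action names and the reversible/irreversible split are illustrative assumptions, not any specific framework: the point is that irreversible actions pause for a human, and unknown actions default to stopping and asking.

```python
# Minimal sketch of a human-in-the-loop gate for agent actions.
# All action names here are illustrative placeholders.

REVERSIBLE = {"draft_email", "tag_feedback", "update_notes"}
IRREVERSIBLE = {"send_email", "charge_card", "delete_record"}

def execute(action, payload, approve):
    """Run reversible actions immediately; ask a human before irreversible ones."""
    if action in REVERSIBLE:
        return {"status": "done", "action": action}
    if action in IRREVERSIBLE:
        if approve(action, payload):          # the human stays in the loop here
            return {"status": "done", "action": action}
        return {"status": "blocked", "action": action}
    # Anything the agent doesn't recognise: stop and ask, never guess.
    return {"status": "needs_review", "action": action}
```

The design choice worth copying is the default: when in doubt, the agent halts rather than acts.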

    Theme #2: The PM role shifts from execution to orchestration

    If one sentence were to summarise Q1 from a PM’s perspective, it would be: AI is absorbing the operational substrate of product management so the PM can focus on what is irreducibly human.

    What AI is starting to take over in day-to-day work:

    • Feedback collection and synthesis. Tools like Dovetail, Kraftful, and Productboard AI automatically tag, classify, and surface patterns from hundreds of user conversations. PMs no longer spend 30% of their week building reports.
    • Documentation. PRDs, user stories, acceptance criteria, release notes are AI-drafted and PM-edited. Writing from scratch becomes an editing task.
    • Routine analytics. Funnel anomalies, cohort monitoring, regression detection happen automatically. PMs receive alerts, not raw data.
    • Backlog grooming. AI triages, deduplicates, and scores incoming requests against existing priorities.
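As a rough illustration of that triage step, here is a Python sketch. The similarity threshold and keyword scoring are deliberate simplifications of what real grooming tools do:

```python
from difflib import SequenceMatcher

def triage(requests, priority_keywords, threshold=0.8):
    """Deduplicate near-identical requests, then score the rest against priorities."""
    unique = []
    for req in requests:
        # Drop requests that are near-duplicates of one already kept.
        if not any(SequenceMatcher(None, req.lower(), u.lower()).ratio() > threshold
                   for u in unique):
            unique.append(req)
    # Naive score: how many priority keywords a request mentions.
    return sorted(unique,
                  key=lambda r: sum(kw in r.lower() for kw in priority_keywords),
                  reverse=True)
```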

    What follows: PM value concentrates elsewhere. According to Airtable research, 92% of product leaders now own revenue outcomes, more than double from just a few years ago. The PM is no longer a roadmap and feature manager but someone who connects business strategy, AI system capabilities, and user needs.

    When building software is cheap because of AI, the most expensive thing you can do is build the wrong thing. That puts the PM at the centre of the business.

    Techcanvass, February 2026

    Theme #3: Vibe coding changes prototyping for good

    Andrej Karpathy coined the term and Collins Dictionary named it Word of the Year 2025: vibe coding, describing what you want to build in natural language and iterating through AI instead of writing syntax.

    In Q1 2026 this stopped being an experiment. PMs without a technical background are building working prototypes in Cursor, Bolt, Windsurf, or Lovable in hours. Slack operates with small cross-functional squads (sometimes one designer, one engineer) that use AI to prototype constantly and discard dead ends without hesitation.

    The practical consequence for discovery: the cost of testing a hypothesis has dropped dramatically. If your validation process still assumes several weeks to build an MVP with the dev team, you likely have a learning speed problem, not a resource problem.

    Miro AI is also changing product workshops: it automatically clusters sticky notes, identifies patterns from brainstorming sessions, and transforms unstructured discussions into structured insights without manual processing after every meeting.

    Theme #4: Product explainability — you’re now optimising for AI, not just humans

    This is one of the less obvious but significant signals of Q1. Amy Mitchell (Substack) points to a new PM responsibility: products are increasingly evaluated by AI systems before a human ever interacts with them. Search, recommendations, comparisons, and purchasing guidance now happen through AI-mediated answers.

    Product explainability is the degree to which a product clearly communicates its purpose, value, behaviour, and limits, both to people and to AI systems. If AI doesn’t understand your product, it won’t recommend it in the right context.

    In practice this means growing pressure to structure your product knowledge base in a machine-readable way, keep feature descriptions current, and expose product knowledge as structured data accessible to AI.
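For example, exposing even a minimal Schema.org Product description as JSON-LD makes the same facts machine-readable. This Python sketch uses hypothetical product details; the field names follow Schema.org conventions:

```python
import json

def product_jsonld(name, description, price, currency="USD"):
    """Emit a Schema.org Product snippet that AI systems can parse directly."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "offers": {"@type": "Offer",
                   "price": str(price),
                   "priceCurrency": currency},
    }, indent=2)
```

Embedded in a page as a `<script type="application/ld+json">` block, a snippet like this is what an agent reads instead of your hero banner.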

    Theme #5: Data debt is the new technical debt

    Q1 clearly surfaced a new area of PM responsibility: the quality of data feeding AI models. In the past, technical debt meant messy code. Today, “data debt” means poorly labelled datasets that cause model hallucinations, faulty recommendations, and flawed priorities.

    The PM is becoming the de facto guardian of data quality entering the AI system. If the input data is biased, the product will be too. This is not an engineering problem. It is a product decision.

    According to Deloitte data, only 11% of organisations are actively using AI agents in production. 42% are still developing their agentic strategy roadmap, and 35% have no formal strategy at all. One of the main reasons: legacy systems were not designed for agentic interactions, and data pipelines are too messy for AI to operate on autonomously.

    Tools standing out in Q1

    Productboard AI — intelligent roadmap recommendations and automatic feedback synthesis across channels. Strong integration with engineering workflows.

    Linear AI — goes beyond task management: automatically creates and assigns issues from product discussions, generates sprint summaries and release notes from commit history. Becoming the execution layer for product strategy.

    Miro AI — product workshops for distributed teams. Automatically clusters ideas and transforms brainstorming sessions into structured insights.

    Innerview — automatic transcription and analysis of user interviews in 30+ languages, with AI-generated summaries and pattern identification. Reduces analysis time by ~70%.

    Bagel AI — niche but precise: links qualitative feedback directly to business context (revenue risk, customer segments, churn exposure). Addresses a gap most prioritisation tools ignore.

    Perplexity Comet — automates competitive research, pulling data into spreadsheets without manual collection. Significantly reduces weekly time spent on competitive intelligence.

    What this means for you as a PM

    Three things worth thinking about heading into Q2:

    1. Is your discovery fast? If validating one hypothesis takes weeks, you have a learning speed problem, not a resource problem. Vibe coding and AI prototyping tools should now be part of your standard discovery toolkit.

    2. Do you know where the agent should stop and ask? If your product uses or will soon use agentic AI, the key design question is not “what does the agent do” but “where does the human stay in the loop.” This requires deliberate UX decisions, not just technical ones.

    3. Is your data ready? AI agents need clean data to operate autonomously. If your data layer is messy, no agent will extract value from it. This is now a product problem, not just an engineering one.

    Verdict

    Q1 2026 is not another quarter of incremental AI improvements. It is the point at which a gap is becoming visible between PMs who have reorganised their way of working around AI and those who have simply added AI to an existing process.

    The tools are there. The patterns are becoming established. The question is no longer “is it worth it.” It’s “how fast.”

    Sources: Product School 2026, AI PM Tools Directory (February 2026), Airtable Predictions Report, Deloitte Emerging Technology Trends, Gartner, aipmguru.substack.com, Techcanvass, Amy Mitchell Substack, IBM Think, DataM Intelligence Agentic AI Market Report.

  • AI in Product Management: Q4 2025 Quick Rundown

    The last quarter of 2025 felt less like “AI is coming” and more like “AI just moved into your spare room, changed the Wi‑Fi password, and started shipping features.” If Q3 was about shiny models, Q4 was about infrastructure, regulation and a quiet but very real shift in what it means to build products as a PM.

    1. AI is no longer a feature, it’s the environment

    Remember when “let’s add an AI feature” sounded bold and visionary? In Q4 it started to sound more like “let’s add CSS.”
    A few things crystallised:

    • Consumer spend on AI apps grew triple‑digit year‑on‑year, with productivity and dev‑tools eating most of the pie.
    • CEOs started talking less about “experimenting with AI” and more about “AI as a line item in the P&L and infrastructure strategy.”
    • The interesting part: access is basically universal. The moat is no longer “we use a powerful model” but “we actually designed a product that does something useful with it.”

    If you are still pitching “we’ll plug in a model and see what happens,” you are playing last season’s game.

    2. Welcome to the Agent Era (for real this time)

    In Q3, agents were mostly a buzzword plastered on top of chatbots. In Q4 they started to look like systems. Not “ask me a question,” but “give me a goal and I’ll go do things.”

    On the supply side:

    • Major players doubled down on agentic AI: orchestration, tools, memory, multi‑step workflows became first‑class citizens in the ecosystem.
    • The infra world woke up: AI‑optimised data platforms, vector‑native storage and “AI data engines” became fashionable ways to say “we’d like your entire data stack, thanks.”

    On the product side, this changes your job in a very specific way:

    • You’re no longer designing single interactions, you’re designing behaviours over time.
    • You stop asking “what’s the prompt?” and start asking “what is the loop: observe → decide → act → learn?”

    If your discovery deck still frames AI as “smart autocomplete,” you’re underestimating what’s now possible – and what your competitors are testing.
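That observe → decide → act → learn loop can be sketched in a few lines of Python. Everything below is a toy scaffold to make the shape of the loop visible, not any vendor's agent framework:

```python
def agent_loop(observe, decide, act, learn, max_steps=10):
    """Generic observe -> decide -> act -> learn loop.

    Stops when decide() returns None (goal reached) or after max_steps,
    so a confused agent can never run forever.
    """
    history = []
    for _ in range(max_steps):
        state = observe()                     # observe the environment
        decision = decide(state, history)     # decide, with memory of past steps
        if decision is None:
            break                             # nothing left to do
        result = act(decision)                # act on the environment
        history.append(learn(decision, result))  # learn: record the outcome
    return history
```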

    3. Regulation stopped being background noise

    Q4 was the quarter when AI regulation moved from conference slides to Jira epics.

    A few highlights:

    • The EU AI Act moved into the “this is happening, plan for it” phase, including obligations for general‑purpose models and high‑risk use cases.
    • The US took a slightly different turn with a new executive order focused on keeping innovation and competitiveness alive while still nudging companies toward responsible AI.
    • China and a handful of other countries pushed new standards around safety, transparency and governance for generative systems.

    For PMs, this isn’t just legal trivia. It affects:

    • How you log model outputs and user interactions.
    • How much explainability you need to bake into the UI.
    • Where your data can physically live – and which model you’re even allowed to call in a given region.

    “Move fast and break things” now comes with compliance officers, DPIAs and model risk committees attached.

    4. Infrastructure quietly became the real power play

    While social feeds argued about benchmarks, someone signed cheques worth hundreds of billions to build AI infrastructure.

    • Investment in AI data centers and compute crossed the 300B USD mark, with mega‑campuses measured in gigawatts becoming a thing.
    • Countries started talking seriously about sovereign AI: local models, local data, local compute as a strategic asset.

    This trickles all the way down to product decisions:

    • “Which model should we choose?” becomes “how do we design so we can swap models and regions without rewriting everything?”
    • Architecture choices (multi‑cloud, on‑prem options, edge vs cloud) stop being purely technical and turn into product strategy tools.

    If your PRDs don’t include at least a paragraph on “AI architecture assumptions,” future‑you will curse present‑you.

    5. AI tools for PMs grew up a little

    Q4 also brought more AI for product managers, not only AI in products.

    • We saw more vertical copilots – including ones tailored to financial product design and regulatory‑heavy workflows.
    • Generic “summarise this doc” quietly evolved into “help me design an experiment, generate a first PRD draft, and flag risk in this backlog.”

    The interesting twist is adoption:

    • A clear majority of PMs now use some form of AI in their daily work, but the spread is huge between “I ask ChatGPT to rewrite my emails” and “I run my discovery, roadmap scenarios and opportunity sizing through an AI stack.”

    The gap between those two groups is likely to widen. Not because of tools – everyone has them – but because of workflow design.

    6. So what do you actually do with this?

    If you’re a product manager trying to make sense of Q4 2025, here’s the short version you can paste into your own notes:

    • Treat AI as infrastructure, not add‑on. Identify the parts of your product where agents and automation can sustainably take over workflows, not just single clicks.
    • Design with constraints in mind: regulation, data locality, model governance, observability. The boring stuff is now part of the core value proposition.
    • Invest in agent‑native discovery: think in terms of goals, environments, and feedback loops, not prompts and screens.
    • Upgrade your own stack: pick 2–3 AI tools and go deep, not wide. Your edge won’t be “I use AI,” but “my product practice is re‑architected around it.”

    Q4 didn’t give us one big “AI moment.” It gave us something more dangerous and more interesting: AI becoming part of the background. The water we all swim in. And once something becomes the water, opting out stops being a strategy. It just means you’re the last one to notice you’re already underwater.

  • The Agentic Friendly Revolution 2026: Why WCAG is Your Secret Weapon

    Stop pretending we’re building websites only for humans. If your 2026 roadmap doesn’t prioritize becoming Agentic Friendly, you’re effectively building a digital museum.

    As PMs, we’ve spent a decade obsessing over “pixel-perfect” layouts and “delightful” animations. Meanwhile, a tectonic shift is happening: users are tired of clicking. They want to delegate. Whether it’s buying a pair of shoes via OpenAI’s Instant Checkout, booking a flight, or summarizing a complex B2B dashboard, users are sending AI agents to do the heavy lifting.

    If your product spits out a cryptic error because a bot couldn’t parse your “fancy” custom UI, you aren’t just losing a lead—you’re being blacklisted by the AI ecosystem. Here is why Agentic Friendly is the only design philosophy that matters now.

    1. From “Browsing” to “Intent Resolution”

    The era of the “user journey” is being replaced by “task execution.” Whether it’s E-commerce, SaaS, or GovTech, an agent acts as the universal proxy.

    • The E-commerce Angle: Tools like OpenAI Instant Checkout bypass your beautiful cart page entirely. They look for standardized hooks to trigger a purchase instantly.
    • The SaaS Angle: If a user tells their AI, “Update my subscription and add three seats,” the agent needs to find your settings toggle without getting lost in a labyrinth of nested modals.
    • The Lesson: An Agentic Friendly site provides a clear path for intent resolution. If a machine can’t “execute” your product in one thought, a human won’t bother either.

    2. WCAG: The Secret Language of Agents

    Most companies treat Web Content Accessibility Guidelines (WCAG) as a legal tax or a checkbox for the compliance team. They’re wrong. It’s the literal blueprint for Agentic Friendliness.

    • The Lesson: An AI agent sees the world exactly like a screen reader. By following WCAG—using clean heading hierarchies (H1-H6), logical tab orders, and explicit aria-labels—you are writing a manual for the AI. Semantics is the new SEO. If a blind user can navigate your app, an AI agent can thrive in it.
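One way to see this in practice: a clean heading hierarchy is mechanically checkable. The stdlib-only Python sketch below (the jump rule is an illustrative simplification of WCAG guidance) flags the kind of heading skip that trips up screen readers and agents alike:

```python
from html.parser import HTMLParser

class HeadingChecker(HTMLParser):
    """Flags heading-level jumps (e.g. h1 -> h3) in an HTML fragment."""

    def __init__(self):
        super().__init__()
        self.last_level = 0
        self.issues = []

    def handle_starttag(self, tag, attrs):
        # Match h1..h6 only; ignore every other tag (including "hr").
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.last_level and level > self.last_level + 1:
                self.issues.append(f"jump from h{self.last_level} to h{level}")
            self.last_level = level

checker = HeadingChecker()
checker.feed("<h1>Pricing</h1><h3>Monthly</h3><h2>FAQ</h2>")
```

Running it on the fragment above reports the h1 → h3 jump; a fragment with a proper h1 → h2 sequence passes clean.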

    3. Structured Data: The End of “Guesswork”

    Don’t pray that an AI will “hallucinate” the correct data from a hero banner or a floating div. An Agentic Friendly site is a structured site.

    • The Lesson: Schema.org (JSON-LD) is no longer just for Google snippets; it’s your site’s API for the world. Whether it’s a Product price, an Event date, or a JobPosting salary, OpenAI’s agents use these tags to verify facts. Without them, you’re just “noise” that the agent will skip to avoid giving the user incorrect information.

    The Stakeholder Pivot: WCAG is Finally “Sexy”

    For years, pitching WCAG to stakeholders was like asking them to pay for a root canal—they knew they had to do it, but they hated every minute. That has officially changed. In my recent board meetings, I’ve stopped talking about “accessibility compliance” and started talking about “Agentic Reach.” Suddenly, the C-suite is listening. Why? Because when you show them that a WCAG-compliant site is the only way to get featured in OpenAI’s Instant Checkout or to be the “default” choice for AI personal assistants, accessibility stops being a cost center and becomes a competitive moat.

    WCAG has finally taken its rightful place in the boardroom. It’s no longer a boring legal requirement; it’s the high-leverage metric for 2026. We aren’t just “fixing the site for the few”; we are indexing the business for the trillion-dollar agentic economy. If you want the budget, stop pitching “fairness” and start pitching “machine-readability.”

    TL;DR / The Verdict

    Being Agentic Friendly doesn’t mean building your own LLM. It means you need to stop sabotaging the machines trying to interact with your business.

    • Is it WCAG compliant? If yes, you’re 80% Agentic Friendly.
    • Is it “Instant” ready? If your flows are too complex for an agent, they are too complex for a modern human.
    • Is the data structured? If not, the agent will move to a competitor who provides facts, not fluff.

  • From Prompts to Agents: Level Up an AI Workflow

    Let me cut to the chase: today I’m zooming in on conversational AI, mostly the LLMs. Those models that speak, write, and “think” alongside us. Why? Because this space is moving faster than most product teams, and it’s suddenly relevant to anyone who types into an AI chatbox… or wants their workflow to run without them.

    What’s the Real Gap: Great Prompt vs. Manual Agent

    I’ve built a lot of prompts. Maybe you have too: a template for user stories, competitor benchmarking, release checklists. But I’m no stranger to agents either. Here’s the gist:

    • A rich, reusable prompt is like a supercharged search query. I run it, get my result, tweak it, and maybe save it for another day.
    • A manual agent? That’s next-level: I hit “Run,” and it does a bunch of things for me… decisions, context, maybe several steps in a row. Sometimes it even tells me what’s next.

    Here’s how I see it:

    • Every prompt needs me, every time, start to finish.
    • An agent remembers why it exists. It’s got decision trees, action lists, context carryover.
    • Prompts answer questions. Agents actually do stuff.

    How Do I Turn My Epic Prompt Into a Manual Agent?

    Some days, my prompts feel so advanced it’s a shame not to call them agents. So I made the upgrade, and you can too. Here’s my go-to checklist for this transition:

    1. I clarify my agent’s goal. Not just “answer this,” but “do this, in this way, so I don’t have to micromanage.”
    2. I restructure the prompt to include context, instructions, inputs, and possible next actions.
    3. I add logic. If X, then do Y (like “if negative feedback, escalate to user research”).
    4. I let it remember stuff…context, previous answers, anything useful.
    5. I teach it how to format and deliver results (like, “output table for Jira” or “send summary to Slack”).
    6. I connect it to a manual trigger: a button is good enough for the beginning. Forget about APIs and webhooks… for now 🙂
    7. I run tests, fix bugs, and repeat until it’s smooth enough.
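The checklist above can be sketched as a tiny manual agent in Python. The sentiment rule, the escalation action, and the output format are all placeholder assumptions standing in for whatever your real tools do:

```python
class FeedbackAgent:
    """Toy manual agent: goal (step 1), logic (step 3), memory (step 4),
    formatted output (step 5), and a manual run() trigger (step 6)."""

    def __init__(self, goal="Triage feedback so I don't have to micromanage"):
        self.goal = goal
        self.memory = []                      # step 4: context carryover

    def run(self, feedback):                  # step 6: the "button"
        # Step 3: if X, then Y (a crude keyword rule standing in for an LLM call).
        negative = any(w in feedback.lower() for w in ("bug", "broken", "hate"))
        sentiment = "negative" if negative else "positive"
        action = ("escalate to user research" if negative
                  else "log and thank the user")
        self.memory.append((feedback, sentiment))
        # Step 5: format for delivery (e.g. a Slack- or Jira-ready line).
        return f"[{sentiment}] {feedback!r} -> {action}"
```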

    Manual triggering means I’m still in control. But after that? My agent runs the show, not just answers a question.

    The Critique: It’s NOT Just About More Prompts

    Most guides focus on better prompts, bigger libraries, smoother templates. But honestly, that misses a few elephants in the room:

    • Not all AI skills live inside prompt engineering. Ethics, validation, project management… these all matter.
    • Working with agents and LLMs is rarely linear. Sometimes I jump from prompt to agent, sometimes I zigzag according to project needs.
    • Teams need versioning, sharing, actual process. A prompt library doesn’t cut it if I want repeatable business results.
    • New models keep popping up. Prompt skills must evolve with them.

    Lessons I Stick To:

    • Prompting is where I start; agents are where I scale.
    • Sometimes, a killer prompt is enough. Sometimes an agent saves me hours.
    • Choice depends on my process, not on the “ultimate AI workflow.”
    • Testing, context, and iterating win every time.

    Final Thoughts

    Looking back, what moved the needle for me wasn’t just writing more advanced prompts. It was building mini-systems that gave me leverage, saved my time, and made AI a background player instead of an inbox. Agents aren’t for everyone or every scenario. But if you want to level-up, they’re worth the leap.

    If you need some mentoring, feel free to hit me up.
    For you it’s free. For teams it’s not 🙂

  • Small Automations > One Big One. n8n for Product Managers

    I was recently working on a process map that was supposed to be automated by one big workflow. The project was meant to be transformative – one solid automation that would move everything. Two months later it was in a drawer. It was too complicated, it broke with every change to our processes, and on top of that, nobody understood it.

    That’s classic waterfall thinking, but in the world of automation. And today I want to flip that idea on its head.

    A Lesson from Product

    You know how PMs talk about MVPs and iterations? “Instead of one big feature, build something small and check for feedback”. The same works for automation. Instead of trying to automate your entire product discovery flow with one workhorse – build ten small, independent ones.

    Take collecting customer feedback. It would be easy to think: “I’ll build one powerful automation that collects feedback, analyzes sentiment, categorizes it, sends it to Google Sheets, posts to Slack, and generates a summary for the report”.

    But what happens? If something goes wrong at the sentiment stage – the whole flow breaks. If you change the category structure – you need to rewrite the logic. If OpenAI goes down – nothing gets through.

    Instead: ten separate, small, independent automations, each doing one thing – but doing it well.

    How This Looks in n8n

    I recently analyzed the most frequently used workflows on this platform by product managers. And you know what? Everyone who says “it works” operates on exactly this principle.

    First automation: feedback lands in Google Sheets. Period.
    Second: every new row in Sheets triggers sentiment analysis.
    Third: sends a Slack notification if sentiment is negative.
    Fourth: weekly, gathers all feedback and sends it to Notion.

    Each workflow is 5-10 nodes. Easy to build. Easy to fix. Easy to understand.

    And you know what’s the best part? You can work asynchronously. Feedback appears in Sheets a minute after it arrives? No problem. Sentiment doesn’t get analyzed right away? Nothing breaks, because it’s a separate automation.
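To show how narrow one of these pieces can be, here is the third workflow ("alert on negative sentiment") as a Python sketch. notify() stands in for n8n's Slack node; the point is the single responsibility:

```python
# One small automation in plain Python: "if sentiment is negative, alert".
# It does not collect, analyse, or report - those are separate workflows.

def check_new_rows(rows, notify):
    """Scan new feedback rows and alert only on negative sentiment."""
    alerts = 0
    for row in rows:
        if row.get("sentiment") == "negative":
            notify(f"Negative feedback: {row['text']}")
            alerts += 1
    return alerts
```

Because the workflow owns exactly one decision, a broken sentiment analyser upstream leaves it idle rather than broken.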

    Specifically: What to Automate for a PM

    If you’re working on product discovery, a few small things are worth your attention:

    Monitoring Product Hunt – a small workflow that checks every hour what’s new. Sends to Slack. That’s it. This automation takes 15 minutes to put together.

    Collecting feature requests – form arrives, request goes to Google Sheets. And that’s all. A separate automation categorizes them. Yet another one, separate, aggregates them weekly and sends the report.

    Tracking competitors – one workflow takes screenshots of competitor pages every Friday. A second workflow compares them to the previous week and sends differences to Notion. Two separate processes.

    Analyzing sales calls – one automation extracts transcripts. Another analyzes them. A third tags insights. Each does one thing – but does it well.

    The Philosophy of Small Pieces

    There’s something deeper here. Sometimes I hear PMs say: “but this will be scattered”. Well, of course it will. And that’s a feature, not a bug.

    Scattered systems are:

    • Easier to debug (something broke? One thing, not ten)
    • Easier to scale (want to add a new category? You modify one workflow, not the entire structure)
    • Easier to understand (new team member looks at a diagram with 5 nodes, not 50)
    • More resilient (if YouTube API breaks, your competitor analysis doesn’t suffer much, because it’s a separate thing)

    This is a principle that engineers know as microservices. But we PMs don’t usually think about it when working with automation.

    The Problem with Big Plans

    There’s always someone who says: “But if everything is scattered, it’ll be a mess”. Of course, if you don’t organize it. But that’s not a problem with small automations. That’s a problem with documentation and process.

    Two files:

    1. Google Doc with a list of all workflows (what it does, when it triggers, who maintains it)
    2. Tags in n8n: product-discovery, monitoring, analysis, reporting

    Done. Now you have transparency.

    When to Start

    Don’t wait for the “perfect automation”. Build a workflow for collecting feedback today. It’ll take 30 minutes. Tomorrow add a second one for analysis. In a week you’ll have a system that actually works, instead of one hybrid monolith sitting on the shelf.

    Will it be imperfect? Yes. Will it be scattered? Yes. Will it work? Also yes.

    And that’s what matters for a product manager – things that work, that you can change without apocalypse, and that actually save time in discovery work.

    Instead of one big dream about automation, take ten small wins.

  • AI in Product Management: Q3 2025 Quick Rundown

    The last few months have really shaken things up.

    Seriously, did any product manager expect this pace of change? Many probably struggled to keep up (myself included). What happened…? Long story short: a lot, fast, and sometimes a bit chaotic.

    Enter GPT-5: The New Star

    OpenAI announced GPT-5 as the best model in the world. Ever seen a model decide by itself when to answer instantly and when to take its time? Now you have it. No need to ask twice. GPT-5 makes decisions for us, and get this – everyone gets access. No paywall. Your competitor, your buddy, your LinkedIn connection – they’ve got it too. Ever wondered why everyone suddenly talks about automation?

    Sometimes it slows down, other times it answers instantly. Mini and nano versions are out there. Fewer hallucinations (they still happen, let’s not kid ourselves). Safety tests? Thousands of hours.

    Gemini 2.0/2.5 – The Agent Era

    Google’s not sitting idle. Gemini 2.0/2.5 moves towards “agents” – systems that don’t just answer, they do stuff for you. Actually, they do. “Gemini for Home” understands what your camera shows, not just “motion detected.” I had a weird case recently – a noise and the AI said, “Courier dropped off a package, left.” Strange, useful. PMs, time to change your mindset – UX and data architecture are changing. It’s no longer text interaction, but background agents doing their thing.

    Claude 4 and a Million Tokens of Context

    Anyone in research noticed Claude 4? A million tokens of context (not a thousand, or ten thousand, a million). Throw everything at it – customer feedback, competitor analysis, docs. Claude handles it. Deliverables last too – generate a report, send it to Slack, check back next week, still there.

    AI Hallucinations – A Real Threat

    One big topic: hallucinations. AI can make stuff up confidently. Regulated industries (fintech, medtech) – one mistake means lost trust. Compliance nightmare. A user burned once might never come back. PMs must watch production outputs, craft smart prompts (not as easy as it sounds), and critically evaluate results – don’t blindly trust the model.

    AI-Powered PM Tools Are Booming

    The PM tool race with AI is heating up: Jira AI, Productboard AI, Notion AI (imagine PRDs without it?), Aha! AI. Automated feature prioritization, workflow automation, predictive task assignment = faster and smoother. But hey! 75% of PMs already use AI tools, so if it’s not you, someone else is ahead. Don’t say I didn’t warn you.

    Agentic AI. The Future of Product Discovery

    Agentic AI is about to change the game. Not chatbots, but agents who decide, act, gather data, and recommend – deliver results, not just answers. Product discovery? Instead of searching “triathlon bike,” the agent collects data, shows options, advises. Real personalization, real-time insight.

    What Does This All Mean?

    A few simple facts:

    • AI speeds up everything. Time to rethink infrastructure and processes; two weeks is the new three months.
    • Hallucinations exist and will. Monitor and fix.
    • Agents are real and here. UX changes, tools evolve.
    • Context is currency. Don’t miss out.
    • Access is universal. You win not by the tool but how skillfully you use it.