Category: AI

  • From Prompts to Agents: Level-Up an AI Workflow

    Let me cut to the chase: today I’m zooming in on conversational AI, mostly LLMs: the models that speak, write, and “think” alongside us. Why? Because this space is moving faster than most product teams, and it’s suddenly relevant to anyone who types into an AI chatbox… or wants their workflow to run without them.

    What’s the Real Gap: Great Prompt vs. Manual Agent

    I’ve built a lot of prompts. Maybe you have too: a template for user stories, competitor benchmarking, release checklists. But I’m no stranger to agents either. Here’s the gist:

    A rich, reusable prompt is like a supercharged search query. I run it, get my result, tweak it, and maybe save it for another day.
    A manual agent? That’s next-level: I hit “Run,” and it does a bunch of stuff for me… decisions, context, maybe several steps in a row. Sometimes it even tells me what’s next.

    Here’s how I see it:

    • A prompt needs me every time, start to finish.
    • An agent remembers why it exists. It’s got decision trees, action lists, context carryover.
    • Prompts answer questions. Agents actually do stuff.

    How Do I Turn My Epic Prompt Into a Manual Agent?

    Some days, my prompts feel so advanced it’s a shame not to call them agents. So I made the upgrade, and you can too. Here’s my go-to checklist for this transition:

    1. I clarify my agent’s goal. Not just “answer this,” but “do this, in this way, so I don’t have to micromanage.”
    2. I restructure the prompt to include context, instructions, inputs, and possible next actions.
    3. I add logic. If X, then do Y (like “if negative feedback, escalate to user research”).
    4. I let it remember stuff…context, previous answers, anything useful.
    5. I teach it how to format and deliver results (like, “output table for Jira” or “send summary to Slack”).
    6. I connect it to a manual trigger: a button is good enough for the beginning. Forget about APIs and webhooks… for now 🙂
    7. I run tests, fix bugs, and repeat until it’s smooth enough.
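    The checklist above can be sketched in plain Python. This is a toy sketch, not real production code: a naive keyword check stands in for the LLM call, and every name here (FeedbackAgent, classify_sentiment) is hypothetical. It shows steps 3–6 in miniature: logic, memory, output formatting, and a manual trigger.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical stand-in for an LLM sentiment call (the "logic" of step 3).
    NEGATIVE_WORDS = {"bug", "broken", "slow", "hate"}

    def classify_sentiment(text: str) -> str:
        return "negative" if NEGATIVE_WORDS & set(text.lower().split()) else "positive"

    @dataclass
    class FeedbackAgent:
        goal: str = "triage feedback without micromanagement"  # step 1: the goal
        memory: list = field(default_factory=list)             # step 4: carryover

        def run(self, feedback: list[str]) -> str:
            """Step 6: the manual trigger - call run() and it does the rest."""
            rows = []
            for item in feedback:
                sentiment = classify_sentiment(item)            # step 3: if X...
                action = ("escalate to user research"           # ...then do Y
                          if sentiment == "negative" else "archive")
                self.memory.append((item, sentiment))           # step 4: remember
                rows.append(f"| {item} | {sentiment} | {action} |")
            # Step 5: format for delivery (a table you could paste into Jira).
            header = ["| feedback | sentiment | next action |", "|---|---|---|"]
            return "\n".join(header + rows)
    ```

    Calling agent.run(...) is the “button”: one trigger, several steps, a deliverable at the end.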

    Manual triggering means I’m still in control. But after that? My agent runs the show instead of just answering a question.

    The Critique: It’s NOT Just About More Prompts

    Most guides focus on better prompts, bigger libraries, smoother templates. But honestly, that misses a few elephants in the room:

    • Not all AI skills live inside prompt engineering. Ethics, validation, project management… these all matter.
    • Working with agents and LLMs is rarely linear. Sometimes I jump from prompt to agent, sometimes I zigzag according to project needs.
    • Teams need versioning, sharing, actual process. A prompt library doesn’t cut it if I want repeatable business results.
    • New models keep popping up. Prompt skills must evolve with them.

    Lessons I Stick To:

    • Prompting is where I start; agents are where I scale.
    • Sometimes, a killer prompt is enough. Sometimes an agent saves me hours.
    • Choice depends on my process, not on the “ultimate AI workflow.”
    • Testing, context, and iterating win every time.

    Final Thoughts

    Looking back, what moved the needle for me wasn’t just writing more advanced prompts. It was building mini-systems that gave me leverage, saved my time, and made AI a background player instead of an inbox. Agents aren’t for everyone or every scenario. But if you want to level-up, they’re worth the leap.

    If you need some mentoring, feel free to hit me up.
    For you it’s free. For teams it’s not 🙂

  • Small Automations > One Big One: n8n for Product Managers

    I was recently working on a process map that was supposed to be automated by one big workflow. The project was meant to be transformative – one solid automation that would move everything. Two months later it was in a drawer. It was too complicated, it broke with every change to our processes, and on top of that, nobody understood it.

    That’s classic waterfall thinking, but in the world of automation. And today I want to flip that idea on its head.

    A Lesson from Product

    You know how PMs talk about MVPs and iterations? “Instead of one big feature, build something small and check for feedback”. The same works for automation. Instead of trying to automate your entire product discovery flow with one workhorse – build ten small, independent ones.

    Take collecting customer feedback. It would be easy to think: “I’ll build one powerful automation that collects feedback, analyzes sentiment, categorizes it, sends it to Google Sheets, posts to Slack, and generates a summary for the report”.

    But what happens? If something goes wrong at the sentiment stage – the whole flow breaks. If you change the category structure – you need to rewrite the logic. If OpenAI goes down – nothing gets through.

    Instead: ten separate, small, independent automations, each doing one thing – but doing it well.

    How This Looks in n8n

    I recently analyzed the workflows product managers use most often on this platform. And you know what? Everyone who says “it works” operates on exactly this principle.

    First automation: feedback lands in Google Sheets. Period.
    Second: every new row in Sheets triggers sentiment analysis.
    Third: sends a Slack notification if sentiment is negative.
    Fourth: weekly, gathers all feedback and sends it to Notion.

    Each workflow is 5-10 nodes. Easy to build. Easy to fix. Easy to understand.

    And you know what’s the best part? You can work asynchronously. Feedback appears in Sheets a minute after it arrives? No problem. Sentiment doesn’t get analyzed right away? Nothing breaks, because it’s a separate automation.
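    As a sketch, here is what the third workflow above (negative sentiment → Slack) might look like as logic, with hypothetical helper names and row shapes; in n8n this would be a Sheets trigger node, an IF node, and a Slack node. The point is that this workflow knows nothing about how rows got into the sheet.

    ```python
    # Hypothetical row shape: {"text": ..., "sentiment": ..., "source": ...}.
    def should_alert(row: dict) -> bool:
        """IF node: only negative feedback triggers a notification."""
        return row.get("sentiment") == "negative"

    def format_alert(row: dict) -> str:
        """Slack node payload: a short, human-readable message."""
        return (f":warning: Negative feedback: {row['text']!r} "
                f"(source: {row.get('source', 'unknown')})")

    def process_new_rows(rows: list[dict]) -> list[str]:
        """The whole workflow: filter and format. Sending is the Slack node's job."""
        return [format_alert(row) for row in rows if should_alert(row)]
    ```

    If the sentiment automation lags behind, nothing here breaks: this workflow just fires later, when the column is filled in.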

    Specifically: What to Automate for a PM

    If you’re working on product discovery, a few small things are worth your attention:

    Monitoring Product Hunt – a small workflow that checks every hour what’s new. Sends to Slack. That’s it. This automation takes 15 minutes to put together.
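    The only real logic that monitor needs is de-duplication. A sketch, assuming a hypothetical set of fetched listing names (the real workflow would pair this with a Product Hunt or RSS node plus a Slack node, and persist `seen` in n8n workflow static data):

    ```python
    def diff_new(fetched: set[str], seen: set[str]) -> list[str]:
        """Listings we have not posted to Slack yet."""
        return sorted(fetched - seen)

    def run_hourly_check(seen: set[str], fetched: set[str]) -> list[str]:
        """One tick of the workflow: diff, remember, return what to post."""
        new = diff_new(fetched, seen)
        seen.update(new)
        return new
    ```

    Run it twice with the same listings and the second tick posts nothing, which is exactly what you want from an hourly check.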

    Collecting feature requests – form arrives, request goes to Google Sheets. And that’s all. A separate automation categorizes them. Yet another one, separate, aggregates them weekly and sends the report.

    Tracking competitors – one workflow takes screenshots of competitor pages every Friday. A second workflow compares them to the previous week and sends differences to Notion. Two separate processes.
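    The comparison step can be as dumb as a content hash. A sketch, assuming screenshots arrive as raw bytes keyed by URL (all names here are hypothetical):

    ```python
    import hashlib

    def page_changed(this_week: bytes, last_week: bytes) -> bool:
        """Cheap change detection: compare content hashes of two screenshots."""
        return (hashlib.sha256(this_week).hexdigest()
                != hashlib.sha256(last_week).hexdigest())

    def changed_pages(current: dict[str, bytes],
                      previous: dict[str, bytes]) -> list[str]:
        """URLs whose screenshot differs from last week (these go to Notion)."""
        return [url for url, img in current.items()
                if url not in previous or page_changed(img, previous[url])]
    ```

    A hash only tells you “something changed,” not what, but that is enough to decide which pages deserve a human look.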

    Analyzing sales calls – one automation extracts transcripts. Another analyzes them. A third tags insights. Each does one thing – but does it well.
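    Sketched as three tiny functions, each standing in for a separate automation, with naive placeholders instead of real transcription or an LLM, the decoupling looks like this:

    ```python
    def extract_transcript(call: dict) -> dict:
        """Automation 1: pull the raw transcript out of a call record."""
        return {"call_id": call["id"], "transcript": call["recording_text"]}

    def analyze(item: dict) -> dict:
        """Automation 2: stand-in for an LLM summary of the transcript."""
        item["summary"] = item["transcript"][:60]  # naive placeholder summary
        return item

    def tag_insights(item: dict) -> dict:
        """Automation 3: naive keyword tagging instead of a real model."""
        item["tags"] = [w for w in ("pricing", "onboarding", "churn")
                        if w in item["transcript"].lower()]
        return item
    ```

    Because each step reads and writes a shared record, any one of them can fail, lag, or be rebuilt without touching the other two.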

    The Philosophy of Small Pieces

    There’s something deeper here. Sometimes I hear PMs say: “but this will be scattered”. Well, of course it will. And that’s a feature, not a bug.

    Scattered systems are:

    • Easier to debug (something broke? One thing, not ten)
    • Easier to scale (want to add a new category? You modify one workflow, not the entire structure)
    • Easier to understand (new team member looks at a diagram with 5 nodes, not 50)
    • More resilient (if the YouTube API breaks, your competitor analysis doesn’t suffer much, because it’s a separate thing)

    This is a principle that engineers know as microservices. But we PMs don’t usually think about it when working with automation.

    The Problem with Big Plans

    There’s always someone who says: “But if everything is scattered, it’ll be a mess”. Of course, if you don’t organize it. But that’s not a problem with small automations. That’s a problem with documentation and process.

    Two files:

    1. Google Doc with a list of all workflows (what it does, when it triggers, who maintains it)
    2. Tags in n8n: product-discovery, monitoring, analysis, reporting

    Done. Now you have transparency.

    When to Start

    Don’t wait for the “perfect automation”. Build a workflow for collecting feedback today. It’ll take 30 minutes. Tomorrow add a second one for analysis. In a week you’ll have a system that actually works, instead of one hybrid monolith sitting on the shelf.

    Will it be imperfect? Yes. Will it be scattered? Yes. Will it work? Also yes.

    And that’s what matters for a product manager – things that work, that you can change without apocalypse, and that actually save time in discovery work.

    Instead of one big dream about automation, take ten small wins.

  • AI in Product Management: Q3 2025 Quick Rundown

    The last few months have really shaken things up.

    Seriously, did any product manager expect this pace of change? Many probably struggled to keep up (myself included). What happened…? Long story short: a lot, fast, and sometimes a bit chaotic.

    Enter GPT-5: The New Star

    OpenAI announced GPT-5, billing it as the best model in the world. Ever seen a model decide by itself when to answer instantly and when to take its time? Now you have. No need to ask twice. GPT-5 makes those decisions for us, and get this – everyone gets access. No paywall. Your competitor, your buddy, your LinkedIn connection – they’ve got it too. Ever wondered why everyone suddenly talks automation?

    Sometimes it slows down, other times it answers instantly. Mini and nano versions are out there. Fewer hallucinations (they still happen, let’s not kid ourselves). Safety tests? Thousands of hours.

    Gemini 2.0/2.5 – The Agent Era

    Google’s not sitting idle. Gemini 2.0/2.5 moves towards “agents” – systems that don’t just answer, they do stuff for you. Actually, they do. “Gemini for Home” understands what your camera shows, not just “motion detected.” I had a weird case recently – a noise, and the AI said, “Courier dropped off a package, left.” Strange, but useful. PMs, time to change your mindset – UX and data architecture are changing. It’s no longer text interaction, but background agents doing their thing.

    Claude 4 and a Million Tokens of Context

    Has anyone in research noticed Claude 4? A million tokens of context (not a thousand, not ten thousand – a million). Throw everything at it – customer feedback, competitor analysis, docs. Claude handles it. Deliverables last, too – generate a report, send it to Slack, check back next week, still there.

    AI Hallucinations – A Real Threat

    One big topic: hallucinations. AI can make stuff up confidently. Regulated industries (fintech, medtech) – one mistake means lost trust. Compliance nightmare. A user burned once might never come back. PMs must watch production outputs, craft smart prompts (not as easy as it sounds), and critically evaluate results – don’t blindly trust the model.

    AI-Powered PM Tools Are Booming

    The PM tool race with AI is heating up: Jira AI, Productboard AI, Notion AI (imagine PRDs without it?), Aha! AI. Automated feature prioritization, workflow automation, predictive task assignment = faster and smoother. But hey! 75% of PMs already use AI tools, so if it’s not you, someone else is ahead. Don’t say I didn’t warn you.

    Agentic AI – The Future of Product Discovery

    Agentic AI is about to change the game. Not chatbots, but agents that decide, act, gather data, and recommend – delivering results, not just answers. Product discovery? Instead of searching “triathlon bike,” the agent collects data, shows options, and advises. Real personalization, real-time insight.

    What Does This All Mean?

    A few simple facts:

    • AI speeds up everything. Time to rethink infrastructure and processes; two weeks is the new three months.
    • Hallucinations exist and will persist. Monitor and fix.
    • Agents are real and here. UX changes, tools evolve.
    • Context is currency. Don’t miss out.
    • Access is universal. You win not by the tool but how skillfully you use it.