Sometimes I think about the line between forest and field.
Where the moss ends and the sun opens up. That moment of transition. Not quite shade, not quite clarity. That’s what AI feels like lately. A liminal threshold between what machines can lift and what we must still carry by hand.
In two recent essays, I’ve been sitting with this tension: how much we automate, and how we move beyond the initial spark of AI “vision” into something structurally sound. You don’t need to read both to feel the thread. But I think you’ll find yourself somewhere in the stretch between them.
🧠 Big Idea
Automate a third. Amplify the rest.
There’s a quiet revolution in acknowledging what we don’t automate. This piece explores a simple idea: most AI use cases today can automate about 30% of a task. The rest? That’s where humans still matter deeply.
Instead of replacing human labor, we’re redistributing attention. Letting AI do the rote, so people can tend to the ambiguous, the sensory, the political, the relational. Design critique. Conflict resolution. Ethical tradeoffs. This isn’t the afterthought of automation. It’s the soul of it.
If you’re designing for AI, the challenge isn’t “how do I replace the workflow?” but “what part of this task should be amplified instead of removed?”
From vision to viable roadmap
Even with the best intentions, AI strategy can fall into vagueness. Vision decks are easy. Roadmaps are not.
This second piece outlines how to move from dreamy ambition into structured coherence. It’s about diagnosing constraints before chasing scale. Mapping data maturity before promising personalization. Designing feedback loops before investing in dashboards.
A good AI implementation plan isn’t just technical. It’s emotional. Cultural. Procedural. Strategy means choosing what not to build, not just what to pilot. This piece is especially for product managers and UX leads navigating orgs hungry for AI, but not yet ready for the weight of it.
⚡ Quick Wins: Threshold Questions for AI Builders
Before your next sprint, planning doc, or product meeting, take a pause and ask:
What part of this workflow can be responsibly automated?
What part requires human care, nuance, or dissent?
What structural or cultural blockers are preventing a coherent AI roadmap?
You’re not just building tools. You’re shaping thresholds between sense and speed, silence and signal. Make them walkable.
🧭 Product Hunt Radar
Fresh launches worth watching through the AI adoption lens:
Convo – Automates qualitative user research. It uses AI to analyze user feedback and surface insights, helping product teams make better, evidence-based decisions.
Currents AI – Turns social-media data into insights. It monitors social discussions in real time and highlights market and consumer trends, giving companies a deeper read on user behavior.
Futudo – An app that “joins your past with your future.” Its AI learns about you over time and suggests actions relevant to your personal situation, aiming to reduce decision anxiety and help you think through life goals.
🪴 Productivity Prompt: Tend Your 70 Percent
What should stay human?
This prompt works best during early project scoping, roadmap reviews, or when you’re sensing that a feature is becoming “too efficient to be useful.”
Let it slow you down in the right way.
You are my AI UX strategy coach. Given that I’m planning an AI-integrated product feature and need to decide which parts to automate, recommend how I can identify the tasks or touchpoints that should remain human-led to preserve nuance and trust. Include examples of invisible labor that teams often overlook, and guide me through how to surface and prioritize them in a product planning session.
Try answering this with your team in your next product review, or journal on it solo. If you do, I’d love to hear what surfaced.
🌎 In the News
🧠 OpenAI published a new framework for measuring political bias in LLMs, introducing spectrum-based scoring and a “model completion disparity” metric that could help AI UX teams evaluate outputs more rigorously before launch, especially in sensitive domains. Read it
🏢 Deloitte is rolling out Claude AI to 500,000 staff globally, while quietly refunding clients over $98,000 for hallucination-related errors, highlighting both the scale of enterprise adoption and the real cost of getting AI wrong. Read it
💼 Facebook has quietly revived its job board feature, just as AI continues to displace entry-level roles, signaling a push to re-engage workers most vulnerable to automation. Read it
🌿 Workshop Spotlight: Thinking with AI: From Curiosity to Creation
What if AI could think with you, not for you?
Join this free virtual session for artists, designers, product thinkers, and curious professionals exploring how to collaborate with generative tools like ChatGPT.
In 90 minutes, you’ll learn to use AI as a creative partner and leave with a small prototype or idea sketch that reflects your own creative process.
🗓 October 25, 10 AM CT
💻 Free and virtual
🎟 Join on Meetup →
✨ Come curious. Leave with something you made alongside a machine.
✍️ Vibe Check
If you’re feeling the tension between doing and deciding, between speed and care, you’re not alone. I’m in it too.
Not everything needs to scale. Some things need space.
📬 Closing CTA
If either of these posts sparked something, whether a disagreement, an insight, or a question, I’d love to hear it.
You can reply to this email or find me on Bluesky or LinkedIn. If you're looking for more reflections like this, the blog is quietly growing here.
Talk soon,
Lex
curious, always building