🌱 A Note Before We Close This Chapter
I want to tell you something honest before we close out this season.
I came into Learning and Reset thinking I'd write a tidy series about habits. I'd share frameworks, spotlight tools, give you a prompt to try, and we'd all feel more organized by the end. That was the plan.
What actually happened is that I wrote about losing prompts I'd spent hours crafting. I wrote about buying AI tools I didn't need because a launch post made them feel urgent. I wrote about debugging a security layer on a cloud instance at 10pm when I thought it would take twenty minutes. I wrote about the specific kind of cognitive weight that comes from carrying three AI subscriptions with three different contexts and three different reasons, none of which you made intentionally.
I was writing about the gap between knowing something and having built something that carries that knowledge forward, and somewhere in the middle of writing about it, I crossed into the second category.
That's what this issue is about. Not the reset. The moment after it.
🔍 Theme Exploration: The Gap Between Knowing and Doing
The transition from "learning about AI" to "actually implementing AI" feels like it should be a clean handoff, but it almost never is.
Here's what I've noticed across the research I've done this season, inside my own practice and in the communities where I watch founders and business owners talk about their tools: the overwhelm doesn't come from not knowing enough. It usually comes from knowing too much without having a structure to put it in. You absorb information about prompting, and agentic workflows, and PromptOps, and AI ethics, and then you open a blank chat window and still don't know where to start.
That's a systems problem, not a knowledge problem.
Creativity and Implementation, the frame we're moving into, is about closing that gap from the creative side rather than the operational side. Because here's what I've come to believe: prompting is a creative act. It's not a technical procedure you execute correctly, and it's not inspiration you wait for. It's a design practice, and design practices have their own discipline.
When you learn to write better prompts, you're not just learning to get better outputs.
You're learning to think more precisely about what you actually want, which means you're getting better at thinking. That skill compounds in ways that have nothing to do with whatever model you're using.
Implementation doesn't mean shipping. It means doing the work that comes after you decide. Deciding to build something, deciding to use a system, deciding to stop paying for a tool that isn't earning its seat. Implementation is where intentions become decisions, and decisions are the only unit of progress that actually matters.
You've been in a reset. Now you're in the part where you find out what the reset was for.
📖 From the Medium archives, here's what this season looked like in practice:
⚖️ The Ethics of Choosing Your AI Stack
Choosing which AI tools to keep stopped being a features-and-pricing question and became an ethics question for me this season. When Anthropic refused to allow its models to be used for mass domestic surveillance or autonomous weapons and the federal government responded by designating them a supply-chain risk, Claude went from #42 to #1 on the App Store in days. That moment clarified something I'd been circling: what a company is willing to refuse is a selection criterion, and pretending otherwise is its own kind of decision.
🕰️ The Shadow of Prompt Decay: What PromptOps Is and Why It Matters
There's a specific kind of loss that AI practitioners don't talk about enough: you spend an afternoon crafting a prompt that works, close the tab, and two weeks later you can never quite get back to where you were. That's prompt decay, and it costs more than frustration. This piece breaks down what PromptOps actually is, why most teams are still treating it as an afterthought, and the two tools I built at a Lovable build event to start closing that gap.
☁️ I Built My Own OpenClaw Lab on Free Cloud Infrastructure
I almost bought a Mac mini on impulse when OpenClaw dropped, and I'm glad I stopped myself. This is the full walkthrough of how I set up a personal AI lab on Oracle Cloud's free tier, running OpenClaw and Ollama on an ARM instance with 4 OCPUs and 24GB of RAM at no cost, isolated from my personal machine and reachable from any messaging app I already use. The Mac mini can wait.
🌿 The Last Lesson in Learning and Reset
The research I did this season kept surfacing the same sentence across different communities: "It's not ideas that slow me down. It's execution. I'm overwhelmed and I have no system." This piece is about what I found when I stopped studying the problem and started building something to solve it, including the parts that didn't go smoothly and why that's actually the right update.
🛠️ Tool Spotlight: Lovable

TOOL: Lovable
WHAT IT DOES: Turns natural language prompts into fully functional web applications, so you can build and iterate on real products without writing code from scratch.
WHO IT'S FOR: Founders, consultants, and creatives who have a product idea they've been sitting on because the build felt out of reach, and who want to move from concept to something they can actually show people.
THE HONEST TAKE: Lovable is genuinely fast and the output quality is higher than most no-code tools I've used, but it works best when you come in with a clear PRD and some intention behind your prompts. Vague inputs produce vague apps. The more specific your instructions, the more useful the result. I had a session where getting an agent to cooperate inside Lovable took real problem-solving, and that friction ended up teaching me something. That's the right relationship to have with a tool.
TRY IT FOR: Building a single-feature MVP of something you've been sketching in your notes. One input, one output, one clear job to do. Start there, and if you're not already on Lovable, you can sign up here and get 10 extra credits when you join.
💻 Prompt for Productivity
Prompt: I'm transitioning from a period of learning and research into active implementation. Here is what I've been studying: [paste your notes, key takeaways, or topic areas]. Help me identify the three decisions I need to make first, the one thing I keep avoiding, and the smallest possible version of implementation I could start today.
WHEN TO USE IT: When you've done the research and the knowledge is sitting unstructured, and you need something to help you decide where to actually begin.
WHAT IT DOES: It forces prioritization at the decision layer rather than the task layer, which means you're not building a to-do list, you're figuring out what you've been circling. The "smallest possible version" constraint inside the prompt keeps the output actionable rather than aspirational.
TIP: Add "be direct, I don't need encouragement" at the end if you want less affirmation and more friction.
⚡ Quick Wins: Simplify Before You Scale
Export your ChatGPT conversation history before switching tools. The export can take up to 24 hours to arrive, but your context doesn't have to disappear just because your subscription does.
If a prompt worked, write it down the same day. Not in a draft or a chat tab, but somewhere with a retrieval system behind it.
Set a 10-minute timer before opening a new AI tool you're curious about. Use it to write down one specific thing you want to test. If you can't name the test, you're buying on hype.
Close the tab that's been open for three weeks. If you haven't acted on it, the information already did its job or it didn't.
🔧 Behind the Scenes
The honest version of this transition is that I have several half-built things and one thing I'm actually finishing.
Archive is live in beta, and if you've ever lost a prompt that actually worked, this is what I built for you. Not for the prompt itself but for what it cost you to rebuild it, the revision cycles, the "I know I had something better than this" feeling, the time you spent getting back to good. Archive gives your best prompts a home with version control and retrieval built in, so that work doesn't disappear between sessions. If you're a creative, a consultant, or anyone running AI at any real volume, I'd love to have you in the room while it's still being shaped. 🔗
Polaryx is in early MVP, and before I release it to beta, I'm specifically looking for feedback from healthcare leaders. If you're in a healthcare environment and you're responsible for making changes in your department, whether that's workflows, tools, or team operations, I want to hear from you before this goes wide. Reach out directly at [email protected].
If you're in that in-between place right now, building something that isn't finished and questioning whether you're a builder or just someone who writes about building, I want you to know that's the right place to be.
😬 Uncomfortable Question
If you took everything you've learned about AI in the last six months and couldn't read another article or watch another demo for thirty days, what would you actually build with what you already know?
📣 What's Coming
A few things worth knowing:
I'm working on a PromptOps course on Udemy that's going to cover everything from prompt structure and version control to running prompts at scale without losing your mind. It's in development now, and I'll be sharing updates as it comes together. Stay tuned.
I've also officially been accepted as a Lovable Ambassador, and to celebrate I'm planning workshops on how to use Claude and Lovable together in your workflow, because that combination is genuinely worth spending time on. Details on dates and format will be announced on LinkedIn first, so if you're not already following along over there, that's where to be. 👉 Follow me on LinkedIn so you don't miss it.
💌 One Ask
The next series is about prompting as a creative practice, and I want to know what you're implementing right now. Hit reply and tell me one thing you're building, one decision you've been putting off, or your honest take on whether to rebrand before or after a live event. I read every response. 🌿

