Productive AI
by Peter Kaminski
Issue: 2025-04-16
You know those parents and teachers who said, “Do as I say, not as I do,” and how annoying that was?
Well, that’s me today. Sorry!
First, here’s me saying. Some advice for using the new-fangled Large Language Models (LLMs) like Claude and ChatGPT:
- Use them as assistants (to help you with research and structure), not as oracles (machines that spit out facts).
- Don’t write (or read) AI summaries; seek out the originals instead.
- Don’t use an AI to write finished editorial content; write it yourself, with the AI acting as your assistant.
This is still good advice for folks who aren’t yet very familiar with LLMs.
However, with enough experience, context, and editorial responsibility? Turns out you can break those rules, at least sometimes. (Still not all the time!)
And now here’s me doing. What I’ve actually been up to with Claude 3.7 Sonnet, ChatGPT 4o (and, just recently, newer models), and Gemini Advanced 2.5 Pro (experimental):
- Still use them as assistants, but sometimes with a lot of freedom. Among many other tasks, I’ve been using Claude as a software developer. I still check its work, but I give it a lot of it, hard things included, and it does a pretty good job.
- Use them as oracles. If it’s not a critical thing, I ask an LLM and may just trust the answer. Sometimes I cross-check with another LLM, sometimes with online or other references; when it’s important, I always cross-check. Trust but verify; know when to verify, or don’t trust.
- Gemini Advanced 2.5 Pro (experimental) and GPT-4.1 are finally at the point where I think they’re doing good summaries. I’ve also gotten more skilled at prompting for good summaries, especially asking for tweaks (longer, shorter, different language, different focus, etc.). A big part of it is using a model that’s “smart” enough; another big part is using a more complex and generative prompt than “write a summary” (there’s a small sketch of what I mean just after this list).
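If you’re curious what a “more complex and generative” summary prompt can look like in code, here’s a minimal sketch using the OpenAI Python SDK. The model name, the file name, the length target, and the prompt wording are all just my illustrative choices here, not a formula; the point is to give the model more to work with than “write a summary.”

```python
# A minimal sketch of a "more generative" summary prompt, using the
# OpenAI Python SDK. Model name, file name, and prompt wording are
# illustrative choices, not a prescription.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("article.txt") as f:
    article_text = f.read()

# Instead of a bare "write a summary", give the model a role, an
# audience, a length target, and permission to flag uncertainty.
prompt = (
    "You are an experienced editor. Summarize the article below for "
    "readers who haven't seen it. Lead with the single most important "
    "point, keep it under 150 words, and preserve the author's tone. "
    "If anything is ambiguous, say so rather than guessing.\n\n"
    + article_text
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

From there, the tweaks I mentioned (longer, shorter, different language, different focus) are just follow-up messages in the same conversation.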
And finally, perhaps still a little controversially: I think I will start publishing in the Plex certain short pieces of finished editorial content that were synthesized by an AI.
WAIT! Before you close your browser tab and give up on me, let me explain a little more. 🙂
- Not all content; not whole stories. Just tactically useful, short pieces in service of the overall editorial needs of the publication. Structurally editorial pieces, not creative or personal ones.
- No banal content. It’s stupid to flood the web with more content just for the sake of distributing it. But short, productive, useful pieces, editorially checked by a human for substance and value? I think we’re starting to get to the point where that’s worthwhile, in certain cases, when done with taste and good editorial responsibility.
- I will take personal editorial responsibility for anything I publish. For writing to be worth reading, somebody has to take responsibility for it, and AI cannot (yet) take personal responsibility for its own writing.
- I will make it very clear what content was AI-generated. I will specify the model and version that created the content, and I will provide background context for how it was created, sometimes including the prompts. (For what it’s worth, it’s not just the prompts that create the output; it’s also the process the prompter went through to synthesize, iterate, and select the content from their available LLMs. Just sharing the prompt is not always useful.)
(The bullets above are developed from a similar list I provided as an addendum to Day 13 of Pete’s AI Homework: “As to when it’s okay to offer AI-generated content as final content, some thoughts.” More of the backstory: I had used Claude to sketch out the Day 13 post, and it did a great job! I ended up shipping that as the core Day 13 post, but then I also added a lot of context and how-I-did-it explanations, so it ended up being a collaborative post between me and Claude.)
As background, along with being a human writer and editor for decades, I have a lot of AI experience under my belt. I take this relatively small step with a lot of care and expertise:
- 2+ years working personally and professionally with LLMs and other AI.
- 1.5+ years teaching others how to use LLMs and other AI.
- Careful ongoing review of a number of different models, to know their strengths and weaknesses, and when and how they can be trusted or not.
- Back-end technical knowledge about how LLMs work under the hood, including building multiple chat interfaces on top of the underlying AI APIs.
- And again, making sure I take editorial responsibility for the final product.
I’ve integrated AI productively into my personal and professional life elsewhere; now it’s the Plex’s turn, too. If you’ve got comments, let me know how you feel!