AI as Tool, Not Creator: How I Learned to Stop Worrying and Front-Load the Thinking
The craft is in the decisions. The tool just handles the transcription.
I use AI tools almost every day. I also have serious concerns about AI as an industry. These positions might seem contradictory, but I don’t think they are.
I want to be clear up front: this is a personal statement about where I’ve landed after a couple of years of thinking about this while actually using the tools. Other people have landed elsewhere, and I respect that. The ethics here are genuinely unsettled, and anyone who tells you they have it all figured out is selling something.
The loudest voices in the AI conversation tend toward extremes. On one side, breathless enthusiasm: AI will write all our code and our novels and our grocery lists; resistance is futile, and also, why would you resist? On the other side, principled rejection: the entire enterprise is built on theft, using these tools makes you complicit, and real creators don’t need them.
I find myself in neither camp, which is uncomfortable. Camps are comfortable. You know who your people are. You know what to think when a new development hits the news.
But neither position survives contact with the details, at least for me.
The theft question is real and I won’t pretend otherwise.
AI models learn by consuming enormous amounts of human-created work. So do humans—I learned to write by reading thousands of books, learned to code by studying other people’s code. But human learning mostly happens inside systems of payment, consent, and attribution. I (or someone on my behalf) paid for those books. The authors got a cut. AI companies trained on the same material and in many cases paid nothing. Some of them scraped content that was never intended to be freely available. It’s all been documented by the scrapers themselves.
Anthropic, whose tools I use almost exclusively, included pirated source material in their training data. To their credit, they’ve acknowledged it and are paying a settlement to affected authors. (Yes, I know the lawyers say they’re not admitting guilt and that they had the right to do what they did, but the legal system demands that posture. What’s important is that they stopped fighting and started paying.) Other companies are still fighting in court to avoid facing consequences for doing the same thing. That matters to me. It’s not the only thing that matters, but it’s not nothing.
An even more egregious form of theft is when models reproduce training material verbatim. No synthesis, no transformation, just copying. When an image generator spits out something that’s clearly a specific artist’s work with the signature smudged out, that’s infringement. When a code assistant regurgitates a chunk of GPL-licensed code into your proprietary project, that’s a problem. There’s no learning happening there, just memorization and reproduction.
I haven’t encountered that with Claude, though I want to be honest: that might be because of how I use it rather than because it never happens. I don’t ask it to generate from whole cloth. I don’t say “write me a short story in the style of Ursula K. Le Guin” or “create an authentication system.” My usage pattern might simply avoid the places where the cracks show.
Here’s what my actual workflow looks like.
This post you’re reading started as a conversation with Claude. But look at what that conversation actually contained: I described what I wanted to write about. I explained my position on the ethics in my own words, working through the nuances as I typed. I gave detailed direction about voice, structure, what to include and what to leave out. I answered questions that pushed me to clarify my thinking. Sometimes I spoke aloud and dictated my thoughts. What you’re reading now is already long, but the words I put into that conversation were many times longer, eventually distilled, reorganized, and drafted into a cohesive essay.
By the time Claude produced a draft I felt was ready for editing, the creative work was done. The ideas, the positions, the voice: all mine, drawn from the notes on this topic I’d put down over the previous few days and cross-checked against years of my other writing that I’d fed into the project to keep Claude from interjecting its training material into my words. What remained was assembly: taking the cloud of my words and editing them into a coherent shape.
That’s what I mean when I say I use AI as a tool rather than a creator. My line: if I haven’t already decided what to say, the AI doesn’t get to decide for me.
My coding workflow follows the same pattern. I don’t hand over big problems and wait for solutions. I think through the architecture myself. I work out the exact behavior I want. I break the work into pieces small enough that implementing each one is mechanical. Then I describe those small pieces in enough detail that all Claude Code has to do is transcribe my description into working syntax.
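To make that concrete, here’s a sketch of what one of those small pieces might look like. The function is invented for illustration rather than taken from a real project of mine, but the shape is representative: the docstring carries my decisions, written before any code existed, and the body is just those decisions transcribed into syntax.

```python
import re

def slugify(title: str, max_length: int = 60) -> str:
    """Turn a post title into a URL slug.

    The spec, decided before any code was written:
    - Lowercase everything.
    - Collapse each run of non-alphanumeric characters into one hyphen.
    - Strip leading and trailing hyphens.
    - Truncate to max_length, then strip trailing hyphens again so the
      slug never ends on a dangling separator.
    """
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())  # collapse runs into hyphens
    slug = slug.strip("-")                            # drop leading/trailing hyphens
    return slug[:max_length].rstrip("-")              # truncate, re-strip the tail
```

There’s nothing left for the tool to invent: every behavior, including the edge cases, was decided in the spec before the first line of the body existed.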
I’ve tried to structure my workflow so there’s little opportunity for the tool to plagiarize, because the decisions that matter, the creative decisions, have already been made before it gets involved.
I still feel like a writer when I work this way. That surprised me at first, because I expected to feel like I was cheating somehow. But the feeling never came, and eventually I understood why.
The physical act of putting words on a page is necessary, but it’s not where the writing happens. For me, writing happens in my head, in the false starts and reconsidered angles and sudden moments of clarity that I chase through the fog of a draft. All of that still happens. I’m doing it right now: choosing this word instead of that one, cutting a tangent that doesn’t serve the piece, noticing when a sentence lands wrong and figuring out why. Claude can help me move faster through the mechanical parts, but it can’t do that noticing for me.
The same is true when I’m writing code. The craft is understanding the problem, designing a solution that will hold up over time, making tradeoffs that serve the humans who will use and maintain what you build. Typing the syntax is just how the decisions become real.
I’ve chosen to work almost exclusively with Anthropic’s tools, and that’s a deliberate choice rather than a default. Some of it is how the tools behave. Claude feels more like an extension of my thoughts than a magic box (and I realize how much that description makes it sound like exactly that), which matches how I want to work. Some of it is that Anthropic has been more thoughtful, or at least more transparent, about the ethical tangles than other players in this space. The settlement I mentioned earlier is an example. Acknowledging harm and making it right isn’t nothing, even if it doesn’t resolve every concern.
I’m not naive enough to think my choice of vendor solves the larger problems. The training data issues are industry-wide. The questions about what these tools will do to creative labor markets are real and unresolved. I’ve just decided that using the tools carefully and intentionally is a more honest position, for me, than either pretending there are no problems or refusing to engage at all. I’ve also decided not to passively hold this position. By being vocal about my choices—here, on social media, in conversations with other developers—maybe I can help lead others to make thoughtful choices of their own and collectively be the “market forces” that drive the technology in a beneficial direction instead of where megalomaniacal billionaires want it to go.
So that’s where I’ve landed. AI as tool, not creator. Front-load the thinking, hand off the transcription. Stay alert to where the ethical lines are, even when they’re blurry.
If I learn that even careful, front-loaded use still displaces working creators in ways I haven’t seen, I’ll have to reconsider. I’m not attached to being right about this. I’m attached to doing less harm than I would by ignoring the question entirely.
Other people will draw the lines differently, and I’m not here to tell them they’re wrong. The technology is genuinely new, the implications are genuinely uncertain, and reasonable people can look at the same situation and come to different conclusions.
This is just where I am, for now, trying to thread a needle that keeps moving.