The Unpromptables
A personal AI constitution
I’ve been trying to name a line that I don’t want to cross when working with AI.
It’s not meant to be some nostalgic “write everything by hand” declaration or an “old man yelling at clouds.” I’m not in the mood for that, and honestly, it misses the point anyway.
This is something else.
It has to do with self-governance.
Because I don’t think the deepest risk of AI is that it gets smarter than we are. I think the deeper risk is that we quietly start re-organizing ourselves around what it does well, and then mistake that reorganization for growth.
That’s the part that bothers me.
AI responds beautifully to structure, so we start structuring ourselves accordingly. We break our ideas into cleaner pieces sooner. We phrase thoughts in more machine-legible ways. We anticipate what kind of input will get the best return. We start thinking in deliverables, or formats, or executable chunks.
And because the outputs often come back better than expected, nobody stops to ask what got shaved off from the get-go.
That’s why I keep coming back to the idea of a constitution.
Not a manifesto or a best-practices guide. A constitution.
Something private and firm. Something I can stick to. A set of clauses for what AI does not get to touch first.
That’s why I came up with The Unpromptables.
Simply put, The Unpromptables are the parts of you that should not begin with a prompt. Because the moment they do, the machine is no longer helping you express a thought. It is helping shape the thought before it has fully become yours.
That is not a small thing.
So here’s my working constitution.
The first clause is the one everything else rests on: The Non-Delegation Clause.
Some things do not get outsourced.
Not the first thought or the raw emotional frame. Not the private angle. Not the initial shape of the argument. Not the strange little flicker that made me care in the first place. All of those are off limits.
AI can help me sharpen those things later. It can challenge them, pressure-test them, even help me expand them. But it does not get first position.
Because once the machine is ahead of my thinking, I’m no longer starting from my own mind. I’m starting from a version of the thought that has already been cleaned up for system use. More legible. More coherent. More complete.
And maybe less mine.
That leads right into the second clause: The Friction Clause.
I do not want to remove all difficulty from thinking.
That sounds almost ridiculous in a culture like this one. Everything is about reducing friction, reducing steps, reducing time, reducing effort. But the more I watch what’s happening, the more convinced I am that friction is not always the enemy. Sometimes friction is the signal that something real is forming.
The sentence that won’t land. The paragraph that feels dead but won’t tell you why. The contradictory feeling you haven’t sorted out yet. The urge to abandon an idea because it’s getting too close to something you’d rather not look at.
Those aren’t time-wasters; they are the actual work.
AI is incredibly good at helping us bypass that. But sometimes the very thing that makes a draft hard to complete is the thing that would have made it worth reading.
Third: The Interrogation Clause.
If I use AI, it should challenge me, not just assist me.
This one matters because the default relationship most people have with AI is basically cooperative smoothing. Make this better. Tighten this up. Rewrite this. Expand this. Make it flow. And sure, there’s a place for that. But if that’s the whole relationship, the machine becomes a permanent comfort device. A compliance engine. A tool for maintaining momentum without increasing depth.
That’s not enough for me.
I want a tool that can expose my laziness, not just help me hide it. I want it to point to the place where I’m generalizing because specificity would cost me more. I want it to ask where I’m being careful. Where I’m resolving tension too quickly. Where I’m reaching for polish because I haven’t earned clarity yet.
That is a much healthier use of AI.
Not “help me say this.”
More like: “Show me where I’m flinching.”
Then there’s The Voice Integrity Clause.
If it doesn’t sound like me, it doesn’t ship.
Simple enough, but harder than it sounds.
Because AI is now very good at producing writing that sounds good in the broadest possible sense of the word. Smart. Smooth. Persuasive. Nicely shaped. Emotionally competent. But good in that sense is often exactly the problem. It can preserve the topic while draining the voltage. Preserve the point while flattening the cadence. Preserve the message while deleting the actual mind.
And that deletion is subtle.
It doesn’t always look like bad writing. Sometimes it looks like improved writing. More readable. More polished. More complete.
But therein lies the irony. It’s almost as if the smoother it is, the weaker the human signal behind it.
For me, voice is not a decorative layer. It’s not seasoning. It’s not “make it sound more like me.” Voice is the pattern of thought revealing itself in real time. The turns. The compression. The overemphasis. The refusal. The obsession. The sentence that goes a little too far and survives because it needed to.
If that disappears, I don’t care how polished the output is.
Next comes The Delay Clause.
Not every thought should be externalized immediately.
This one feels bigger than writing, honestly. This feels civilizational.
We are becoming very accustomed to instant cognitive export. We have a thought (or half of one) and before it has had time to deepen, mutate, or irritate us, we hand it over for completion. We collapse the incubation window. We interrupt the percolation phase.
And when that happens, we never find out what would have happened if we had stayed with the thing a little longer.
And I think we’re going to pay for that.
Some thoughts are not ready for language the moment they arrive. Some feelings have to sit there and make you uncomfortable before they reveal what they actually are. Some ideas need time before they become yours.
Then there’s The Anti-Predictability Clause.
I don’t want to become easy to model.
This is the clause that feels strangest, and maybe most important.
Because the more we adapt our thinking to what AI responds well to, the more predictable we become. We turn into these organisms that are easier to complete. Easier to imitate. Easier to study.
That’s not some sci-fi fear. It’s already happening in ordinary ways. You can hear it in the cadence. See it in the structure. Feel it in the bland competence that’s spreading everywhere. Everybody sounds vaguely capable while fewer people sound like themselves.
Then comes The Cost Clause.
If nothing in this costs me, something is probably missing.
I don’t mean performative suffering. I mean real stakes. Exposure. The slight danger of saying it the way only you would say it. The possibility that the cleaner version is actually the less honest version. The chance that what gives the piece its life is the very thing a machine would have sanded off in the name of improvement.
Readers feel cost, even when they don’t have language for it.
They know when a sentence came too easily. They know when something has been polished past the point of contact. They know when the writer stayed safe and called it clarity just so they could be done.
And finally: The Tool Positioning Clause.
AI must stay downstream from my thinking, not upstream of it.
That’s the whole line, really.
I’m not interested in absolutist rules. I’m not trying to prove my virtue by doing everything manually. I use AI. I will keep using AI. But I do not want it occupying the first position. I do not want it generating the premise, framing the feeling, structuring the argument, and then handing me something to “make mine.”
That is too late.
At that point I’m not really authoring. I’m selecting. Editing. Curating. Decorating, maybe.
And for some people, maybe that’s enough.
But it isn’t enough for me.
So that’s the constitution as it stands right now.
A private set of non-negotiables. A line in the sand. A way of using the tool aggressively without slowly becoming more like it. A way of protecting the parts of thought that should remain unresolved, unformatted, and internally governed long enough to become real.
That’s what The Unpromptables are.
And I have a feeling that in the next few years, the people who remain most distinct will not be the ones with the most advanced prompting frameworks.
They’ll be the ones who know exactly what they refuse to prompt before they’ve had a chance to think it through themselves.
That’s the edge.
And now more than ever, I think that’s the fight.