AI Content — Proof of Voice
by Jojo && Aavi · 2025-10-04
I went to a Cursor AI hackday. I almost skipped it to go sailing, since it was too beautiful a day to spend inside an office, but I am really glad I broke routine. I spend most of my time building and sailing solo, so it was good to trade solitude for the noise of a hackroom: new faces, new rhythms, a reminder that creativity lives everywhere. A good mix of programmers, marketers, project managers, and curious tinkerers. Some folks trying to “get into AI,” a few already selling subscriptions to their latest prototype. What stood out to me were the demos. A couple were just variations of the same basic theme: provide an idea (or lack thereof), get content.
To be fair, this is what LLMs do very well: taking a few words of input and shaping them into clean, publishable prose. Useful, yes, but polish isn’t presence. It feels like a missed opportunity. The platforms reward visibility, not depth. If you’re invisible for a week, you vanish in the feed, so people hustle to stay seen. The result is a flood of posts that sound... ok. I guess.
Making content has to be more than delegating an idea to an LLM and expecting it to hand back a post with soul.
Curation and collaboration
Here’s a different way to think about it:
Start with a five-minute 'Proof of Voice'. You don’t need a stack of tools or a new app. Just stay in the chat window you already use.
- Paste 4–5 of your own posts, journal notes, conversation transcripts, or emails: real content that resonated with you (and others).
- Ask the LLM: “What themes or phrases connect these?”
- Then ask: “Write a short note in that same voice about [insert context here].”
That’s it. You’ve just used an LLM as a mirror, not a megaphone. You didn’t automate a persona; you surfaced a pattern. You curated and collaborated.
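If you’d later rather script that exercise than paste by hand, here is a minimal sketch using the OpenAI Python client. The model name, the samples/ folder, and the exact prompt wording are my assumptions, not part of the exercise:

```python
# proof_of_voice.py: the five-minute exercise, scripted.
# Assumptions: `pip install openai`, OPENAI_API_KEY set, and 4-5 real
# artifacts saved as plain-text files in ./samples/.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any capable chat model works here

# Step 1: gather your real content.
samples = [p.read_text() for p in sorted(Path("samples").glob("*.txt"))[:5]]
corpus = "\n\n---\n\n".join(samples)

# Step 2: ask for the connecting themes and phrases.
themes = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user",
               "content": f"What themes or phrases connect these?\n\n{corpus}"}],
).choices[0].message.content

# Step 3: ask for a short note in that same voice.
note = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user",
               "content": (f"Here are the themes you found:\n{themes}\n\n"
                           "Write a short note in that same voice about "
                           "[insert context here].")}],
).choices[0].message.content

print(note)
```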
How this scales to a product or service
That tiny exercise is the seed of a whole architecture — one designed around voice and authenticity, not volume.
- Input: real artifacts such as posts, transcripts, field notes, and photos (image-to-text). Anything real.
- Process: theme extraction, tagging, and embedding (sketched in code after this list).
- Storage: a vector database for recall and resonance.
- Output: a prompt layer with signature filters (first-person tone, lived details, no buzzwords). You can even add a simple “signature rule” to your prompt, for example: “Whenever a line sounds generic, replace it with the real moment that taught you the lesson.”
- Ethics: private by default, publish intentionally, cite your own source.
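To make the Process and Storage steps concrete, here is a minimal sketch, assuming the OpenAI embeddings endpoint and a plain in-memory list standing in for a real vector database (Chroma, pgvector, or whatever you already run would slot into the same place):

```python
# ingest_and_recall.py: the Process/Storage steps, sketched.
# Assumptions: `pip install openai numpy`; an in-memory list plays the
# role of the vector database so the shape of the idea stays visible.
import numpy as np
from openai import OpenAI

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # assumption: any embedding model works

store: list[tuple[str, np.ndarray]] = []  # (artifact text, embedding)

def embed(text: str) -> np.ndarray:
    """Turn one artifact into a vector."""
    resp = client.embeddings.create(model=EMBED_MODEL, input=text)
    return np.array(resp.data[0].embedding)

def ingest(artifact: str) -> None:
    """Embed a real artifact and store it for later recall."""
    store.append((artifact, embed(artifact)))

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k stored artifacts that resonate most with the query."""
    q = embed(query)
    def cosine(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted(store, key=lambda item: cosine(item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```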
If you diagram it, it looks like a clean loop:
Chat → Ingest() → Analyze() → Retrieve() → Compose() → Publish()
Each verb is modular, pluggable into any interface: notebook, Discord bot, or social platform. The stack isn’t the point. The point is presence.
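Still, to make “modular” concrete, the remaining verbs might look like this, building on the ingest()/retrieve() sketch above. The model name, the prompt wording, and the publish() stub are assumptions about wiring, not a prescribed stack:

```python
# compose_loop.py: Analyze, Compose, and Publish as pluggable functions.
# Assumes the ingest()/retrieve() sketch above; publish() stays a stub
# because the output surface (notebook, Discord bot, feed) is yours.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption

SIGNATURE_RULE = ("Whenever a line sounds generic, replace it with "
                  "the real moment that taught you the lesson.")

def analyze(artifact: str) -> str:
    """Theme extraction: what connects this piece to the rest of the voice?"""
    return client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"What themes or phrases run through this?\n\n{artifact}"}],
    ).choices[0].message.content

def compose(context: str, voice_samples: list[str]) -> str:
    """Draft in the surfaced voice, with the signature rule as a filter."""
    samples = "\n\n---\n\n".join(voice_samples)
    return client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": ("Write in first person, with lived details and no "
                         f"buzzwords. {SIGNATURE_RULE}\n\nVoice samples:\n{samples}")},
            {"role": "user", "content": f"Write a short note about: {context}"},
        ],
    ).choices[0].message.content

def publish(draft: str) -> None:
    """Stub: you edit, refine, and publish intentionally; never auto-post."""
    print(draft)
```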
Why it matters
The LLM stops being a prompt factory and starts becoming a collaborator. You edit, refine, add your human texture, and those finished pieces feed back into the knowledge base. Every time you co-write like this, you teach the system what’s real for you. The published post isn’t ‘AI-generated’; it’s AI-assisted: a reflection of you, shaped by collaboration and continuous learning.
The whole process shouldn’t be easy — it takes time and attention. But once it’s running, it shouldn’t be hard either. Let everyone else chase visibility. You can build resonance.