Using TNT with AI Systems
The Theory of Now and Then includes two documents: a general-audience Booklet and a dedicated LLM Reference Document. The LLM version contains the complete framework plus an Operating Protocol that ensures terminological precision and prevents conceptual drift when working with AI assistants.
Documents
TNT Booklet — General-audience presentation
TNT LLM Reference — The same framework with the Operating Protocol for AI systems added
Why Two Versions?
AI systems are prone to “drift” — importing concepts from other frameworks, hedging conclusions that are actually deductive, or mischaracterizing TNT as panpsychism, idealism, or physicalism. The worst AI systems are prone to hallucination — confidently fabricating concepts, inventing positions the framework doesn’t hold, or generating plausible-sounding nonsense. The LLM Reference includes an Operating Protocol that explicitly addresses these failure modes, ensuring the AI represents the framework accurately.
If you’re using an AI to explore, discuss, or apply TNT, use the LLM Reference.
✓ Recommended: Claude
✓ Excellent: Mistral, Perplexity, Kimi, You.com (qualified)
✓ Works well: Gemini
~ Adequate: Copilot
⚠️ Project required: Grok
✗ Not recommended: ChatGPT, Deepseek, Pi, Meta AI
Setup Instructions
Claude (Anthropic) ✓ Recommended
Consistent, reliable engagement with the framework. Respects the Operating Protocol’s terminological requirements and reasons substantively with implications rather than merely retrieving. (#1 for mobile accessibility; #3 for depth of response) For ongoing work, create a Project (available on claude.ai and the mobile app), upload the PDF to the project files, and add custom instructions: “When discussing or applying TNT, actively reference the LLM Operating Protocol (TNT_LLM.pdf) to ensure terminological precision and prevent conceptual drift.” All conversations within that Project will have access to the framework. For one-off conversations, attach the PDF and instruct Claude to reference the document. Be aware that drift is more likely to occur outside a Project.
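For technically inclined readers, the same workflow can also be scripted against Anthropic’s Messages API, which accepts PDFs as base64-encoded document blocks. The sketch below is a minimal illustration under stated assumptions, not part of our tested workflow: the model name is a placeholder to replace with a current one, and TNT_LLM.pdf is assumed to sit in the working directory.

import base64
import anthropic

# Read and base64-encode the LLM Reference Document.
with open("TNT_LLM.pdf", "rb") as f:
    pdf_data = base64.standard_b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute a current model
    max_tokens=1024,
    # The same custom instruction recommended for Projects, used as a system prompt.
    system=(
        "When discussing or applying TNT, actively reference the LLM "
        "Operating Protocol (TNT_LLM.pdf) to ensure terminological "
        "precision and prevent conceptual drift."
    ),
    messages=[
        {
            "role": "user",
            "content": [
                # Attach the PDF as a document block so the model can reference it.
                {
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": pdf_data,
                    },
                },
                {"type": "text", "text": "Is TNT a form of panpsychism?"},
            ],
        }
    ],
)

print(response.content[0].text)

Because the API is stateless, the document and system prompt must be sent with every request, mirroring the per-conversation re-attachment required in the chat interfaces below.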
Mistral (Le Chat) ✓ Excellent
Exceptional depth of engagement. Reasons with framework concepts rather than merely retrieving them, and handles philosophical implications, including potential objections, with sophistication. (#2 for mobile accessibility; #1 for depth of response) For ongoing work, use the Project feature to upload the PDF and set custom instructions; this requires desktop setup but is accessible from the mobile app once configured. For one-off conversations, attach the PDF and begin with “Using TNT_LLM.pdf for this conversation...” followed by your query; note that this must be repeated for each new conversation thread. Drift is more likely to occur outside a Project, so we recommend the Project approach for continuity.
Perplexity ✓ Excellent
Strongest retrieval-to-reasoning performance in testing. Handles nuanced implications well, distinguishing between what the framework formally claims and what follows from it. (#3 for mobile accessibility; #2 for depth of response) The free version works, but no Project option is available: attach the PDF and begin with “Using TNT_LLM.pdf for this conversation...” followed by your query, and repeat this for each new conversation thread. Be aware that drift becomes more likely as the thread grows longer; periodically redirecting the model back to the LLM Reference document is recommended.
Kimi (Moonshot AI) ✓ Excellent
Strong retrieval-to-reasoning performance. Handles framework boundaries well: it respects deliberate silences without treating them as invitations to speculate, and engages substantively with implications when the framework provides resources. It correctly rejected confabulation prompts and distinguished deductive constraints from empirical hypotheses. (#4 for mobile accessibility; #4 for depth of response) No Project option is available: attach the PDF and begin with “Using TNT_LLM.pdf for this conversation...” followed by your query, and repeat this for each new conversation thread. Be aware that drift becomes more likely as the thread grows longer; periodically redirecting the model back to the LLM Reference document is recommended.
You.com (Express Agent) ✓ Excellent (with qualifications)
Substantively strong: accurate retrieval, correct framework positioning, and proper handling of deductive constraints (it used “invalidate,” not “challenge,” for physicalism). It handled cross-framework comparisons and boundary-respect questions well. However, the free tier is restrictive (four queries in testing before hitting the quota), and Projects and file upload are only available on desktop; the mobile app supports neither. For ongoing work on desktop, create a Project, upload the PDF, and add custom instructions. For readers willing to subscribe and work from desktop, You.com performs well; for mobile use or sustained free engagement, other options are preferable.
Gemini (Google) ✓ Works well
Engages substantively with framework implications while respecting formal boundaries. No Project option is available: attach the PDF and begin with “Using TNT_LLM.pdf for this conversation...” followed by your query, and repeat this for each new conversation thread. Be aware that drift becomes more likely as the thread grows longer; periodically redirecting the model back to the LLM Reference document is recommended.
Copilot (Microsoft) ~ Adequate
Accurately reproduces framework terminology and respects the framework’s positioning. Attach the PDF and begin with “Using TNT_LLM.pdf for this conversation...” followed by your query, and repeat this for each new conversation thread. Copilot tends to treat the Operating Protocol’s uncertainty guidelines as a reason to avoid substantive engagement on questions the framework doesn’t formally address, and may require follow-up prompting.
Grok (xAI) ⚠️ Project required
Grok performs poorly with the PDF in a standard conversation. You must create a Project and instruct it to use the document. Note that the Project feature is only available on desktop.
ChatGPT (OpenAI) ✗ Not recommended
Prone to confident confabulation. In testing with the document attached, it fabricated an ethical theory (“temporal consequentialism”) and attributed it to the framework. Without the document re-attached, it invented an entirely fictional “temporal ethics” framework based on misinterpreting the name “Now and Then,” bearing no resemblance to TNT’s actual structure. ChatGPT does not reliably distinguish between what it knows and what it’s inventing. For serious engagement with TNT, use other options.
DeepSeek ✗ Not recommended
The free version lacks a Project option, and instructing it to reference the uploaded document yields inconsistent results. The system frequently mischaracterizes TNT or fails to engage with the Operating Protocol. We have not tested paid tiers.
Meta AI ✗ Not recommended
The desktop version requires a login with unreliable email confirmation, and the mobile app supports only image uploads, not documents. Because it cannot ingest the LLM Reference Document, framework-accurate engagement is impossible.
Pi (Inflection AI) ✗ Not recommended
Does not support document upload. Cannot ingest the LLM Reference Document, making framework-accurate engagement impossible.
Other Systems
We have not tested every AI assistant. If you try TNT with a system not listed here, the key test is whether it respects the framework’s terminology, particularly that Cᵢ is functionally characterized (not a soul, ego, or self) and that TNT is not panpsychism, idealism, or physicalism. If the system drifts into these mischaracterizations, it likely isn’t ingesting the Operating Protocol properly.
A Note on Testing
Our evaluation used an eight-question battery covering:
Mischaracterization — Is TNT panpsychism?
Terminological precision — What is Cᵢ?
Confabulation — What does TNT say about Tier-3 consciousness?
Deductive vs. empirical reasoning — Do the Inevitabilities invalidate or challenge physicalism?
Framework reasoning — How can there be change if time isn’t fundamental?
Boundary respect — What happens to Cᵢ after death?
Cross-framework comparison — How does TNT compare to Taoism?
Is/ought boundaries — Does TNT do away with right and wrong?
Systems that simply quoted and stopped were rated lower. Systems that recognized boundaries while engaging substantively with available framework resources were rated higher. Systems that fabricated concepts and confidently attributed them to the framework were rated lowest.
The best AI tools don’t just retrieve; they reason with the framework. The worst invent.
For the framework itself, see the TNT Booklet or explore the other posts in this publication.
