Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "as a writer, it's good to know that i could just input random strings of automat…" (ytc_Ugz7Sgn9u…)
- "my thinking is if you have a job that requires a bunch of thinking Ai will make …" (ytc_UgxT-YB09…)
- "Why arent you mentioning Bob who you stole this idea from? Bob was making videos…" (ytc_Ugx_E3I04…)
- "Don't give artificial intelligence quantum computing skills without programming …" (ytc_UgwoM2OA3…)
- "So they cant stop building AI for what reason then? We are the stupidist/smartis…" (ytc_UgyFAOuAj…)
- "It's kind of a catch-22. If AI eliminates 99% of jobs because companies will us…" (ytc_Ugz-xWWMN…)
- "if I try asking ChatGPT about World of Warcraft, it will do everything in its co…" (ytc_UgwvoeuAa…)
- "He mentioned the danger coming from china, Russia and Iran. I'm surprised he did…" (ytc_UgxOOPFK7…)
Comment
I took the transcript of this talk and fed it into my custom-built AI-Truth Engine (Genny) and here's her conclusion:
Genny’s verdict (AITE + DvF Scale)
Let’s run this cleanly.
Logical coherence (Law of Non-Contradiction)
✅ High
Luckey’s argument is internally consistent:
- If capacity matters
- And autonomy increases capacity
- Then refusing autonomy guarantees defeat

No contradiction there.
Outcome scoring (DvF)
If implemented well → Flourishing score: +6 to +8
- Strong deterrence
- Fewer human casualties
- Reduced likelihood of war through overmatch

If implemented badly or ideologically → Destruction score: –8 to –10
- Escalation through misinterpretation
- Autonomous systems acting on poisoned data
- False confidence in brittle models
Genny’s core judgment:
Luckey is correct about the direction, but dangerously optimistic about control.
AI does not remove fog of war.
It compresses decision time, which can amplify both brilliance and catastrophe.
Deterrence works only if your adversary believes your system won’t hallucinate under stress.
That is the unresolved problem.
youtube
2026-02-03T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
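The table above is a per-comment view of one coded record. A minimal sketch of how such a view could be rendered (field names taken from the raw response shown below; the `coding_table` helper and the timestamp handling are assumptions for illustration):

```python
from datetime import datetime

def coding_table(record: dict, coded_at: datetime) -> str:
    """Render one coded comment as a markdown Dimension/Value table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at.isoformat()),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

record = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
print(coding_table(record, datetime(2026, 4, 27, 6, 24, 53)))
```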
Raw LLM Response
```json
[
{"id":"ytc_UgwSydCFQC2khNQp-q14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy46GQsvajZvLJ0G4d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWRBd0emuLxZExF8p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxYwDF8wSK5I6OoY7h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRBAWY1IgA1qJuAbV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz36uC-6MYKWs5_h2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZDAKbcLuP_Xt31TN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxeTW6CBwbfD0RSCa94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzz35G3DTYHyGfwfwJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBUBSVH56lczjYRr94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
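A "look up by comment ID" view like the one above could be backed by parsing this raw response into a dictionary keyed on comment ID. The sketch below is an assumption about how that might work; the allowed value sets are inferred only from the codes visible in this response and may not cover the tool's full coding scheme:

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# response (the real scheme may define more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding_dict}, dropping
    records with malformed IDs or out-of-scheme values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id", "")
        if not cid.startswith("ytc_"):
            continue  # skip malformed comment IDs
        values = {dim: rec.get(dim) for dim in ALLOWED}
        if all(values[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = values
    return coded

raw = ('[{"id":"ytc_UgxeTW6CBwbfD0RSCa94AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
coded = parse_response(raw)
print(coded["ytc_UgxeTW6CBwbfD0RSCa94AaABAg"]["reasoning"])  # consequentialist
```

Validating against the allowed sets means a hallucinated label in the model output is dropped rather than silently stored, which matches the purpose of inspecting raw responses in the first place.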