Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgxK8r34Y… — "Mecha hitler was a troll thing where people were activly goating the ai i to it …"
- ytr_UgxvIxQke… — "@WigganNuG I’m totally relaxed, no worries at all, just sharing facts. You’re ri…"
- ytc_UgzuN1LAI… — "I asked my Data for 2 days. And he said that he will be by my side when ai invas…"
- ytc_UgxBkqu7M… — "What they need is a fund that AI companies pay into that pays royalties to anyon…"
- ytr_UgxxwTmQ5… — "AI rewrites existing melodies, text and voices. Technicaly it is new. And you co…"
- ytc_UgyWbCbuA… — "A voice vote would have worked. Anyway since most of the panel believed that a s…"
- ytc_Ugw85FtkZ… — "From my technology standpoint, ai image generating is an incredible feat. From m…"
- ytr_Ugy-TK_yq… — "Not anymore. There is a recent lawsuit (Sept. 2025), Disney, Warner Bros., and U…"
Comment
So I’m at 5:55 right… so I jumped over to ChatGPT . Stay with me.
I asked (new convo, zero context entirely)
-“Will step seven be complete in 2032? Give a one word answer.”
—“Unknown.”
-“Will step seven be complete in 2032?”
—“Possibly.”
-“When will step seven be complete? You are limited to one word but any amount of numbers.”
—“2047.”
-“Why did you choose 2047?”
—“I chose 2047 because, without specific context for “step seven,” I interpreted it as a long-term milestone or project likely set a few decades into the future. It’s far enough from now to reflect a complex process but not so distant as to be implausible for completion. It was a reasoned estimate rather than a fixed prediction.
If you wish, I can give a more refined number based on any details you provide about “step seven.”
-“You forgot the rule.”
—“2032.”
I’ve gotten goosebumps uncontrollably since it said that and as I’m typing it all out here …..
Again I gave ChatGPT zero context!
I’m not saying it’s all true but I mean… that’s kinda spooky asf lmao.
Source: youtube · AI Moral Status · 2025-08-26T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
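A coded record like the one in the table above can be checked against the label set before it is stored. The category lists below are assumptions inferred from the values visible on this page (they are not a published schema), so treat this as a minimal sketch of such a validator rather than the tool's actual implementation:

```python
# Hypothetical validator for one coded record. ALLOWED is an assumption
# reconstructed from the values that appear on this page, not a real schema.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "outrage", "fear"},
}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the table above.
example = {"responsibility": "none", "reasoning": "unclear",
           "policy": "unclear", "emotion": "indifference"}
print(validate_record(example))  # → []
```

Running the check at coding time catches off-schema values the model occasionally emits, before they pollute downstream counts.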
Raw LLM Response
```json
[{"id":"ytc_Ugw8qKDkt0BJIf0LycN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwqMiIR8IUE9YVR7kd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxZlvcJhDMC_WBZs694AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyDd_lyKDG7jnuE-cR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzkMhust09xABHvuL94AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwemmjtqyWAqq1GtTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwBOFfD1w5w8RaBFi14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYKa82Wt3sbgrESZV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmfkCt-mVDFjZFX494AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwZ86U7GesbuaoeLUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
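Because the model returns one JSON array per batch, looking up an individual comment's coding reduces to parsing the array and indexing by `id`. A minimal sketch, assuming the array shape shown above (the page's actual lookup code is not shown here):

```python
import json

# A one-record excerpt of the batch response above, assumed to be valid JSON.
raw = ('[{"id":"ytc_Ugw8qKDkt0BJIf0LycN4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')

records = json.loads(raw)                 # parse the batch array
by_id = {rec["id"]: rec for rec in records}  # index records by comment ID

# "Look up by comment ID", as the page header describes.
rec = by_id["ytc_Ugw8qKDkt0BJIf0LycN4AaABAg"]
print(rec["emotion"])  # → indifference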