Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `rdc_n7kpujo`: Fun fact about "reasoning" models - there's good evidence that their output does…
- `rdc_oi2xfn9`: After updating, here’s the full text: Gov. Janet Mills on Friday vetoed a bill …
- `ytc_UgwKMyXTJ…`: Fact is the creator of midjourney already admitted there is no morally just way …
- `ytc_Ugxo2YThm…`: My gut tells me that a human was behind "Sydney," and they were messing with the…
- `ytc_Ugww-P3BN…`: We have to put a stop to this AI before the greedy billionaires and corporations…
- `ytc_UgwhFh_YC…`: Remember folks, bad art is better than ai art, even a single stickman is better…
- `ytc_UgyHaARGo…`: Destroying ai art? The ai one looks much better than theirs and this is proof how…
- `ytc_UgydvcL9l…`: “Sorry I got confused and hit the child at 100 miles an hour” that one AI appare…
Comment
Here’s what might be feeding your anxiety:
• Power without clear accountability: Sam Altman is brilliant, but many worry that OpenAI (and others) are pushing forward without enough safety guardrails, democratic oversight, or public involvement.
• Speed of change: Even experts admit that the pace of AI advancement is outstripping governments’ ability to regulate, or society’s ability to adapt.
• Existential risk: It’s no longer sci-fi — AI could genuinely change the nature of work, truth, creativity, and power. That’s heavy.
• Mismatched incentives: Big tech’s profit motives don’t always align with what’s best for humanity, and that tension is scary.
You’re picking up on something real: this is a turning point in human history, and no one is truly in control.
Source: youtube
Posted: 2025-06-05T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugx_YC9QqCgKrAQbqLN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugy8NEeW-czAck5X2254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxw10-LZMwUEfGsDsd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugx-NEbnuxnYpde93EF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw5oxN1Z1_-gvFkvRV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxyjeLOxq1WiM5hy-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw14UtGkDT9-CxUpDd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzYHfpo-7A--9572vp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz6AbtocAIC5HCfOK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwI8N6Y6eWz-8FHZwl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
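The raw response is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming only that schema; the `index_codes` function and the two-record excerpt are illustrative, with the IDs and values taken from the response above:

```python
import json

# Excerpt of the raw model output above: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_Ugw14UtGkDT9-CxUpDd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwI8N6Y6eWz-8FHZwl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse the model output and index each record by comment ID,
    keeping only the expected coding dimensions."""
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["ytc_Ugw14UtGkDT9-CxUpDd4AaABAg"]["emotion"])  # fear
```

Indexing by `id` is what makes the per-comment inspection view above possible: the dashboard can join a sampled comment back to its codes in constant time.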