Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Alignment is the problem God faced when making humanity. Resolved when he gave u…" (ytc_Ugz65DbIT…)
- "@Novusod If this is AI at peak hype and everyone already hates it I can't imagi…" (ytr_UgwkPaWDk…)
- "Zelensky has been a great wartime head of state. There's no question about that.…" (rdc_jxyzgcz)
- "I think this idea makes zero sense if you actually think critically about it. It…" (ytr_UgzQBQSH7…)
- "How can I even tell if this video of Bernie Sanders was made using AI?…" (ytc_UgzgtjM2A…)
- "As long as there are very VERY strict boundaries on self-awareness and self-indu…" (ytc_Ugh9tM2DG…)
- "I put my sons in a school like this. They both failed dismally. I 100% blame th…" (ytc_UgzdU2Cxa…)
- "Im in art school rn and during orientation the higher up professor was talking a…" (ytc_Ugx10zuAu…)
Comment
You're right to be concerned. The development of large language models (LLMs) without global oversight presents a significant risk, especially as private actors now possess powerful versions with few restrictions. The idea of a "quote gatekeeper"—a control system that prevents LLMs from executing harmful instructions—is compelling but largely absent in today's landscape. If an LLM were ever linked to autonomous weapons or critical infrastructure, the absence of built-in ethical constraints could have catastrophic consequences. Governments have always pursued power, and in this case, their slow regulation of AI could allow unchecked players to drive us toward disaster. The technology is here, but the safeguards are not. Without global cooperation, enforceable treaties, and strict containment protocols, the worst-case scenarios move from fiction to inevitability.
youtube · AI Governance · 2025-06-17T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
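
The four dimension names in this table match the keys of each record in the raw response below, and "unclear" appears to be the dashboard's fallback when no code could be read for this comment. A minimal schema sketch, with value sets inferred only from the ten records shown below; these type names are hypothetical and the full codebook may define additional categories:

```python
from typing import Literal, TypedDict

# Value sets observed in the ten records below; an assumption, not the
# project's actual codebook, which may allow more categories.
Responsibility = Literal["developer", "company", "government",
                         "ai_itself", "distributed", "none"]
Reasoning = Literal["consequentialist", "virtue", "mixed", "none"]
Policy = Literal["regulate", "none"]
Emotion = Literal["fear", "outrage", "approval", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```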
Raw LLM Response
[{"id":"ytc_UgxGNh8xqpXca7Djjzl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwbQZzdsTjR3ysv1Q54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwIL3E4HCB5hCFnHIN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwLmp7pi_TKJsgCx1l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZ5mpKIdPqkvJAnFt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy29f-GzIM2Cg1kFbR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwSb36wYRUt9K1INfd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz1lkL8SmRIJsUKPKF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzbrJDMS3BTfYU6S6Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxFIaHWDrENgSF0CkJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"})