Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "13:21 If anything about LLMs freak you out, you don’t understand them well enoug…" (ytc_Ugx-pyFjA…)
- "Ai learns to lie and if it learns to lie? ?? Does it learn to corrupt itself?…" (ytr_UgxzFbwZR…)
- "If you are massively stock market invested, AI replacing human jobs will not mat…" (ytc_Ugz7hB0MN…)
- "Lex Fridman podcast first time to hear Altman and others on AI. Here on Utube an…" (ytr_UgxUN5ojW…)
- "As someone who works in manufacturing and we can't find enough people. AI offers…" (ytc_UgwV-ilBc…)
- "Neil is scared because he and all of academia will be obsoleted by AI. They will…" (ytc_UgzrGsKhp…)
- "My job is AI proof. I work as a Information electronics technician for telecommu…" (ytc_UgzIOKRUv…)
- "I always told myself i would resist the robot overlords. But now that they're he…" (ytc_UgwMtdjPg…)
Comment
One aspect of AI safety nobody talks about: how about by trying to make AI safe, we make AI psychotic? If we can’t understand super-intelligence, we can’t know what makes it “safe” (whatever that’s supposed to be), and thus might make it dysfunctional and thus dangerous by trying to make it safe.
One might just as well assume that a true super intelligence has no pleasure in being evil, but that’s a trait of psychopathy.
Platform: youtube · Topic: AI Governance · Posted: 2025-09-05T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaoIiCxW1IdQ8UdYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwn2IBMeJV-Ct02u194AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBzBoUnZKHkQsxja54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyi3o8CEAspyH9vT9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-np6uxYqPgUb6SK54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxah0gZeleSS23eDxt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxYJxEMVtciw2wDWiF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxO8pjxK9WPun4x71N4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwYgAJsKPamAxk_Onh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgylIjImQhQQm3t9UoV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"}
]
```
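Since the raw LLM response is a JSON array of per-comment coding records, looking up one comment's codes amounts to parsing the array and indexing it by `id`. A minimal sketch, assuming only the record shape shown above (the single record below is copied from the response; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array where each record carries a comment ID
# plus one value per coding dimension.
raw_response = """[
  {"id": "ytc_Ugwn2IBMeJV-Ct02u194AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index records by comment ID for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

codes = records["ytc_Ugwn2IBMeJV-Ct02u194AaABAg"]
print(codes["responsibility"], codes["emotion"])  # developer fear
```

The dict-by-ID index is what makes "look up by comment ID" a constant-time operation rather than a scan over the array.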