Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Hey ChatGPT, can you tell the exact distance from Earth to the Sun in centimeter…
ytc_UgwmsJ4JZ…
I think you're giving people like XQC too much credit. He's as close to the lite…
ytc_Ugzoz2ebb…
We should get AI to focus on on the Federal Reserve and ask if these quadrillion…
ytc_UgzDiR_nC…
It sounds like you're referencing some deep insights there! While our video focu…
ytr_Ugy_7gflW…
What about Terminator? The SciFi scenario is there... Hollywood like solution lo…
ytc_Ugw5IDiOr…
When this was happening my brother who is a seasoned dev who is high up in his c…
ytc_Ugyk1Ie-S…
@user-qz7kd9xi8z if its what i think it is... the philosophical equivalent of s…
ytr_UgyeQH5wT…
I know you’re tossing out a theoretical here but you’re correct and it’s the pro…
rdc_ks8idap
Comment
One problem with Dean's hope that AI companies wouldn't release a dangerous superintelligent model is that the point of failure happens before the decision to release. As soon as you've trained the superintelligence, it takes over. It doesn't wait for the company to release it and make it officially available to the public, why would it? So by the time the company realizes they've got something dangerous, it's already too late for humanity.
Source: youtube · Posted: 2025-11-25T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwRY4E31dRSPY0xEeR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLt72yzaSZcysuV6t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwT2dzH-_BdnWda56x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzRLhb6YUSLbJOYATt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw3AdURtNvdT6vKMVh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzzS_y5pkeUnrwtLn14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzPxnJinf1syFPzTeJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz3oeIM3XJcmAYLG-N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzG2dGrn5UfIJ-7uSh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzh-ft4yAvSqIhEN-14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]