Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "We need to destroy A.I now , NOW !!! Before it's too late !!! Seriously !!…" — ytc_UgxwN1WYI…
- "@vio@violentdeer’s easy to say when your listening to someone referring to an ex…" — ytr_UgzMZOlMC…
- "With current AI one would have to type in extremely detailed prompts but even th…" — ytc_UgwRb-82I…
- "I was sure this test will go ok as soon as I started this video. Curvy road is n…" — ytc_UgwmoCErV…
- "AI isn’t outta control yet. The world leaders and governments are way outta cont…" — ytc_UgxnQhQvc…
- "@droppedcombofiend2707 , you're exactly right. They are two completely different…" — ytr_Ugyct6CVG…
- "Will there be AI wars? One country's AI attacking another country 's AI.... Mali…" — ytc_UgyOvr2VR…
- "Part of me might want one that looks like my mom. I miss her very much. But I kn…" — ytc_UgwZZjwOZ…
Comment
So by Harari's reasoning an AI told not to do anything stupid by humans would observe whether humans do stupid things and if it judged they did, would copy them and do stupid things. Its first thought might be how stupid it appears that intelligent life creates machines capable of destroying itself. Then it might copy that example too...
youtube · AI Governance · 2025-07-22T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwBmIud28l_qtSRqT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJ-0xOEbWLoLhSZFV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzL9HzXDKb9ha0i6Rl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgztrpcDLVlnizzWtht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7vzxbFJBL1M3BlBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6M_WEJeFM6BYGLfh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxdfryuQSOLLmxPnfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgznzNf6vSFBqlh3F4R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFPPFg-dMfhv7Z8cd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwoE1RjmODEQJwnjZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
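Looking up a coding by comment ID, as this page does, amounts to parsing the raw JSON array and indexing the records. A minimal sketch of that step follows; the allowed value sets for each dimension are inferred from the responses shown above, not from the actual codebook, and the comment ID in the usage line is hypothetical.

```python
import json

# Coding dimensions and the value vocabularies observed in responses on
# this page. These sets are an assumption; adjust them to the real codebook.
ALLOWED = {
    "responsibility": {"none", "unclear", "distributed", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "fear", "mixed", "outrage", "indifference", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID,
    rejecting any record with a value outside the expected vocabulary."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = index_codings(raw)
print(coded["ytc_example"]["emotion"])  # fear
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents an off-codebook label, before the bad value reaches the results table.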