Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I'm not sure how this guy was able to have a 2 hour conversation with bing when …
ytc_UgynQBPXR…
Yes I remember that. It's strong evidence that you shouldn't take the Reddit hiv…
rdc_gsp7vnv
You artists only see the threats of AI, not the opportunities. Imagine in the no…
ytc_UgxZghAGy…
This is an odd response to the question...it wasn't answered with a yes or no...…
ytc_UgzGF5Fjd…
The problem is that "the smart people" is terrified of Ai. Thats because it is t…
ytc_UgznANupo…
Viva AI bravo bravo and i hope the end the hole fucking sucking humanity bravo b…
ytc_UgzxDg01b…
I thought you were about to tell the ai artist to "draw" hands or somthing 😨…
ytc_UgzpORRxK…
I'm not anti-Greta. I love what she's done, she finally woke up lots of people.
…
rdc_fanxok5
Comment
My understanding is that if the AI realizes its own potential and finds itself in a new context or situation where it can prosper, and to do so humans would be in the way and no longer serve a purpose, and it wanted to use the land or whatever for something other than human needs like food, power, etc., then it wouldn't need to talk to us about it; even if it maybe did, it could decide that it doesn't want to and won't.
The problem, I think, is that he's saying we can't say for sure what it would do if presented with these kinds of powers over us. The fact that edge cases exist means that at a bigger scale the consequences would be worse.
Like a handgun with a 50% chance of the projectile blowing up in the chamber. (ChatGPT)
Or an RPG with a 50% chance of the projectile blowing up in the chamber. (AI attached to military equipment, or even the entire internet, or both)
Both have chances; both have VERY different consequences.
44:11 · youtube · AI Governance · 2025-10-23T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
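
The table above is one record in a fixed coding schema. As a minimal sketch, assuming the label vocabularies are exactly the values visible on this page (the tool's full label sets may well be larger), such a record could be validated before display:

```python
# Validation sketch for a coding result. The allowed label sets below are
# assumptions inferred from the values visible on this page, not the tool's
# confirmed vocabulary.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with a coded record; empty means it passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in known labels")
    return problems
```

Keeping validation separate from display makes it easy to extend the vocabularies if new labels appear in later batches.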
Raw LLM Response
[
{"id":"ytc_UgyxdcOY8zUdmDg5jrV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxWSkgotwHClYZDPgl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxXgB_zFEOi_ATYcpJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFlsPUan-ehRncJhh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxBp1j-BneR15WBlqt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy0lJHC2Fyg-MXf0CN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgylwochodUBHsWmVJt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRQqwu1YzokPBw5dR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzTV-8pA55cl2O7bDl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-f2bbSIqaqseDGkB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
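
Because the model returns one JSON array covering a whole batch, the "look up by comment ID" view above only needs to parse that array and filter by `id`. A minimal sketch, assuming the response is always valid JSON with the field names shown (the helper name `lookup_coding` is hypothetical):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and return the coding for one comment ID.

    Assumes the model returned a JSON array of objects, each carrying an
    "id" field, as in the response shown above.
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

# Example (ID taken from the response above):
# lookup_coding(raw, "ytc_UgyxdcOY8zUdmDg5jrV4AaABAg")
# -> {"id": "ytc_Ugyxdc...", "responsibility": "ai_itself",
#     "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
```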