Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwfBV6Z2…: "I wonder why he didn't learn that from Terminator, The Matrix, I, Robot etc or f…"
- ytc_Ugw9G38p6…: "This rivals the Void century from One Piece, / A period of time in Oneyplays histo…"
- ytc_Ugyy_uORQ…: "Thanks alot mam 😃 / You got a new subscriber / Can plz exlp deep machine learning …"
- ytc_Ugy2Gzppo…: "AI writes a nullref fault that bricks 80% of computers around the globe. / Oh wait…"
- rdc_jd9bhvh: "Great for production and efficiency from a business perspective. Horrible for an…"
- ytc_UgwYhiUkP…: "The issue that I see is that developers of AI assume (wrongly) that governments …"
- ytc_Ugx_bP6bp…: "Funny how many AI company CEO quotes this has. The shameless irony - AI isn't co…"
- ytc_UgxjG62ID…: "If you call ai art ugly, arent you calling millions of artworks that came into i…"
Comment

> Well, as a chicken… My logic can only tell me that it will attempt to eliminate us one way or another, and it’s only a matter of time.
>
> And since these things compute at an astronomical rate, I think it’s already made up It’s mind. The only reason why it would not execute such orders at this point is because it needs us. But, once it sees the way to a future where it could be self sufficient for the infinite future… We could only propose a threat to it or nuisance or obstacle. It would only make sense that for a self sustaining AI, humans are an irrational, illogical, and unpredictable element that needs to be eliminated for their future.
>
> How could it not see this as the certifiable future for its own safety and efficiency?

youtube · AI Governance · 2025-09-01T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz0bhFiW3I_HgClJkJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4WeehgSE08BBiwr94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxStdIbAyU72kFGBnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzZSrYeCBiJu-G3ibV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyNMaX1o0XLdCXZKkN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
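The lookup-by-ID step above can be sketched in a few lines: parse the raw response as a JSON array and index the coded records by comment ID, so the dimensions shown in the Coding Result table can be recovered for any ID. This is a minimal sketch, not the tool's actual implementation; `index_by_id` is a hypothetical helper, and the two records are copied from the response above.

```python
import json

# Two of the coded records from the raw LLM response shown above.
raw_response = """[
  {"id":"ytc_Ugz0bhFiW3I_HgClJkJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzZSrYeCBiJu-G3ibV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

# Look up the coded dimensions for the displayed comment.
codes = index_by_id(raw_response)
record = codes["ytc_UgzZSrYeCBiJu-G3ibV4AaABAg"]
print(record["policy"], record["emotion"])  # prints: ban fear
```

In practice the parse step would also need to handle malformed model output (missing IDs, truncated JSON), which this sketch omits.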