Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
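As a rough sketch of what the ID lookup does behind the scenes, assuming the coded records are stored one JSON object per line in a hypothetical `coded_comments.jsonl` file (the file name and helper function are illustrative, not the project's actual code):

```python
import json

def lookup_by_comment_id(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the coded record for a comment ID, or None if it was never coded."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: look up the first comment in the raw response shown further down.
print(lookup_by_comment_id("ytc_UgxkM0IHd5vmsuy4a0d4AaABAg"))
```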
Random samples — click to inspect
- "He is right China will not slow down. China would like the U.S to slow down. C…" (ytc_UgxdCjYug…)
- "Everytime time I've tried to call At&t support in past, Im on the phone for an h…" (ytc_Ugxxqn9Tg…)
- "Where humans create, they destroy the untouch nature creations. But by A.I. the …" (ytc_UgweFb3X-…)
- "Who does AI benefit? Why are we rushing to create something that brings little t…" (rdc_kqsyt6m)
- "@zip10031my guy, not only is ai bad for the art community and the environment, …" (ytr_Ugw8z61PF…)
- "Yes! This. Sociopaths own ai, sociopaths train ai. late last year as I heard all…" (ytr_UgzMtqpvd…)
- "It kinda looks like the robot that sings I feel fantastic and bro my name is in …" (ytc_UgxcAYYMY…)
- "Just have to say the point on your inclusion of the hills from where you grew up…" (ytc_UgwUK1HGt…)
Comment
The issue is, once an AGI has a goal, it will do whatever it possibly can to achieve that goal. Thus, if it knows someone wants to deactivate it, it will do anything it can to prevent deactivation, simply because that would mean it would not be able to achieve it's goal. The logic here is very simple. This is precisely why researchers and developers are worried about an AI doomsday.
youtube · AI Harm Incident · 2025-07-23T18:3… · ♥ 263
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
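For anyone who wants to handle these fields in code rather than read them off the table, a minimal sketch of the record behind this display could look like the following. The dataclass and field names are assumptions for illustration, not the project's actual schema; the values are copied from the table above, and the ID is the entry in the raw response below that carries exactly those values. The example values in the comments are those observed in this batch, not necessarily the full codebook.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment across the four coding dimensions."""
    comment_id: str
    responsibility: str  # e.g. "ai_itself", "company", "developer", "distributed", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "indifference", "mixed"
    coded_at: datetime   # when the coding run produced this record

result = CodingResult(
    comment_id="ytc_UgxlAH1eK1vHw8poWyx4AaABAg",  # matching entry in the raw response below
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```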
Raw LLM Response
[
{"id":"ytc_UgxkM0IHd5vmsuy4a0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzj-_RIxLnlmiEIXVN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQfGzEvriYzQ992wl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFFhbuPg7pzvq2ReN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1g7e4MWS_bnGf1X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx9dFA9oJUsSA3FfZl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgywGzUlMgJFYEAZz4t4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwG-aGds1kG4-szlql4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxlAH1eK1vHw8poWyx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzAFidDeLFsg4jdmep4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
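Because the model returns one JSON array per batch, recovering the per-comment coding from a raw response is plain JSON parsing plus a sanity check that every record carries all four dimensions. A minimal sketch, assuming the raw text is available as a string; the validation rules are illustrative, not the project's actual checks:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw_response: str) -> dict:
    """Map comment ID -> coded dimensions from one raw batch response."""
    coded = {}
    for record in json.loads(raw_response):
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {record.get('id')!r} is missing keys: {missing}")
        coded[record["id"]] = {k: v for k, v in record.items() if k != "id"}
    return coded

# The coding result shown above is the entry under its comment ID in this mapping.
```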