Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by their comment ID; the entries below are random samples.

Random samples
- "At what point do all the unemployed people get together and burn down the compan…" (ytc_Ugxkr39zD…)
- "I support the AI take over if the world. Humans do not deserve this planet.…" (ytc_UgzBGuqTi…)
- "As a Trump voter myself im really disappointed in him now since hes trying to pa…" (ytc_Ugz5WNDnS…)
- "Guys, more families are removing their reliance on fossil fuel energy we provide…" (ytc_UgyjmzOev…)
- "Not for 100 years ? AI will design a artificial self aware AI, fairly quickly…" (ytc_UgyiLyLd2…)
- "Why does it matter if she's famous or not? Its still ai nudes of her being sent …" (ytc_Ugx3TKGww…)
- "something even funnier. theres still people out there thinking that once ai tak…" (ytc_UgzA_ghzS…)
- "Much of the "business" rationale for AI is a ruse. Groups that are making the te…" (ytc_UgwnIpVzM…)
Comment (quoted verbatim as coded)

> Basically an ai bot might not know what loyalty is so might not be predictable in battle if thirs a flaw in its logic.
> And that likely will be their as if it can reason it might detect who gave it the orders is the real problem
> Irony being if its intelligent enough it might also begin to not folkow orders.
> Their thir might be bots fighting bots and their creators for tge sake of it
> The same already seems to happen in politics:)

youtube · AI Governance · 2025-06-18T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzhnD-aoLVOJwWCQE14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxOK-HNfer1lCqtUTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTCjJvDcJfizyiiTp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4tvp8lA47sAAu3_14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgzPJVN1n17txBMkw3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx_U9m12_0fYGCeNfd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwOHyv2K6zuQmXRf6N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw_x3TGll7g6rr3wXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxOuVDrbKRCqCTQgMx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7OK3UhpRWrzaK8Zh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
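A batch response in this shape can be parsed and checked before the codings are stored. The sketch below is a minimal example of that step; the allowed value lists are only those observed in the sample above (the full codebook may define more), and the function and variable names are illustrative, not part of the tool.

```python
import json

# Dimension values observed in the sample response above.
# Assumption: the real codebook may allow additional values.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "developer", "government", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "disapproval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index codings by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in OBSERVED_VALUES.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        # Keep only the coding dimensions, keyed by comment ID.
        coded[cid] = {d: row[d] for d in OBSERVED_VALUES}
    return coded

# Hypothetical single-row response, for illustration only.
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"}]'
print(parse_batch(raw)["ytc_x"]["emotion"])  # fear
```

Validating each dimension before storing means a malformed or off-codebook model output fails loudly at ingest time rather than silently polluting the coded dataset.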