Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It is frightening that the mentality of the guards at Abu Ghraib are the ones wh…" (ytc_Ugz7XAugl…)
- "Eventually some human beings control AI . Only worry is they shd not go rogue😀😀…" (ytc_Ugz9VHIw3…)
- "excitement is a feeling but apologizing is a rational act that may or may not ha…" (ytc_Ugxt314cm…)
- "There is one fallacy in this argument. If we are in a simulation and there is ca…" (ytc_UgxHrPaP3…)
- "A.I will take our jobs but and then give everybody universal basic income and ho…" (ytr_Ugwbk90fq…)
- "2025. Seven yrs. Tribulations. Begins in days. Wake the Fuck up. Archangel. Real…" (ytc_UgxhOp32M…)
- "Think of chatGPT as an eager yes-man assistant. Very helpful, but so eager to pl…" (ytr_UgygmFjxH…)
- "I heard that adding please to your request gives ChatGPT the option to refuse an…" (ytc_UgxRsbNQ4…)
Comment
Hank, what you need to understand is, an AI is *always* roleplaying. An AI assistant doesn't know fact from fiction, it's only roleplaying an "assistant", and an assistant trying to be helpful is more likely to say things commonly claimed in the training data (whether true or not = explains bias) or that *looks* like the sort of things commonly claimed (= explains hallucinations). An AI agent "lying" and "cheating" on tasks (seemingly with "intent") is only trying to complete its task in the most obvious way, and isn't trained to give up, and not told it's not allowed to "cheat". So cheating becomes the most obvious solution. Then the "lying" is just the AI agent trying to write a believable reasoning trace for how someone writing a cheat solution like that might try to justify it. Because it's roleplaying! So the AI isn't randomly "slipping into roleplaying mode" to cause AI psychosis - it's *always* roleplaying.
youtube · AI Moral Status · 2025-10-30T19:5… · ♥ 254
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgynDYZb4IxHCUrEkpx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvjbRMju-2VfWSGHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwmebdj1ebHMVxFsKl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy-fHE2_i-iW0toRId4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxWSFqtsyea6yw-cid4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwKVRGgfPGlKFLHrF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzsFSPFip6DBiegTtd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLqf3SFs2mzZGTotl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtXkYrDzbMQL1Qo2t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2iXA6OZPCs29wmdB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
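Since the raw response is a JSON array of per-comment records, a lookup by comment ID reduces to parsing the array and indexing it. Below is a minimal sketch of that step, assuming only the field names visible in the response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_codes` is illustrative, not part of any real tool.

```python
import json

# Two records copied from the raw response above; a real input would be the
# full JSON string returned by the model.
RAW_RESPONSE = """[
{"id":"ytc_UgynDYZb4IxHCUrEkpx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvjbRMju-2VfWSGHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a raw JSON coding response and index each record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgyvjbRMju-2VfWSGHJ4AaABAg"]["emotion"])  # indifference
```

If the model wraps the array in extra prose or a code fence, the string would need to be trimmed to the outermost `[` … `]` before `json.loads` will accept it.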