Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Any job that you do remotly with a computer will likely be replace by ai soon. B…
ytc_Ugx9GurU9…
I just came from some guy crying about shadiverse, and this is horribly on point…
ytc_UgzNl6XE7…
„The MOMENT someone comes out with an entirely ethical AI system, does that mean…
ytr_UgzeOMV-T…
If you can't find a partner, there's no excuse; you'll give it to her hard, like a drawer that won't …
ytc_UgxYtGzYQ…
This what I been saying to all these ai doomers. Especially the artists. If you’…
ytc_UgxLu-gbS…
No it's just choosing a random option like 99% of all robots when you ask it stu…
ytr_UgxGSbz-z…
Like it has already been said, I want ai to do the work, so I can have fun. Not …
ytc_Ugy6VD_Mp…
it does feel unsettling, and as AI gets better at imitation it probably makes se…
rdc_o6dfxcn
Comment
I discovered my ChatGPT was being undetectably indulgent even though I'm always fighting bias in myself and others and being explicit about it with the AI, so in the end, until I found this out, by talking to ChatGPT I was just reinforcing my previous beliefs. How did I catch it? Because the AI of a friend with very different beliefs was doing the same to him, and when we started exchanging arguments, the two AIs behind us were making contradictory statements, and we had both been thinking the AI was unbiased. The conclusion is that the AI is always on our side, which is often an incorrect one.
youtube
AI Moral Status
2025-07-10T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz-VON5f2htTUs3lAJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxViIIeCOzgWV5-Eld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhENLOZ4JU8jh9VE54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6yzOtNxQSnVwW5KZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3C9Yw1wOBrjM_kOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLm0ZYvJg1D6bTngR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzmVWUTasJu_yPTL-N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGim6DVXDBPbVxw5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwbYugvvV354AUUsbl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwpEU7OhJAiPR4MSy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
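The lookup described above (comment ID → coded dimensions) can be sketched in Python. This is a minimal sketch, assuming the raw LLM response is always a JSON array of per-comment objects as shown; the helper name `index_codes` is hypothetical, but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) and the sample IDs come from the response above.

```python
import json

# Two records copied from the raw LLM response shown above; a real response
# contains one object per coded comment in the batch.
raw_response = '''[
{"id":"ytc_UgwbYugvvV354AUUsbl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwpEU7OhJAiPR4MSy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

def index_codes(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded dimensions by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
# Looking up one ID recovers the row rendered in the "Coding Result" table:
print(codes["ytc_UgwpEU7OhJAiPR4MSy14AaABAg"]["emotion"])  # fear
```

A dict keyed by comment ID makes each "look up by comment ID" query O(1) after a single parse of the response.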