Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Reads like: "I am too lazy to study and practice painting, so I made myself beli…" (ytc_UgxnaBCMe…)
- "The fact that the company deployed AI with management approval would certainly m…" (ytc_Ugw12jduw…)
- "its funny that Sam Altman does the whole culty lovebombing thing to stroke peopl…" (ytc_UgxCdW88Z…)
- "AI is not so much biological as it is much more crystaline. Maybe MechaHitler is…" (ytc_UgwKyIhf6…)
- "Why does Alex's ChatGPT sound like some trendy pop radio host while she's discus…" (ytc_Ugxfz9VGF…)
- "16:51 I've said this before: AI isn't a tool, it's a service. Otherwise, my loca…" (ytc_Ugwt1tSGx…)
- "@kevincomerford2242 Eh. I find that it paints an overly negative view about AI, …" (ytr_UgymNSnWb…)
- "at first i thought it was funny and ironic seeing people joke about the idea of …" (ytc_UgzhrVPPI…)
Comment

> The only reason to explore making AI capable of suffering or having an identity with moral worth is to thoroughly understand how to avoid it. We’re pretty confident we can build machines that do nearly anything we would want without this. We never need to build pass the butter robots that suffer from the cruel idiocy of their creators.

youtube · AI Moral Status · 2025-04-04T14:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyy1rihbMVh5Gm1lht4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzuKT3-UPboTK0ttEl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3QECLseQAJU31HOF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzNw45KB6TM7ZfAJml4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwhKyNDzzKSWqeO0VB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwGWaaI5CMpBNNRkkF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqtY1nBXu5nSOjwdR4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz6Qj55YajPtfGOE6B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyFFFSKz5cvsVZYpth4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy10WI7WkkILfjEGEp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
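The coding-result table above corresponds to one record in this raw response. A minimal Python sketch (an assumption about how a consumer might handle it, not the tool's actual code) of parsing the JSON array and indexing coded comments by ID, using two records excerpted from the response above:

```python
import json

# Excerpt of a raw LLM response: a JSON array with one object per coded
# comment, carrying the four coded dimensions plus the comment ID.
raw_response = """
[
  {"id": "ytc_Ugy10WI7WkkILfjEGEp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_UgzNw45KB6TM7ZfAJml4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}  # index records by comment ID for lookup

# Look up one coded comment by its full ID and read its dimensions.
coded = by_id["ytc_Ugy10WI7WkkILfjEGEp4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # developer fear
```

Indexing by `id` is what makes the "look up by comment ID" view cheap: one pass over the response, then constant-time lookups per comment.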