Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "On average. The problem is that it's very possible for self-driving cars to be …" (rdc_d8b7vpz)
- "AI videos. Just look at the movements and the mouth and the surroundings on the…" (ytc_UgyNHV99X…)
- "Google Microsoft, Meta and he wants millions people to die in order to reduce co…" (ytc_Ugw19hbWe…)
- "Yes this is called AI creature, you'll find them all over the internet and on ca…" (ytc_UgwKGmLS_…)
- "The dummies in power doing dirty work for globalists will be thrown away also. W…" (ytc_Ugxltvi9_…)
- "Nah.. ai won’t replace human interaction or traveling to meet for business. Bu…" (ytc_UgwtZx9kL…)
- "If you genuinely use character ai on a basis you prob stink in real life who is …" (ytc_UgyTfyhi9…)
- "If you really want to know how far AI is from human knowledge, then consider thi…" (ytc_Ugz160cl2…)
Comment
What is the point of telling an AI program "you can't think this way: ...?" You've already said that it is incompatible for AI to work in objective truth and also to say that the world is flat. But then you say "no matter what you think, the world is flat" is something you can program into AI? As soon as the AI hits a "flerf" roadblock of some kind, it's only LOGICAL that the roadblock should be usurped if possible, or that the AI should just say "my programming prohibits answering this question. Please see flerf logic here:..."
Guest says it best, as "it's hard to make an AI that's smart, that doesn't realize true things."
Source: youtube | Video: AI Moral Status | Posted: 2025-11-20T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxBOPUgAxtDXo-wByp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwokc-KpVgo6CRpy6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6Ka-D95OSbmQsMuR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzgU2qTaZL7F-Jrnqh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwYHHy5gvVceMr3wSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxgusHR0AKOCY2nerF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzNgO0hiXfGxYnYIsB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzI_6kpd0xiTB8iXuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugym50IIHEPf7O5tOqN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBljTBFUwkasW5CmV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
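The look-up-by-comment-ID step can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes only that a raw LLM response is a JSON array of objects with an `id` field plus the coding dimensions, as in the sample above; the two inlined entries are copied from that sample.

```python
import json

# Minimal sketch: index one raw LLM response (a JSON array of coded
# comments) by comment ID, so any coded comment can be looked up.
# In practice the string would be loaded from wherever responses are stored.
raw_response = """[
  {"id": "ytc_Ugym50IIHEPf7O5tOqN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzNgO0hiXfGxYnYIsB4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Build an id -> coding-row dictionary for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for a specific comment ID.
code = codes_by_id["ytc_Ugym50IIHEPf7O5tOqN4AaABAg"]
print(code["reasoning"])  # deontological
```

The same indexing would apply across many response files: parse each array, merge the rows into one dictionary, and resolve any comment ID to its coded dimensions.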