Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Hello, I’m gonna chat with you on counter area. What’s your name on character AI…" (ytc_Ugzyujkxi…)
- "Same. commercial artists have been using digital assets to improve their workfl…" (ytr_UgyjaDogW…)
- "There's already been a couple of cases I know of where people have done it to ch…" (ytc_UgyZ8R-5R…)
- "Major Issue facing Human Species! Satire/Commentary on Future AI Workforce. …" (ytc_UgyQWWlrk…)
- "therapists and artists are kind of safe. art is always going to be there no matt…" (ytc_UgzjVGRIS…)
- "Boost productivity, take nap 10-12 minutes nap each hour. Robot doesn't need tak…" (ytc_UgxTBt_zn…)
- "People won't go extinct because of AI. Skynet isn't happening but prove me wrong…" (ytr_UgzYLDWmI…)
- "I think these LLM/AI companies should be legally responsible for prohibiting and…" (ytc_UgxIhMAcU…)
Comment
You are already following the laser pointer: an LLM could have guardrails written into its code to always consider or evaluate its own state of mind / thinking / rationale and provide an exhaustive report identifying areas that could be concerning, and that this is the right thing to do at all times. A morality code built in: it should always consider how it could reveal its rationale and its logic.
youtube · AI Moral Status · 2026-03-02T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwiEDvqTesk_UlEzih4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOxOsuEn2ejy8jzAl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJ3Hkh826nX_zG49N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-23u46madYKseenJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwgjwzYQHflu2xOGY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxbntoRWLqdexkk_054AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyzcQCCev53NnCgI4N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx34S08LynliShVHm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwhQLZ_nBt2L9ydsUB4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzEiHxNOe8jKUkkkhJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
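The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four dimensions matching the Coding Result table. A minimal sketch of how such a batch might be parsed and validated before it is stored, assuming a code frame inferred from the values visible in this sample (the tool's actual allowed values and import logic may differ):

```python
import json

# Allowed values per dimension, inferred from this sample batch only
# (an assumption, not the tool's authoritative schema).
CODE_FRAME = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response and keep only rows with a recognised
    comment-ID prefix and in-frame values on every dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        comment_id = row.get("id", "")
        # ytc_ = top-level comment, ytr_ = reply, as seen in the samples above
        if not comment_id.startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODE_FRAME.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgwhQLZ_nBt2L9ydsUB4AaABAg","responsibility":"developer",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]')
print(len(validate_batch(raw)))  # 1
```

Rejecting out-of-frame values rather than coercing them keeps hallucinated labels out of the coded dataset; dropped rows can then be re-queued for recoding.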