Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "What a ridiculous statement by the bbc. Yes humanity is in trouble but humanity …" (ytc_UgxItRoJv…)
- "We just co-exist and things will change but maybe we can focus more on our plane…" (ytc_UgyZUIXw_…)
- "Me looking for 100th minecraft scripted smp: ye… I think AI is fine cos there is…" (ytc_UgxPw2dx_…)
- "Absolutely. Unfortunately he's not selling pretty vacuum cleaners, he's selling…" (ytr_UgyNUsmnQ…)
- "Make o3 and opus , rogue ai learn to be humane, by learning human values....emot…" (ytc_UgxwkSi4q…)
- "Correct me if I'm wrong. All the jobs being replaced by AI are the jobs women ca…" (ytc_UgxolhOdY…)
- "Why the double standard? Because humanity is being guided toward a new digital e…" (ytc_UgzoGIumk…)
- "There might be a nice period where we don't have to work and can just bike ride,…" (ytc_UgyuDgadI…)
Comment

> I fucking love the prospect of having AI robots that we can converse with, spitball ideas off of, and debate the question of sentience. It's just so fascinatingly exciting to me for some reason. I'm of the position that if something has the convincing appearance of sentience, then we should treat it as such. I would rather be mistaken and treating something nonsentient with respect than to be mistaken and abusing something WITH sentience.

youtube · AI Moral Status · 2023-01-05T03:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
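The four coded dimensions above can be sanity-checked against the value sets that appear in the raw response below. A minimal sketch, assuming the allowed sets are exactly the values visible on this page (they are inferred, not the pipeline's authoritative codebook):

```python
# Allowed values per dimension, inferred from the raw LLM response shown on
# this page. These sets are an assumption for illustration, not the official
# codebook used by the pipeline.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological",
                  "virtue"},
    "policy": {"none", "unclear", "liability", "ban", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def validate(record: dict) -> list:
    """Return a list of problems with one coded record; empty means it passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append("missing dimension: " + dim)
        elif value not in allowed:
            problems.append("unexpected %s value: %r" % (dim, value))
    return problems

# The coding result shown in the table above passes:
print(validate({"responsibility": "none", "reasoning": "virtue",
                "policy": "none", "emotion": "approval"}))  # []
```

A record with a value outside these sets (or a missing dimension) comes back with one problem string per issue, which makes batch QA over a full response straightforward.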
Raw LLM Response
```json
[
  {"id":"ytc_UgysjmoirOufH9Vdb-Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwVmSTNzNBjJDHWA1N4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz4AsJxpCfk99xTFEp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyDZuxURP_9llc-4Sd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwhLcgz-xqgvxBdold4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwvwQ5mJ7vTAAACAfR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw0ubIxZ3WVvgDKjjt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxyAz1xo7-q2e7kSI94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz_2LktjrifiOXc2sB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw4aw0LKpiBa8DAl9R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
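The lookup-by-comment-ID view at the top of this page amounts to parsing the batch JSON array and selecting one record. A minimal sketch, assuming the raw response is a JSON array like the one above (`code_for` is an illustrative helper, not a pipeline function; the single-record `RAW_RESPONSE` here is trimmed from the batch for brevity):

```python
import json

# One record from the raw response above, standing in for a full batch.
RAW_RESPONSE = """[
  {"id":"ytc_Ugz_2LktjrifiOXc2sB4AaABAg","responsibility":"none",
   "reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

def code_for(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding for one comment ID,
    or None if the batch does not contain that ID."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = code_for(RAW_RESPONSE, "ytc_Ugz_2LktjrifiOXc2sB4AaABAg")
print(coding["emotion"])  # approval
```

That record is the same one rendered in the Coding Result table above, which is how the raw-response view and the per-comment view stay consistent.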