Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgwRx87zY…` — "All of you modern humans that are greenlighting AI are becoming, or have already…"
- `ytr_UgyECy1Jr…` — "And that proves what? That you're a liar? That LLMS aren't necessarily the best …"
- `ytr_UgzkEIhwE…` — "I feel like boomers ruined the pipeline a loooong time ago. Younger generations …"
- `ytc_UgxBevmRh…` — "5:20 well, apparently there's some lawsuit.or something on (suno?) because they …"
- `ytc_UgxRKIIGT…` — "Apparently there are law firms replacing interns with AI, which is horribly shor…"
- `ytc_UgxbNvS4i…` — "I don't really know how to feel about AI. It's absolutely toxic for us but it mi…"
- `ytc_UgytLg_G6…` — "An interesting thought about the cone problem, is perhaps a pedestrian can hold …"
- `ytc_Ugzk8YKTb…` — "Greetings to my friends 🙋🏻♂️ Be afraid of that day, or should I say better, be …"
Comment
They don't seem to propose a solution to determine the humanity of these machines. I think a law like this would be important, but where's the line? The twitter bots we have today are absolutely the responsibility of the owners, and creators, and the ai of science fiction should be responsible for it's own actions, but there is a muddy middle point there.
Source: reddit
Thread: AI Moral Status
Posted: 1524937050.0 (Unix timestamp)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_dy4e3bg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_dy4ftoz","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"rdc_dy4phxw","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"indifference"},
  {"id":"rdc_dy54eq6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_dy57k0p","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"}
]
```
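The raw response is a JSON array with one object per comment, carrying the four dimensions shown in the Coding Result table. A minimal parsing sketch, assuming this array shape; the fallback of any missing dimension to `"unclear"` is an assumption, not documented pipeline behavior:

```python
import json

# The four coding dimensions shown in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response into {comment_id: codes},
    defaulting any missing dimension to 'unclear'."""
    coded = {}
    for item in json.loads(raw):
        coded[item["id"]] = {d: item.get(d, "unclear") for d in DIMENSIONS}
    return coded

# Example using one record from the response above:
raw = '[{"id":"rdc_dy4ftoz","responsibility":"distributed",' \
      '"reasoning":"deontological","policy":"regulate","emotion":"unclear"}]'
codes = parse_codes(raw)
```

Keying the result by comment ID matches the lookup-by-ID workflow this page supports, so `codes["rdc_dy4ftoz"]` yields exactly the values rendered in the table.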