Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Comment
Chat keep in mind this chatbot isn't generating the answers to life, and you're catching bits and pieces from more fleshed out philosophical theories. From here you could ask it to suggest further reading on some of the positions mentioned, if they belong to a specific philosophical school of thought, etc
Source: reddit · Topic: AI Moral Status · Posted: 2024-12-02 09:15 UTC (Unix timestamp 1733130915.0) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_m00kwvx", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m00h7cu", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_m00fduw", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m00xg02", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_m00vnem", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```