Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- someone put it pretty aptly- essentially it's, like, how ppl who use ai wanna "o… — `ytc_Ugz8eKCVx…`
- What is the point of telling an AI program "you can't think this way: ...?" You… — `ytc_Ugym50IIH…`
- Could AI art even be viable in a business setting? Since it mashed actual people… — `ytc_UgwQNeL7-…`
- Fear mongering. AI or driverless LTL or full truck load trucks barreling down t… — `ytc_UgzGwb9yr…`
- AI is NOT the problem. Watch Damon Cassidy‘s video on the matter if you want an … — `ytc_UgyocpsVE…`
- One of the reasons I'm glad Andrew Yang got his message across. America is not r… — `ytc_Ugzaadze2…`
- Training robots to have feelings and to know how to read emotions of other peopl… — `ytc_UgyC6w0eg…`
- So ok dont mind the grammer so a.i. has already won because just as the guy said… — `ytc_UgxUvujdD…`
Comment

> Hell, we are already seeing people being in romantic “relationships” with these A.I. bots. It’s a huge problem

reddit · AI Governance · 2025-11-07 (Unix 1762519036) · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nnlpkp6","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_nnl46c2","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_nnl0g5i","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_nnpmqk2","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"rdc_nnl5u2t","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
```
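A raw response like the one above is a JSON array of per-comment codes, keyed by comment ID, with one value per coding dimension. As a minimal sketch of how such a response could be parsed and sanity-checked before the codes are stored: the dimension names come from the coding-result table above, but the allowed value sets below are assumptions inferred from the values observed in this sample, not an authoritative codebook.

```python
import json

# Dimension names match the coding-result table; the allowed value
# sets are ASSUMPTIONS extrapolated from this sample's values.
DIMENSIONS = {
    "responsibility": {"company", "government", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index codes by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a
    value outside the (assumed) allowed set.
    """
    coded = {}
    for entry in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{entry.get('id')}: bad {dim}={value!r}")
        # Keep only the known dimensions, dropping any extra fields.
        coded[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return coded
```

With the response above, `parse_coding_response(raw)["rdc_nnl46c2"]` would yield the same values shown in the coding-result table (`reasoning: consequentialist`, `emotion: fear`).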