Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- ytc_UgxfkOn5G… — "Bro elon has been using a.i. for a while now . He just doesn't want to lose …"
- ytc_UgwZk3ubE… — "I spend part of the year in Hawaii and part on the mainland. I fly often. I was …"
- ytc_UgyZoCNu7… — "The world needs a moratorium on any and all AI . We simply cannot afford the ris…"
- ytc_UgwzIN2l0… — "I've been trying to gain some traction with my traditional art without success w…"
- ytc_UgymlAfuO… — "Evacuating some 30,000 people and pets from Louisville and Superior during catas…"
- ytc_Ugx5lo5Ii… — "Interestingly enough, AI with neither succeed nor fail. Some AI (not LLMs) are a…"
- rdc_k34071h — ">**angry_orange_trump** >So we're conveniently ignoring cheap Nigerian da…"
- rdc_n3kpzxx — "It was noticed early on that context size increasing didn't necessarily result i…"
Comment

> There's an argument that the prospect of collateral damage has also prevented more trigger happy solutions.
>
> A drone has no consciousness, no moral compass, no accountability. You can basically now order murder *a la carte*. With reduced repercussions.

| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Moral Status |
| Posted | 2021-03-25 (Unix 1616690151) |
| Score | ♥ 75 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_gs6njun","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_gs6siaz","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_gs76fdm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_gs61v88","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_gs63shw","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
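The lookup-by-ID view above implies parsing the raw model output into per-comment records. A minimal sketch of that step, assuming the response is a JSON array with the four dimensions shown; the field names and allowed values below are inferred from the visible samples, and the real codebook may differ:

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# These are assumptions, not the project's actual codebook.
DIMENSIONS = {
    "responsibility": {"none", "user", "developer"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of per-comment codes)
    into a dict keyed by comment ID, validating each dimension."""
    codes = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        codes[cid] = {dim: row[dim] for dim in DIMENSIONS}
    return codes

# Usage with one row from the response shown above:
raw = '[{"id":"rdc_gs6siaz","responsibility":"user",' \
      '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
codes = parse_raw_response(raw)
print(codes["rdc_gs6siaz"]["policy"])  # regulate
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents a label outside the codebook, rather than letting it silently enter the coded table.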