Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "i have the steps which allows a.i. to be safe already in motion. stay up...…" (`ytc_Ugx1NQshl…`)
- "How about a standard filter for filtering certain points found within ai dataset…" (`ytc_Ugzp4FbRh…`)
- "@Mantaforce2I think the difference lies in art being in a muddy place between pe…" (`ytr_UgzelX1Is…`)
- "Helpful lies is more accurate i think. AI, as it exists now, tries so hard to h…" (`rdc_n5gna81`)
- "AI will never ever be conscious, because consciousness is the ontological premis…" (`ytc_UgwkIa14b…`)
- "This Chinese school is PATHETIC . Let children be children . They will break if …" (`ytc_UgyDq4zca…`)
- "Tesla vehicles are not fully self-driving. Despite the marketing for "Autopilot"…" (`ytc_UgwT8E8Me…`)
- "This has to be a bait post. Of course a simple processing AI isn't understanding…" (`rdc_n0p6us0`)
Comment
I wonder what the laws would be like for this self driving car. Could you for instance, consume alcohol in a such a vehicle while it was operating?
Asking for a friend.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Harm Incident |
| Posted | 2016-02-12 (Unix timestamp 1455299230) |
| Likes | 3 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_czxiye3","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_czxjvmo","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"rdc_czxnkv9","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"rdc_czxujl5","responsibility":"company","reasoning":"unclear","policy":"regulate","emotion":"fear"},
  {"id":"rdc_czy384u","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
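The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated before the codes are stored — note that the allowed-value sets below are inferred only from the values visible on this page, not from the actual codebook:

```python
import json

# Allowed values per dimension, inferred from the samples shown above;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"indifference", "resignation", "mixed", "fear", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping any record with an unknown dimension value."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[rec["id"]] = codes
    return coded

# Example with two records copied from the response above.
raw = '''[
  {"id":"rdc_czxiye3","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_czy384u","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''
result = parse_coding_response(raw)
# result["rdc_czxiye3"]["policy"] -> "regulate"
```

Validating against a closed vocabulary like this catches the common failure mode of the model inventing a category label that the downstream tally would otherwise silently count as its own bucket.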