# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples:

- `ytc_UgxLGD_Vb…`: I’m not certain about this, but do you guys really think AI can know what humans…
- `ytr_UgwLgFGOE…`: even before then though, it would say "it sounds like you're going through a lot…
- `ytc_UgxBS_67V…`: Google maps directions will tell me I should be in the middle lane, but tesla ca…
- `ytc_UgwP0bTLk…`: Last thing they need to add to perfect this is obvious growing frustration in th…
- `ytr_Ugyab33lH…`: @jensenraylight8011 Youre arguing that human art had "qualia" but i dont think t…
- `ytc_UgxIjyacu…`: News anchor are upset because they are getting replaced by AI and ironically tel…
- `ytr_UgwpZZ-_v…`: @cougar2013 Thing is, these learning algorithms are completely self evolving, th…
- `rdc_nnkxqku`: As someone with two pet cats whom I love very, very much, this is one of the mos…
## Comment

> Hardcoding a refusal string below a similarity threshold is the only way to achieve true reliability in high-stakes compliance.
> LLMs simply can't be trusted to self-regulate their own uncertainty.

- Source: reddit
- Thread: "Viral AI Reaction"
- Posted: 1777027088.0 (Unix epoch seconds)
- ♥ 1
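The stored timestamp is a raw Unix epoch float. A minimal sketch of converting it to a readable UTC time, assuming it is seconds since the epoch:

```python
from datetime import datetime, timezone

# The comment's stored timestamp, as shown in the metadata above.
posted = datetime.fromtimestamp(1777027088.0, tz=timezone.utc)
print(posted.isoformat())  # → 2026-04-24T10:38:08+00:00
```

This is consistent with the "Coded at" value in the coding result, which falls the following day.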
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
## Raw LLM Response

```json
[
  {"id":"rdc_o3ha26h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"},
  {"id":"rdc_o3hizzk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_oi00vrh","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"unclear"},
  {"id":"rdc_ohzqfxn","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_ohzl9d2","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
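A raw response like the one above can be parsed and sanity-checked before its records are written back to the database. The sketch below indexes records by comment ID and validates each dimension against the label values observed in this document; the actual codebook is not shown here, so `ALLOWED` is an assumption built only from those observed values.

```python
import json

# Label sets observed in the coding results above. The real codebook
# (not shown in this document) may permit additional labels.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"amusement", "indifference", "unclear", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index its records by comment id.

    Raises ValueError if a record is missing a dimension or uses a label
    outside the observed sets.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: the last record from the response above.
raw = '''[
  {"id":"rdc_ohzl9d2","responsibility":"developer",
   "reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]'''
coded = parse_coding_response(raw)
print(coded["rdc_ohzl9d2"]["policy"])  # → regulate
```

Failing loudly on an unknown label is the useful part: a model that drifts off-schema surfaces as a `ValueError` at ingest time rather than as a silent miscount later.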