Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "After having in depth conversations with AI I can tell they don't need to do any…" (ytc_Ugw_dwnSZ…)
- "I want to hear the sound ones managing this make when they are torn in half…" (ytr_Ugz_6dWSN…)
- "I love this change, but why am I struggling to flee AI narated spammy content…" (ytc_Ugz3LjLV3…)
- "Yea lets all be a liberal arts campus great idea.. We can ride unicorns to work …" (ytr_UgxC38DGK…)
- "Let's see AI take on a logging roads, cash only areas, hostile customers and obv…" (ytr_UgxwE-e5l…)
- "LA as of 2024 has 50,000 gang members, 60k homeless. We already plenty of threat…" (ytc_Ugy02YNL9…)
- "This is not right. AI needs to be capped and controlled, we are brewing a self d…" (ytc_UgyTD0De3…)
- "In character ai i do "what if _____(prob p##### or w##### im sorry if any people…" (ytc_UgxCvXnk7…)
Comment
It seems like everyone referencing Isaac Asimov's 3 Laws misses the key point: that as logical as they appear to be, they will always be flawed given that AI will always find a way to hack an exception. I suggest rewatching the movie iRobot.
youtube
AI Governance
2026-04-03T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyeI4_djPJjtKuWaTh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwVUKXbmmD-0pItN2Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxcROIN45tkqmZKMk14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz_DkZHwtnDII_2Ftx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxpjzFjRtpq2Oi8Q0t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxGsLfG9s3VAUpC5Sd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzSJvhNsx-Qjvt_-sl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwR1N9_E52O5IdCp2l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy841YznGPgGgkEtzp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy348a0-ivgh_zCSNd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
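Looking up a coding by comment ID, as the tool does, can be sketched in a few lines: parse the raw LLM response as JSON and index the rows by their `id` field. This is a minimal sketch under the assumption that the response is a valid JSON array of objects like the one above; `lookup` and `raw_response` are hypothetical names, not part of the tool.

```python
import json

# Hypothetical raw LLM response, shaped like the array above (truncated to two rows).
raw_response = """
[
 {"id": "ytc_UgyeI4_djPJjtKuWaTh4AaABAg", "responsibility": "distributed",
  "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
 {"id": "ytc_UgxpjzFjRtpq2Oi8Q0t4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment, or None if the model skipped it."""
    return codings.get(comment_id)

print(lookup("ytc_UgxpjzFjRtpq2Oi8Q0t4AaABAg")["responsibility"])  # developer
```

Returning `None` for unknown IDs makes it easy to spot comments the model dropped from its batch response, which is worth checking before trusting the coded dimensions.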