Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The YouTuber paralel doesn't hit as much for me because at the end, even if it's…" (ytc_UgyBEigpU…)
- "Oh we just now gunna go through the Ai uprising. Wait until the dimensions of he…" (ytc_UgzlTm0QB…)
- "Been testing Chat and Claude for academic. Very happy to let Chat go over this.…" (rdc_o85281h)
- "We have far to many that don't even know how to be good human beings in this wor…" (ytc_Ugz2NgJAW…)
- "No one values intuition, but the people with intuition will be on top with this …" (ytc_Ugwh0OEz9…)
- "@mfitzgerald130 We can't know what form this technology will take. What if we g…" (ytr_UgxNW9le_…)
- "Calling yourself an "artist" for using A.I. is like having ChatGPT or some other…" (ytc_UgxzZORFr…)
- "hmm this is tough one... I'm artist for like 20yrs started with digi draw and no…" (ytc_UgwTynrPg…)
Comment
Isaac Asimov figured this out decades ago. The Three Laws, presented to be from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:[1]
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Moral Status · 2025-06-06T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxKzLbuH6_oAiou6DJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwczAd0fv3pflJHFjp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwtsp01TS67ik1FqE94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzJmnixUNSIKUgtYtJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxRSAvCNoLcYjDbyed4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwHWcb_yb453gQtr_h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzjmc6XBfpVIhfmCqp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyeCGfosjkJTgonXLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXVWgdQDErzZZXuK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxrBd_RBW09ruPc02l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
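A downstream consumer presumably parses this array into per-comment codings before displaying them. The sketch below is an assumption, not the tool's actual code: the field names come from the response above, and the allowed category sets are inferred only from the values visible on this page (the real coding scheme may include more).

```python
import json

# Allowed values per dimension, inferred from the samples shown on this page.
# This is an assumption; the real coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping rows
    whose values fall outside the allowed sets."""
    out = {}
    for row in json.loads(raw):
        coding = {dim: row.get(dim) for dim in ALLOWED}
        if all(coding[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[row["id"]] = coding
    return out

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
print(parse_codings(raw))
```

Validating against a closed category set is what lets a page like this render a clean dimension/value table even when the model occasionally emits an off-scheme label.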