Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples listed below; a scripted lookup sketch follows the list.
Random samples (click to inspect):
- "The vast majority of people are terrible at challenging or confronting others . …" (ytr_Ugzo76Kui…)
- "How about this: forget self driving let’s all drive our own cars like ACTUAL HUM…" (ytc_UgwflajOo…)
- "Hes acting like its all AI generated. His mother was involved with epstein point…" (ytc_Ugyt2DEvR…)
- "This will happen to some extent because businesses will demand it. The insatiabl…" (ytc_UgxvatfpC…)
- "Surprised it wasn’t already this way honestly. I guess they’ve regained credit w…" (rdc_ohzafks)
- "Sooo, manual and trade jobs, service jobs, and jobs where you’re interacting wit…" (ytc_UgwvtLA9V…)
- "How many adults are in Wuhan and how many cases did Wuhan had. 4% seems quite re…" (rdc_g9ssyd9)
- "@NAGARJUNA7586 how about you learn to code one yourself ;-) that would be so uch…" (ytr_UgyFgAYAD…)
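The page's lookup is driven by the comment ID, and the same lookup can be scripted against an export of the coded records. The sketch below is a minimal example that assumes the records were written as JSON Lines to a file named raw_llm_responses.jsonl with the same fields as the raw responses shown further down; the file name and the helper function are hypothetical, not part of the tool.

```python
import json

def load_responses(path="raw_llm_responses.jsonl"):
    """Index coded records by comment ID (assumed JSON Lines export, one record per line)."""
    records = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                rec = json.loads(line)
                records[rec["id"]] = rec
    return records

# Look up one of the sampled Reddit comments by its ID.
records = load_responses()
print(records.get("rdc_ohzafks"))
```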
Comment
the oddest thing about these videos are that as long as we follow Asimov's laws of robotics and program robots to follow them then no one should have any problems.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
youtube · AI Moral Status · 2017-08-05T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
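Each coding result reduces to a small fixed record: the four coded dimensions plus the coding timestamp. The dataclass below is a sketch of that record; the class and field names are assumptions chosen to mirror the table, not names taken from the project's code, and the example values are copied from the matching record in the raw response below.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the table above (assumed schema)."""
    comment_id: str
    responsibility: str  # values seen in this batch: developer, company, user, ai_itself, distributed, unclear
    reasoning: str       # values seen in this batch: deontological, consequentialist, virtue, unclear
    policy: str          # values seen in this batch: industry_self, regulate, liability, none, unclear
    emotion: str         # values seen in this batch: approval, outrage, mixed, indifference
    coded_at: datetime

example = CodingResult(
    comment_id="ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="industry_self",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```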
Raw LLM Response
[
{"id":"ytc_Ugzdc5ggMEKwzkcZ07V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1YurAqjrGaCWv-MV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzuhpMbmFcwc1EwNSt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugwo7z4sOrI-LDRBRTN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugxy1uFJLiB-VO4r-Fp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzGQ2081wUKl_7ojaV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzx0n3rUBZhenf_JPh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugja24tjkz6vPHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxkgXw-9xAY0JKPR2x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
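The response is plain JSON, one object per comment in the batch, so a single comment's coding can be recovered with a parse and an ID match. The sketch below is illustrative: the excerpt keeps only the record whose values match the Coding Result table above, and the function name and the "unclear" fallback are assumptions rather than part of the pipeline.

```python
import json

# One-record excerpt of the batch above; in practice the full JSON array is parsed.
raw_response = '''[
  {"id": "ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id, response_text):
    """Parse a batch response and return the four coded dimensions for one comment."""
    for rec in json.loads(response_text):
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return None

print(coding_for("ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg", raw_response))
# {'responsibility': 'developer', 'reasoning': 'deontological',
#  'policy': 'industry_self', 'emotion': 'approval'}
```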