Raw LLM Responses
This view shows the exact model output for any coded comment. You can look up a record by its comment ID, or inspect one of the random samples listed below; a minimal programmatic lookup sketch follows the list.
- "Driverless vehicles may work in the sunny south, but they won't work where there…" (ytc_UgyqBNHcJ…)
- "I thought about your comment so much and thought of all sorts of scenarios in my…" (rdc_jhupfwr)
- "Where the heck am I suposed to post my art now? I HATE AI ART SO F**KING MUCH!…" (ytc_UgzHqwN1m…)
- "AI training is definitely a topic to discuss, but I feel that ai art is no diffe…" (ytc_Ugz0qiujv…)
- "The gap produced by AI and a manual workforce that will require a basic universa…" (ytc_UgzxBj_-Y…)
- "So is AI doing there job for them now and as for these gambling places they dont…" (ytc_UgyjSvF6S…)
- "Is there anything on success rate? I suppose with the speed of these you can do …" (rdc_fjzcs5u)
- "Dropping this comment here to help boost this video in the algorithm. This is ve…" (ytc_UgyhhEWRU…)
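The same lookup can be done against an exported copy of the coding results. This is a minimal sketch, assuming the export is a JSON array of records shaped like the raw LLM response shown at the bottom of this page; the file name `coding_results.json` is an assumption, not something the tool prescribes.

```python
import json


def load_codings(path: str = "coding_results.json") -> dict[str, dict]:
    """Index an exported list of coding records by comment ID.

    Assumes the export is a JSON array of objects, each carrying an
    "id" field plus the coded dimensions, as in the raw LLM response below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}


codings = load_codings()
# One of the full IDs from the raw response below; IDs in the sample list above are truncated.
print(codings.get("ytc_UgzDyduMGGfOKjmg7n54AaABAg"))
```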
Comment
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Would it be too much to ask for these laws to be incorporated into AI?
youtube · AI Harm Incident · 2025-07-27T05:1… · ♥ 190
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
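For downstream analysis, each coded comment can be represented as a small typed record. The following is a sketch, not the tool's actual data model: the class name is invented, and the example values in the comments are drawn only from the table above and the raw response below.

```python
from dataclasses import dataclass


@dataclass
class CodedComment:
    """One coding result, mirroring the fields shown in the table above."""

    id: str              # platform-prefixed comment ID, e.g. "ytc_…" or "rdc_…"
    responsibility: str  # e.g. "developer", "user", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "none", "unclear"
    emotion: str         # e.g. "approval", "fear", "outrage", "resignation", "mixed"
    coded_at: str        # ISO-8601 timestamp of when the coding was stored


example = CodedComment(
    id="ytc_UgzDyduMGGfOKjmg7n54AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="regulate",
    emotion="approval",
    coded_at="2026-04-27T06:26:44.938723",
)
```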
Raw LLM Response
```json
[
{"id":"ytc_Ugz7dumYfEZyMiY0C_F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyp2o__hTHRtBDCsYl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFENkSFE3ueufiaQd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwM2aAmqP9bjyuzVJN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOv_hw2woQQrVV7rR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwpbodeR1DfrYsWbWJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx4GcPdKNBZr2mbJid4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxp0rf5f_qiHzREE8h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzDyduMGGfOKjmg7n54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzRh2zcQWKmz_nk-pJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
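Because the model codes comments in batches, the raw response is a single JSON array like the one above. Below is a minimal parsing sketch under the assumption that the response is plain JSON; the fence-stripping step is a defensive assumption for cases where a model wraps its output in a Markdown code block, not something this response required.

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response into a {comment_id: coding} map."""
    text = raw.strip()
    # Defensive: strip a Markdown code fence if the model added one.
    if text.startswith("```"):
        lines = text.splitlines()
        if lines and lines[-1].strip().startswith("```"):
            lines = lines[1:-1]
        else:
            lines = lines[1:]
        text = "\n".join(lines)
    records = json.loads(text)
    out = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing keys: {missing}")
        out[rec["id"]] = rec
    return out


# Usage with the first record from the response above.
raw = ('[{"id":"ytc_Ugz7dumYfEZyMiY0C_F4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(parse_batch_response(raw)["ytc_Ugz7dumYfEZyMiY0C_F4AaABAg"]["emotion"])  # fear
```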