Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
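As a rough illustration of how a by-ID lookup could work, here is a minimal Python sketch. It assumes raw responses are stored as a JSONL log where each record carries the batch's comment IDs and the verbatim model output; the file name and the `comment_ids` / `raw_response` fields are hypothetical, not the tool's actual storage format.

```python
import json

def find_raw_response(comment_id: str, path: str = "raw_responses.jsonl"):
    """Return the stored raw model output for the batch containing comment_id.

    Assumes one JSON record per line with hypothetical "comment_ids" and
    "raw_response" fields; the real store may be organized differently.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if comment_id in record.get("comment_ids", []):
                return record["raw_response"]
    return None  # ID was never coded, or the log is incomplete
```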
Random samples
- Sad but true we don't want an AI , eventually we want a replacement of a human b… (ytc_Ugz9jtSLc…)
- How about having a kid version (under 18 yrs old) of say ChatGPT that is much sa… (ytc_Ugy_RJ4UU…)
- I’ve been trying to draw my whole life, and the things I draw aren’t even that g… (ytc_UgwB24dbN…)
- Only thing I can suggest is that you could get a few grow lamps and do herbs? Fl… (rdc_eh57au1)
- Yeah, AI gets faster as we get better, that’s the loop. But humans still set dir… (ytr_UgwIcP2SC…)
- @arool4017no I don't think giving a factory assembly robot arm or a clunky robo… (ytr_UgwwhcMj4…)
- "AI makes art accessible for people who cant afford supplies" any AI model subsc… (ytc_UgzxlajRJ…)
- Blah blah / Buy a subscription! / I am training an AI with pirated copies your cours… (ytc_UgwEh3fy4…)
Comment
The Three Laws of Robotics, first introduced by Isaac Asimov in his collection I, Robot (and later refined throughout his robot stories), are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   This is the highest‑priority rule. It obliges a robot to protect humans from injury, even if doing so conflicts with its other directives.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
   Robots are designed to follow commands, but they must refuse any instruction that would cause them to harm a person or permit harm through inaction.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
   Self‑preservation is allowed, but only insofar as it does not interfere with protecting humans or obeying lawful orders.
These laws are intended to create a hierarchy of ethical behavior for autonomous machines, ensuring that human safety and authority always take precedence over a robot’s own interests. In Asimov’s fiction, many of the most interesting plot twists arise from the subtle ways these rules can interact or be interpreted under complex circumstances.
youtube · AI Governance · 2025-12-04T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
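One plausible way a row like the table above could be produced from the raw batch response shown below: parse the JSON, select the entry whose `id` matches this comment, and fall back to "unclear" on every dimension when parsing fails or the ID is absent from the batch. The function below is a sketch under those assumptions, not the tool's actual code.

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_result(raw_response: str, comment_id: str) -> dict:
    """Extract one comment's codes from a raw batch response.

    The "unclear" fallback on parse failure or a missing ID is an
    assumption about the tool's behavior, not documented fact.
    """
    try:
        batch = json.loads(raw_response)
    except json.JSONDecodeError:
        batch = []
    entry = next(
        (e for e in batch if isinstance(e, dict) and e.get("id") == comment_id),
        {},
    )
    return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
```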
Raw LLM Response
[{"id":"ytc_UgwkSrhvDteXfkKzLcF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugx8iyxvz2uYqlSFHY94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgylKRXOeyiNMYDq8Xd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgyXuLjqnYFgNjR0qP14AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugwiwfo864VRJzW8gbJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyyvN5QAC-5d65cIdF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},{"id":"ytc_Ugxq0wUXaIKVSCXlns14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},{"id":"ytc_Ugz9y6SSAIEwC1hAbop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"ytc_Ugz8BU1zi6vvCR_ilFl4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugy7rytCH-AQuMD8lYV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"})