Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The people that are warning you about artificial intelligence are also the peopl…
ytc_UgyHATRfe…
For two years it has been
Har Har Har AI slop will never be good enough Har Har…
rdc_o5q5mve
Who controls the algorithm? Well, someone told me that it's mostly private compa…
ytc_UgyMT0TUj…
It's not only about art even ai is getting annoying in IT sector as well, chat g…
ytc_UgyqDbS3k…
So what? We re all humans, all insignificant in the cosmos and im supposed to ca…
ytc_UgwOMVqWe…
Ai isn't sentient it can only follow it's programming people don't hate on ai th…
ytr_UgyMAgi7_…
@pratikghosh9252Not really when that's all AI can do, it has no creativity for …
ytr_UgxFb-_OB…
Machines are not conected with the quantum field as humans are, with the unive…
ytc_UgxaC0yr0…
Comment
Isaac Asimov's Three Laws of Robotics are a set of rules designed to govern the behavior of robots in his science fiction stories: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These laws, first introduced in his 1942 short story "Runaround", have become a foundational concept in the field of robotics and science fiction.
Here's a breakdown of each law:
1. First Law:
This is the most important law, prioritizing the safety of humans. A robot must not intentionally harm a human or allow a human to be harmed due to the robot's inaction.
2. Second Law:
This law establishes that robots should obey human commands, with the caveat that these commands must not contradict the First Law.
3. Third Law:
This law dictates that a robot should protect its own existence, but only as long as doing so doesn't violate the First or Second Law. In other words, a robot's self-preservation is secondary to the safety of humans and obeying their commands.
youtube
AI Governance
2025-06-16T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyPaudX0VKOtSBdarN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyLHY0nCGHMlpU3zM54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAru9ozhl7WWrpvrl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz0-ZhZl96LGV-GqFV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyNNAFsz0saNWt9u-R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxvgyDGo5HUMS9eP0V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_FfVzpnmcm5oXkOZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwyshk3KrBZkbfNETF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-YuRpi8hxCUs5dKB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzanz6SU_KEfB52SC14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
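A response like the one above can be parsed and sanity-checked before the codings are stored. The following is a minimal sketch, not the tool's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown here, but the allowed value sets are inferred from this one sample and are assumptions, not an exhaustive codebook.

```python
import json

# Allowed values per dimension. These sets are inferred from the sample
# response above and are assumptions, not the full coding scheme.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "indifference", "resignation", "approval", "fear", "mixed"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID.

    Entries with a missing ID, missing fields, or out-of-vocabulary
    values are skipped rather than stored.
    """
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# Hypothetical comment ID for illustration only.
raw = '[{"id":"ytc_abc","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
print(validate_codings(raw)["ytc_abc"]["policy"])  # regulate
```

Indexing by comment ID mirrors the "Look up by comment ID" view above: once validated, each coded record can be rendered directly into a per-comment result table.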