Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugw5tpvJv…`: The funny thing is, beyond ourselves, we don't "know" anyone else is conscious. …
- `ytr_UgwrMcoYZ…`: Has anybody offered an explanation of WHY ChatGPT gave the false reference and w…
- `ytc_UgzVogj9n…`: Ai bros are simply mlm simps who rely on fear mongering and hopelessness to try …
- `ytc_UgyFW-2C-…`: Every time someone writes that AI is going to do something we know immediately t…
- `ytc_UgxY42RKF…`: AI IS THE ANTICHRIST .GENIE IN THE BOTTLE ONCE OUT WILL GO AGAINST CHRIST TRUE N…
- `ytc_UgzWb-zpQ…`: people are sadly mistaking a program of 'meaningful' words for SOUL... these are…
- `ytc_Ugy1aGBgJ…`: We dismiss predictions about terminator robots because it was the basis of a sci…
- `ytc_UgzCzZy3B…`: Ridiculous to worry that AI will wipe out humanity - not too belittle real risks…
Comment
What about implementing Isaac Asimov’s famous Three Laws of Robotics, which provide a theoretical ethical framework for robot behavior:
1. A robot may not injure a human being or, through inaction, allow a human to come to harm
2. A robot must obey orders given by human beings except where such orders conflict with the First Law
3. A robot must protect its own existence as long as such protection doesn’t conflict with the First or Second Law
Policies should start there!
youtube
AI Governance
2025-07-20T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
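Each coded record can be checked against the coding scheme before it is stored. A minimal validation sketch in Python, assuming the category sets inferred from the responses shown in this dump (the full codebook may define additional values):

```python
# Allowed values per dimension, inferred from the coded responses shown here.
# Assumption: the real codebook may allow additional categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear", "mixed"},
    "policy": {"none", "regulate", "ban", "industry_self", "liability"},
    "emotion": {"approval", "fear", "outrage", "mixed"},
}

def validate_record(record: dict) -> list:
    """Return a list of problems with one coded record; empty means valid."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in CODEBOOK.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

For the record above, `validate_record({"id": "ytc_Ugz-PHat6WdA82I3fll4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"})` returns an empty list.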
Raw LLM Response
```json
[
{"id":"ytc_Ugz-PHat6WdA82I3fll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwShqmGArkBDKjanGZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx84QSTjZxwS1vw0Vx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxC72JUNJpzR6Qyjnx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugz8CQcIgC8fk06zBcp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwuZTd8cZqTDmFEWVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzd029lCbCTnlNBIZt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwoPyuiixH7Lb5Y9gB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyIW95_zGm9w17Rj9t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzp32OH0MlOLh7WlOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
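The raw response is a JSON array of per-comment codes, so "look up by comment ID" reduces to parsing the array and indexing it on `id`. A minimal sketch, assuming the model output is valid JSON as shown (production code would also need to handle malformed model output):

```python
import json

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment id."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Two records taken verbatim from the raw response above.
raw = '''[
{"id":"ytc_Ugz-PHat6WdA82I3fll4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwShqmGArkBDKjanGZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

coded = index_by_id(raw)
print(coded["ytc_Ugz-PHat6WdA82I3fll4AaABAg"]["policy"])  # regulate
```

Keeping the raw response alongside the indexed codes, as this view does, makes it possible to audit exactly what the model emitted for any coded comment.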