Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect
- "So for me… The insane obvious is, did nobody watch terminator! Your developing s…" (`ytc_Ugxi_Ab3R…`)
- "@dreyga2 No, safety from something that could soon have super-human intelligence…" (`ytr_Ugw1NPXw4…`)
- "Of course there will be people who say well AI is not replacing my job (plumbers…" (`ytc_UgwY4cNwi…`)
- "And this is one of the many reasons why I fucking hate AI and wish people would …" (`ytc_UgxuoXIt2…`)
- "Their argument doesn't fully grapple with the use-case where the alternative to …" (`ytc_Ugw5O_Qct…`)
- "So there is a chance i might experience a apocalypse in the future ai domination…" (`ytc_Ugxsq1OL3…`)
- "No matter what robots can never over shine human. Why because they use motor wit…" (`ytc_UgzKfFvUA…`)
- "Please generate a small comedy sketch video for youtube satirising those that ma…" (`ytc_Ugy__taae…`)
Comment
> Robots are used now in operational procedures, we all know that. But what we seem to be purposely ignored is that, given AI can or WILL move into the realm of behavioral decision making, robots will eventually be able to decide whether to save or harm further the patient. Did you know robots have already demonstrated elements of racial profiling? A form of simple classification no doubt, but with the inclusion of social human behavior can eventually morph into a thoughts & reactions driven by belief. These engineers or scientists must admit they're walking a very fine line with AI and should realize they are going to be held fully accountable for any wrongdoings or abuses of this technology.
youtube · AI Moral Status · 2022-02-15T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyZj6Uqmxw3CA6qgep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx0lxmhfy2-rvdwFPR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJdQ4HoukPdS7QpCF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwAByz-4bKJ1Ra9pkt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxAIK_gIe8dAtjM4AJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJjpOOFSkxGUJDUHR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMYQX1T9UHHURNkOV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzh_AV14fkvqqNPHkJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwCanf2rgx0TYMtZ2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyc6LAuG16QKVieiSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
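The raw response is a JSON array with one record per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID; the variable and helper names here are illustrative, not part of any tool shown on this page:

```python
import json

# A raw LLM coding response: a JSON array of per-comment records.
# This sample record is taken verbatim from the response above.
raw_response = """
[
  {"id": "ytc_UgwCanf2rgx0TYMtZ2d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Parse the array, then build a dict keyed on comment ID so that any
# coded comment can be looked up in O(1).
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgwCanf2rgx0TYMtZ2d4AaABAg"]
print(rec["responsibility"], rec["policy"])  # developer regulate
```

If the model's output is wrapped in extra text or code fences, the JSON array would first need to be extracted (e.g. by slicing from the first `[` to the last `]`) before `json.loads` will accept it.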