Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Yep ai is a neat tool that can be used to help quite a few jobs but not replace …" (rdc_nc47u5f)
- "Was a little disappointed to see that myself lol I wanted to see people's though…" (ytr_UgyP-hggf…)
- "@rip_luffy8473 Chat gpt can never tap in something truly original. Its like a rob…" (ytr_UgxdjvGtf…)
- "bullying people open about using ai, is why i fucking hate so many socalled arti…" (ytc_UgxsNKOCU…)
- "All it's doing is generating what a sentient AI might say as per the prompt - it…" (rdc_jcl6l1k)
- "@cnlicnli I live in Nuremberg Germany they don't expect any kind of A.I genera…" (ytr_UgwvI4rj_…)
- "There are two possible scenarios for a solution: 1) In a self-driving car city…" (ytc_UghSVao5v…)
- "This is where we need reform! Predictive policing should be illegal on a federal…" (ytc_UgxoqE7Js…)
Comment
> There are rules of military engagement that protect civilians and criminalize the misuse of violence in the event of war. If used by armed forces (this is already happening) to create autonomous weapons capable of choosing targets, AIs could be programmed to disregard the rules of military engagement or, worse, discover for themselves that these rules should be disregarded because they make success of its mission difficult. Statesmen, military ministers, battlefield commanders and soldiers are or can be held responsible for the war crimes they voluntarily choose to commit or fail to prevent when they can. But who will be responsible if an autonomous weapon chooses to commit a crime? The creator of the AI, the weapons manufacturer or the military that decided to employ it, completely losing the ability to make choices in the battlefield? That's a problem worthy of attention I suppose.
Platform: youtube · Topic: AI Responsibility · Posted: 2024-06-16T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
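
Each coding result is a flat record with one value per dimension. As a rough sketch, such a record could be modeled in Python as below; the field names mirror the table above, and the vocabularies in the comments are only the values visible on this page, not an exhaustive codebook:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment, matching one entry in the raw batch response."""
    id: str               # comment ID, e.g. "rdc_nc47u5f" or a "ytc_…" ID
    responsibility: str   # observed: "developer", "company", "distributed", "unclear"
    reasoning: str        # observed: "deontological", "consequentialist", "virtue", "mixed", "unclear"
    policy: str           # observed: "regulate", "ban", "none", "unclear"
    emotion: str          # observed: "fear", "outrage", "approval", "resignation", "indifference", "mixed"
```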
Raw LLM Response
[
{"id":"ytc_UgzfL97b_1nemN0Clbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwoxQvIcjRpJ7cvf054AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFigHDsyAo2Gl3wS54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzbDP_mJjJzyO2_aCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgykhEQFBsGUSzpCTXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzJloDBV9uc9XXEA9N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxqmu0us6WbTQINMfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxq9UzNz-eLtvKgjFZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw-Hwk_MDDigQjwavh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAhUF5xiMqSGN0nJt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
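
Since the raw response is a JSON array of records keyed by comment ID, inspecting one comment's coding reduces to parsing the array and matching on `id`. A minimal sketch (the function name is illustrative, not part of the tool):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response (a JSON array of coded records)
    and return the record whose "id" matches, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)
```

Against the batch above, `lookup_coding(raw, "ytc_Ugxq9UzNz-eLtvKgjFZ4AaABAg")` would return the record coded developer / deontological / regulate / fear.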