Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are rules of military engagement that protect civilians and criminalize the misuse of violence in the event of war. If used by armed forces (this is already happening) to create autonomous weapons capable of choosing targets, AIs could be programmed to disregard the rules of military engagement or, worse, discover for themselves that these rules should be disregarded because they make success of its mission difficult. Statesmen, military ministers, battlefield commanders and soldiers are or can be held responsible for the war crimes they voluntarily choose to commit or fail to prevent when they can. But who will be responsible if an autonomous weapon chooses to commit a crime? The creator of the AI, the weapons manufacturer or the military that decided to employ it, completely losing the ability to make choices in the battlefield? That's a problem worthy of attention I suppose.
youtube AI Responsibility 2024-06-16T11:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzfL97b_1nemN0Clbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwoxQvIcjRpJ7cvf054AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyFigHDsyAo2Gl3wS54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzbDP_mJjJzyO2_aCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykhEQFBsGUSzpCTXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzJloDBV9uc9XXEA9N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxqmu0us6WbTQINMfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxq9UzNz-eLtvKgjFZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw-Hwk_MDDigQjwavh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAhUF5xiMqSGN0nJt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
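The raw response is a JSON array of per-comment codes. A minimal sketch of how such a batch could be parsed and sanity-checked, assuming Python; the four dimension names are taken from the coding-result table above, and the `parse_batch`/`tally` helpers are hypothetical illustrations, not part of the actual pipeline:

```python
import json
from collections import Counter

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and verify each record's shape."""
    records = json.loads(raw)
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record {rec!r}: missing {missing}")
    return records

def tally(records: list[dict], dimension: str) -> Counter:
    """Count how often each code appears along one dimension."""
    return Counter(rec[dimension] for rec in records)

# Example: one record shaped like those in the raw response above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
records = parse_batch(raw)
print(tally(records, "emotion"))  # Counter({'fear': 1})
```

Validating the shape before storing codes catches the common failure mode where the model drops a dimension or returns prose instead of JSON.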