Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Massive armies disciplined by wage labor are a thing of the past. Today, soldiers accustomed to playing video games operate semi-autonomous war drones thousands of miles away from the theater of operations. Will militarized Artificial Intelligence command autonomous war machines and human combat units in the future? That is possible, but the specter of unpredictability will never cease to haunt warfare.

The worst military defeats have been the result of stupid calculations. The Trojans accepted the gift of the Greeks; the Romans hired Alaric's barbarians as mercenary troops; Napoleon and Hitler underestimated the Russian winter; the Americans trusted the Afghans. Can the use of AI (Artificial Imbecility) also backfire? The answer is yes. On a battlefield, everything is dynamic and unpredictable. Incorrect actions can result in strategic advantages; following a plan never guarantees that an accidental victory for the enemy is impossible. A group of soldiers who do not understand the command they have received from their officers jeopardizes the success of their country. Armies will never be fully automated. Therefore, no matter how good a militarized AI is, its success in war can be undermined by human error.

In 1983, Stanislav Petrov averted a nuclear war by refusing to accept that American missiles had been launched against the USSR, despite the indications given to him by computers. In the future, the recurrence of the “Petrov effect” may prevent the scenario from the movie Terminator. But if that does not happen… it is better not to even think about it.
youtube 2024-07-25T22:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwX_wLvYWlgPQfI5kB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw2veih0oEdOx_NYSx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzNX4ie-t9hZynecml4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw7GAjE128VgBOaNDt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwcJT7O9cgixzcVtMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-QhvLGhkv9JaG9Ol4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwsj_CeyrMZqu45fRJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwNrYByxVy9v-KqVGN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgytaH3zApinIW2fWKZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_jtRA2yhnUvEXv2h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
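A response like the one above can be turned into per-comment codes by parsing it as JSON and indexing the records by their `id`. The sketch below is a minimal illustration, assuming the model returns valid JSON in exactly this shape; it is not the pipeline's actual parsing code, and the truncated two-record sample is only for brevity.

```python
import json

# A shortened stand-in for the raw LLM response above (two of the ten records).
raw = """[
  {"id":"ytc_UgzNX4ie-t9hZynecml4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwX_wLvYWlgPQfI5kB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

records = json.loads(raw)

# Index the coded dimensions by comment id so each comment's row can be looked up.
codes = {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

print(codes["ytc_UgzNX4ie-t9hZynecml4AaABAg"]["emotion"])  # indifference
```

In practice a parser would also need to handle malformed output (non-JSON text, missing fields), since nothing forces the model to comply with the expected schema.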