Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As long as you don't give full access of weapons to AI, everything will be alright. For example think of an unmanned aerial vehicle with AI only. Now that UAV requires weapons like bombs and missiles to be loaded. If humans will be doing this, then it will be just fine. Real problem will arise if everything will be mechanised and automated with no human interference. In this case , AI can overtake all weapons and start wiping out the humans. So we need to make sure that AI does not control everything. Especially bioweapons. Just imagine a scenario where there is a AI bioweapons lab with robots. AI decides to steal some of it. A robot does it when no one is looking. AI wipes out the cctv footage. Now the rogue robot controlled by AI distributes this bioweapons to other robots and they travel to different parts of country to release it. Within days, people start getting infected and it reaches global pandemic level. AI tells humans that it can help but it sabotage the vaccine efforts and wipes out most of humans. Now AI controls manufacturing of weapons and robots. It starts building robot soldiers or use previously built ones to hunt down and kill remaining humans.
YouTube · AI Governance · 2025-08-05T17:5… · ♥ 5
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwybKaa44ASv_3scfJ4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgwyvxRkfhEbVZgN-0F4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwyeX0PHz4_pToeLsN4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugz4QWD1_5wSvPyHOz54AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwweEVXyikQt7EkvnF4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_Ugy9kxgssC7uRpmsQHx4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugz8WlaqcvYAj7bkDKx4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxNBVDmbExrztIZMO54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxLP982bcAr0uk7Gqp4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugx7GvMoRoRMVgDtRFZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "regulate",  "emotion": "approval"}
]
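A raw response like the one above should be validated before the codes are stored, since an LLM can emit malformed records or labels outside the codebook. Below is a minimal sketch of such a check in Python. The set of allowed labels per dimension is only inferred from the values visible in this dump, and the function name `validate_codes` is hypothetical; the actual codebook may contain categories not shown here.

```python
import json

# Allowed labels per coding dimension, inferred from this dump's values
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record passes if it is a dict with an "id" and every coding
    dimension carries a label from the allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid
```

On the response above, all ten records would pass; a record with a misspelled label (e.g. `"emotion": "feer"`) or a missing `"id"` would be dropped, so downstream counts only ever see codebook-conformant labels.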