Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How do you target people by ideology? How does a ban on "authorized development" combat those building such a weapon for an intentionally illegal (by any current standard) action? If it's about current tech not being able to make such decisions, who safe is it to use them on the road today in our cars? Current AI isn't even AI, it's ML, and that has no path to "greater intelligence or understanding" than it did in the 80's. Besides, freedom of speech is already under attack with the "dumb systems" we have today. Where is the call to ban automated flagging of social content? It's fine to say we shouldn't deploy such systems because they don't have the awareness and context needed to make life or death decisions, but that is different than an outright ban on research which is needed to make things like driverless cars safer. What about defending against such autonomous systems? China has zero ethical issues with the kinds of research it is doing. Some feel good Western platitudes has no impact on them (perhaps a tad ironic if sanctions and a blockade of China in the future should they be in violation of a ban actually precipitates military action). And the public stigma part... videos like this create the stigma (or the slaughterbots video esp.). There are a lot of communities that would likely welcome a robocop (that isn't afraid of being shot at) to replace the trigger happy humans that respond to 911 calls.
youtube 2019-03-29T22:2… ♥ 8
Coding Result
Dimension      | Value
Responsibility | developer
Reasoning      | consequentialist
Policy         | regulate
Emotion        | mixed

Coded at 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx4Yx8AcXKNXiQaEy14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxXost-AqMfam2LJQR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxQ52RuyvbG_79WTJR4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyQQnu_A2euTiryB594AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyCIpPW0_RqBl62dd54AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy6gbmYrQqK5fbu7LR4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxYpg-BktSpwpZwlsZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwRQbXfUBH-1Y4VGHR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugxs2p_JNFpEgG0jArN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzsc-9cac0DPrRIuRl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
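A minimal sketch of how a raw response like the one above could be parsed and validated before being stored as a coding result. The value sets below are inferred only from the labels visible in this response, not from the actual code book, and the helper name `parse_coding` is hypothetical:

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
# (Shortened to the record for the comment shown on this page.)
raw_response = '''[
  {"id": "ytc_UgyQQnu_A2euTiryB594AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "mixed"}
]'''

# Allowed values per dimension, as observed in the responses above.
# This is an assumption -- the real coding scheme may allow more labels.
SCHEMA = {
    "responsibility": {"developer", "user", "government", "company",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "resignation"},
}

def parse_coding(raw: str) -> dict:
    """Parse the model output and index records by comment id,
    dropping any record with an out-of-schema value."""
    records = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            records[rec["id"]] = rec
    return records

coded = parse_coding(raw_response)
print(coded["ytc_UgyQQnu_A2euTiryB594AaABAg"]["policy"])  # regulate
```

Validating against a fixed label set like this catches the common failure mode where the model invents a label outside the code book, rather than silently storing it.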