Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI should be controlled in a similar way to nuclear arms. It should come with the same level of concern and severity. Yes, like atomic energy, AI can be extremely useful and it can help us advance, but it can also be extremely dangerous in the wrong hands. It's already spelling bad news for people working in the creative industries like voice actors, music producers and visual artists alike. The AI race is basically the new space race and the new arms race. If one country has it, then the other has to outdo them. In the process, systems are getting more and more advanced and there is little control over who can possess such technology. Anyone with a home PC can train an AI to do all sorts of things, including impersonate others, swap faces on images and create 'fake news' and false information. AI-generated spam can be so convincing that even the most savvy people can fall foul of it. There needs to be a global treaty on AI control and its proliferation in a similar way to how we have treaties which prevent nuclear arms proliferation. These treaties should limit AI use in areas like defence, finance, medicine (although its application in medical and scientific research should be allowed with stringent controls in place) and government. AI should be centered around human interests and things that benefit us; it does not have a place on the battlefield, for example, where advanced AI systems could be used to commit war crimes with plausible deniability for the offending party.
youtube AI Moral Status 2025-06-08T09:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugz6iwdnKdcUE2DKv054AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzfCXo6_G3kF_LNXDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyO3tUSXDuTYIK6iJl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyg8zIBwCUtYHEZYuV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxKNhKtPojWz-13TZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyYS6PtrkGJV17QiT14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwv3seHZKYuRorO2pZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgydX5R84ERgbBnZeTR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx6ppjIBSQqpeAINEd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"unclear"}, {"id":"ytc_UgyK2RjAOqG-T5XItJh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"} ]