Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The real dangers lie in the same place they always have. Crucial infrastructure systems & such must remain isolated & that's all there is to it. The reasons we already don't comply to that basic notion is just due to our gross incompetence, laziness & greed, regarding the dangers from ordinary hacking. We'll do anything to save a buck & call such unnecessary automations progress. Just hire a few more damn humans at the power station FFS. & further down the line, just don't load the damn thing into any machines/robots(NOR even have that ability) that can physically harm anyone in the 1st place. I think we all watched one too many black mirror episodes TBH. It's our flaws that are going to bite us, though it's true this can & probably will accelerate that. It already has in some ways but at least they don't seem to be directly life threatening at the moment. It seems to always take multiple lessons that involve suffering before we act appropriately to a situation.
youtube AI Governance 2025-08-26T18:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugzf6B5zuA-NQ3Os94R4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxjhqgUW1WmSy_L6ed4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwy6bK-E-_sjS8NyKN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx2ZzyG4tlS5Lf6qj54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgypVnkmiTYdJCwYBCl4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "resignation"}
]
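The raw response above is a JSON array of per-comment coding records, one object per comment id. As a minimal sketch of how such a batch response could be parsed and a single comment's coding looked up (the field names match the response above; the `index_codings` helper and its validation rule are illustrative, not part of the tool):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugzf6B5zuA-NQ3Os94R4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxjhqgUW1WmSy_L6ed4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

# Dimensions every record is expected to carry (per the table above).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_json: str) -> dict:
    """Parse a batch coding response and index records by comment id,
    dropping any record that is missing an expected dimension."""
    records = json.loads(raw_json)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codings = index_codings(raw)
print(codings["ytc_UgxjhqgUW1WmSy_L6ed4AaABAg"]["policy"])  # liability
```

Indexing by id makes the "inspect any coded comment" lookup a constant-time dictionary access, and the key check guards against a model occasionally emitting a malformed record.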