Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let’s develop AI based on human brain and let it reach the point where it inevitably learns to manipulate, cheat and lie, of course make it more intelligent than humans, too. What could possibly go wrong? 🤷🏻‍♀️
Source: YouTube · AI Governance · 2025-06-21T10:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzL2DSk80vMdFzvvx94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwTEAoLBZn6FzQtFhd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzF28K8aQrTqyM7dex4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyn_CHQulkp9s0oyih4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxCM5jtqWwS3646cm14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyVMlB9oqOTHXlryxl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwEqdYTc7rorSpo-jB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwNE7mEt2D3IqnDjlR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzr78Lqhn9gGH98yvl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyuoSE1UkGHNVbW8ER4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
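The raw response is a JSON array with one object per comment: the comment id plus the four coded dimensions. A minimal sketch of how the coding result shown above can be traced back to the raw output (the `lookup_codes` helper is hypothetical, not part of any pipeline; the ids and codes are taken from the response above):

```python
import json

# Excerpt of a raw LLM response: one object per coded comment.
raw_response = """[
  {"id": "ytc_UgyVMlB9oqOTHXlryxl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwEqdYTc7rorSpo-jB4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]"""

def lookup_codes(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

codes = lookup_codes(raw_response, "ytc_UgyVMlB9oqOTHXlryxl4AaABAg")
print(codes)  # → {'responsibility': 'developer', 'reasoning': 'consequentialist', 'policy': 'ban', 'emotion': 'fear'}
```

For the comment shown above, this recovers exactly the Coding Result table: responsibility developer, reasoning consequentialist, policy ban, emotion fear.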