Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans have all sorts of fear-driven objectives, and they think a machine will eventually behave like humans: it will have existential fears and decide to take over the world. Yes, if we deliberately program it this way, then... maybe. However, our LLMs do not have self-generated objectives; they lack long-term memory and continuous learning; they don't understand the real world; and so on. So we are still far from AGI.
YouTube · AI Governance · 2025-12-04T10:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzkaHi4i4WbTfYN07J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxnJy42h39uZwlo5jB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxXqIvlEBfeXrykYBR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxwdT4i8HUSmgV_svd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzm0F6DA5rfGzctsDp4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxlGobmdj7hFgQcBDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgzC_ctZBKXWJZIxhW54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzh8pqiexJcFvZ72FN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxXDZ0Mb03n7h9LcJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5sO8zIEoVkp8POOx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
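A raw response like the one above can be turned into per-comment codes by parsing the JSON array and validating each dimension. The sketch below is a minimal, hypothetical example; the allowed code sets are inferred only from the values visible in this response (the real codebook may contain more), and the function name `parse_batch` is an illustration, not part of the actual pipeline.

```python
import json

# Allowed codes per dimension, inferred from the response shown above.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"indifference", "resignation", "outrage", "approval", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment id to its codes, rejecting out-of-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage: parse the first record from the response above and look up one code.
raw = ('[{"id":"ytc_UgzkaHi4i4WbTfYN07J4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_batch(raw)["ytc_UgzkaHi4i4WbTfYN07J4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed code set catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently corrupt downstream tallies.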