Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think some of the analogies are very generalised and in my view wrong. For example, comparing the arms race (which by definition was about war and destruction) with the AI race which is about who has the better more intelligent system is wrong. If we do a comparison, then we conclude that AI is about war and destruction, which obviously it is not. AI should be stopping us of doing bad things, making bad decisions or prediction bad things. Historically the largest number of human deaths were caused by religion (man-made), wars (man-made) and disease (man-spread). So what if AI stops all the bad things? Also about this gorilla problem, this is not a proper comparison or analogy. The gorillas will never understand the difference in intelligence between them and humans. With humans, you can reason, communicate, understand. It is not about us being more intelligent that gorillas.
youtube AI Governance 2025-12-06T19:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgywCcOS-qa6xCJxT3d4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgyLMHUK_Ydr4CVKFtt4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxFzcjQTwFi8wKHtC94AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzoLDNz5PX5rHFvRVh4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgzdDByjokAHZCRsyc54AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugzicl3veLvMyjzKRt54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwEZ12voOS-Cu-tLBZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgyPBXax3hfTyFkR71F4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzyiuH83f1r97718Ax4AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgzJaXZRz4jqI9OA3bB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"}
]
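A raw response like the one above can be sanity-checked by parsing the JSON and confirming that every record uses only recognized labels for each dimension. The sketch below is a minimal illustration; the allowed label sets are inferred from the values visible in this response, and the actual codebook may define additional categories.

```python
import json

# Allowed labels per dimension, inferred from values seen in raw
# responses here; the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_records(raw: str):
    """Parse a raw LLM response and flag any record whose label
    for a dimension is outside the allowed set."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return records, problems

# Example with a single well-formed record (hypothetical id).
raw = '[{"id": "ytc_example", "responsibility": "none", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"}]'
records, problems = validate_records(raw)
print(len(records), problems)  # 1 []
```

Flagging, rather than discarding, out-of-vocabulary labels makes it easy to spot responses where the model drifted from the coding scheme before they reach the coded-results table.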