Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Computer development always worked exponentially (Moore's law, 1965 already). People like Hinton knew that when they started developing AI 50 years ago. Knowing this, it wouldn't have needed a genius to predict not only the emergence of a working AI in a few decades, but also that it will become much smarter than humans at least 40 or 30 years ago, and prepare for that. They didn't, because they didn't care, or because they were not smart enough to realize it, or they were bribed to forget about that. Now we have to face our potential extinction, and old people trying to explain themselves and to beg for forgiveness. The scary thing is, that even if AI generally turns out to be benevolent to us, we all would end up without anything meaningful to do as work. Humans would degenerate within a few generations to some kind of roaming apes, pampered by robots. Phew ... I'm really happy that I'm closing in on 70 right now.
Source: youtube · AI Governance · 2025-09-08T15:0… · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzirJYpapHpTI3oSvV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx1XWPHwrei-YQRUO94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzKg7LlvRFqH_FECZN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzknBUmybOJvk10Rk14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgymWHZChAa3x7RJvcp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxuem5FVz9AQ2T8I7l4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzLBJE-SFH9e6-Rgjx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzYsso-mkKK8bEVJMZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwjAlGlYWTjChStgbd4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxpeXLQAIr9-NMAFdV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
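A raw response like the one above can be checked before the codes are stored, since the model may occasionally emit malformed JSON or an off-schema label. The following is a minimal sketch in Python; the allowed values per dimension are inferred from the codes visible in this report (an assumption — the actual codebook may define more categories), and `parse_codings` is a hypothetical helper name, not part of the pipeline shown here.

```python
import json

# Assumed codebook, inferred only from the values observed in this report.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows matching the schema.

    Rows with a missing "id" or an out-of-vocabulary label are dropped,
    so a single bad row does not poison the whole batch.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return []  # the whole response was malformed
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Usage: one well-formed row passes, one with an unknown label is dropped.
raw = ('[{"id":"ytc_a","responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"},'
       '{"id":"ytc_b","responsibility":"aliens","reasoning":"virtue",'
       '"policy":"none","emotion":"fear"}]')
print([row["id"] for row in parse_codings(raw)])
```

Validating per row rather than rejecting the whole response keeps batch coding robust: one hallucinated label costs a single comment, which can be re-coded later, instead of the entire batch.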