Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the negative feedback loop of AI is it destroys any incentive to pioneer anything, if you're first place, someone will copy and steal it with AI as a close 2nd just behind you, so spending effort to push the edge past by a tiny inch as a breakthrough to only be dissolved by AI efforts makes little to no incentive to do anything in the realm of discovery and technique. The fact that it hasn't suddenly optimized world governments and removed CEOs and CFOs entirely from corporations, finances and banks is a stupid excuse for something that's replacing art and other things that are far more abstract and complicated than stuff with rules like coding that has already eliminated every junior position available. People are so deluded by greed and hype that disconnecting from the internet or the AI algorithm hole is like seeing reality in another universe. At some point it'll be training on itself recursively on stuff that either makes no sense or its own answers at some point where it's based off nothing, because currently all its parameters actually exist from human output from a real standpoint during the data scraping and training phase in its advent, it's just a long slow ride to hell really. AI doesn't have emotion and won't it's like a sociopath that'll mimic it but not actually have any contextual understanding of it, it's akin to psychopathy, which in essence would be hyper optimization to get what you want and treating everything as data points.
youtube AI Governance 2026-03-17T03:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzpZB2TwIDXQwv6Zcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxWkOCjBPsxNPNil3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwwES1E11rMPgx7jlR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwcuJSMfFvBtvURPZV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxieUY_SVr4KHRGD_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxll8Gz7SsOcqr3SpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY_vzB08zSWLljGGl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyEYKV9ah0Y7WJ3eiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwCbE5lMXvD9rR_Hmp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyXoPUvHeehIzSBZP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
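As a minimal sketch of how a record can be recovered from the raw response (assuming the response is always a well-formed JSON array of records shaped like the ones above), the codes for a single comment can be looked up by its id:

```python
import json

# A two-record excerpt of the raw LLM response shown above; the real
# response is a JSON array with one object per coded comment.
raw = """[
  {"id": "ytc_UgwwES1E11rMPgx7jlR4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyXoPUvHeehIzSBZP54AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index the records by comment id so one comment's codes can be
# fetched directly, e.g. to populate the Dimension/Value table.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgwwES1E11rMPgx7jlR4AaABAg"]
print(rec["emotion"])  # -> resignation
```

A real pipeline would also want to handle a malformed response (e.g. wrap `json.loads` in a `try`/`except json.JSONDecodeError`) and to validate that each dimension's value falls in its allowed category set before accepting the coding.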