Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As Elon knows and speaks of, we need to get over that 'civilisation hump' that has always destroyed every previous major human civilisation. Part of that process is to recognise that we need to solve the reality problem - people will always eventually do what is best for themselves. The people that 'made' the current social media driven social problems, did it for themselves - they did not plan it, but it happened. AI is the next big thing that people will make decisions about that are a benefit to themselves - without asking 'should we really do this'. Either humans will learn the value of listening to people like Elon, or we will yet again have a civilisation collapse leaving us to then rebuild yet again. However, the negative outcomes from AI could be the removal of humans from Earth, after decades of misery that is so prevalent in movies these days (a Walking Dead or Book of Eli or aTerminator type outcome). Unlike hollywood fantasy, if AI does take over, humans will be destroyed/removed. Perhaps that is our legacy - to create the life-form that causes our own destruction.
youtube · AI Governance · 2023-04-18T07:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugxd7W921BfAiqqn_X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzTfFQZ5y42fCy5y8R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzk6oWxOoFX6nEHaHN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz2R_WZqhidFaV8rS14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy52cI15FZ47jbqQNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzyUFh8ooKQT3mrTi14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxNAbd8K9PLBM9GKu14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_Ugy7tU1u8EOQ0ERt7iB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyFx6fMRiynIAwEXLF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxMIqpve1Y6NBpVT_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]