Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
I don't know. In humility, I may need to listen to this again. It is very long. However, it seems to me that his arguments are more philosophical in nature rather than technical. Like yes, if we could manage to create AI that was at the level he's talking about it would be really bad. However, my brother is a computer scientist and completely disagrees with him regarding where we are. He says we are NOT ANYWHERE NEAR that level of technology. Now, in fairness, that wouldn't change the fact that, yes, we should probably not be trying to create this technology in the first place. Sounds like a bad idea to me. I am just skeptical of his position that essentially humanity is going to completely crumble because of this in the next five years. It does make me wonder if his belief (or you might even say religion) that we are living in a simulation is driving the bus of his viewpoints more than he realizes. I'm more scared of AI being used to do things that are really important, like medicine, while it's still, quite frankly, at this point pretty stupid. Just think of how they've already implemented it for so many things and it doesn't work that well. Like how frustrating it is to try to get a live person on the phone. Now imagine they implement a relatively stupid technology, that is not ready for primetime in areas like medicine. That seems like a more realistic fear at this point.
Source: youtube · AI Governance · 2025-09-11T01:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx06jWp559Kas_jJ8R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwwN49z03zSmddoNgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwfIHnlDn7VkKpdWzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugym5sY6KBCgB8oR0t94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx2ako-f8a-00Svg314AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzuixos3evi4dpEXSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyGKgqESre5yjqzX8Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_UgxVmZQsn-l4LLCaj0x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwiLB95X2GSHB4TfS54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_Ugycp3NTGYMZpXNQ8up4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]