Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is zero evidence that consciousness resides in the human head or brain. Nothing, none, zero. To suggest that it does is a presumption is to reveal how one's own biases influence their "logic" and render it nothing but personal opinion. Wolfram who seems to have difficulty focusing on one subject at a time also seems inclined to believe that human replacement by AI isn't necessarily a bad thing. After this point in the conversation. So not only is this deeply offensive, in other words he supports the other team the nature of which is still a big question mark at the cost of the human species. Personally I have zero patience for people who purport to surrender their instinct to survive. I don't believe them, I don't believe him. I think it is performative. Besides, what place does that have in a discussion on AI risk, why not call it Human risk to AI. The whole thing was profoundly ridiculous despite the horsepower of these two thinkers. Nevertheless, it was still uncomfortably and annoyingly entertaining.
youtube AI Governance 2025-10-29T13:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzytbm32BmPyZWeuft4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxBO-wKvI2gMWMQXm54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyqKQ4Q2zAr2Pf3XpN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxd5GPgz0mc1vmWDml4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz7du4ZZIu4g61tYPd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyFaHsdftdvaS601Lp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugz5tlyVDY64cxGs0WB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyK2doOVquGPsMQeW14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwimqcLDJLMkOtSFeR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwJSuxSyyJkYu5zO7B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
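A raw response like the one above is a single JSON array, one record per coded comment. A minimal sketch of how such a batch could be parsed and indexed by comment id, so the coding for any one comment can be looked up (plain Python; the helper name `index_codings` and the shortened ids are illustrative, only the field names come from the response above):

```python
import json

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coding records)
    and return a mapping from comment id to its coding dict."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example batch with the same shape as the raw response (ids shortened).
raw = """[
  {"id": "ytc_example1", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

codings = index_codings(raw)
print(codings["ytc_example2"]["policy"])  # regulate
```

Note that `json.loads` would reject the response exactly as logged, since the array was closed with `)` instead of `]`; a loader for these dumps may need to tolerate or repair that kind of trailing-delimiter error before indexing.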