Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The presentation is good, but there are some caveats to be aware of. These caveats apply not just to this presentation but to almost all discussion of this topic.

First, at the core is a discussion about hallucinations in AI. No one talks about the parallel phenomenon of hallucinations in human-processed data, which is treated as if it doesn't exist when in reality it is rife. Thorough studies comparing such "hallucinations" in human processing versus AI would be very informative, but they would have to be repeated many times under different circumstances to give any constructive understanding of the phenomenon.

Second, now starting to be talked about but still misunderstood by most people, even AI professionals, is the fact that "hallucinations", both in AI and in humans, have their place. It is in this type of processing that creativity exists. Just think of brainstorming, where the tagline "there are no wrong suggestions" is common. So "hallucinations" aren't inherently bad; they become bad when uncontrolled, or in situations where strict adherence to the source is required.

Add to this that breakthroughs in limiting hallucinations are now arriving, often built on something akin to a more advanced version of "test-driven design", which has long existed (but is much under-implemented) in the programming world.

Note: none of these comments are new; others are making all of these points. They are just little discussed yet, and even then typically only in AI professional circles. At some point all of this will seem obvious, so the comments are time-specific: right now some might debate them, but they will eventually become self-evident. The main takeaway is that more attention on them will help expedite working solutions.
youtube · AI Responsibility · 2025-10-16T01:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyPnt-L_TSYCpahK6x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyXvi9ysBwjfXUHh4F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxYq3nI1rpK8WpjejB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyuZ-d99i-guYaHhYx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzuu_hFn_CCl_yUq-t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgwvPlsErNXi6Px_1zh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzz5VMBmYD6foS226x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyaHYDx41E32O8gK494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwbLn1QtfBFWg6pcIp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwEl6QzlTrPUc1Xci14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]