Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The real problem is that not all progress is good. There are things we do that are difficult, heavy on the individual, complicated, exhausting, but we still do them because we have a practical, necessary need for them: a job. Think about agriculture in general, but also think about assembling products or calculating taxes. All simple yet perfect examples. These things don't have a meaning by themselves, not outside the material benefits they give. Using technologies, and AI for example, in such cases is important: work smarter, easier and less, not harder and more. It's for the sake of humanity, it's for the sake of welfare. But introducing AI into something that isn't heavy and exhausting on humans, something that is not of strictly material and practical need, something whose difficulty is not limiting welfare: that's bad. Yes, you can make a living off art; yes, people might benefit from it. But it's not the same type of need and it's not the same type of process, and most of all it does not have the same meaning. It falls perfectly into a capitalist, inhumane world where it's all about speed and profits. Art is nothing without humans and humanity.
youtube AI Responsibility 2023-01-12T00:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwogLkFaRxd-CDIYot4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzshcW2TwR8djrjLLJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzxg67BVJ89Lp-ZYel4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugym2S2avDUe01vl_L54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxlvQggBAVRd-4K2FZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyRHwGNkOE9nYtRllR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwxwaAltxV98VnQyd54AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwp-eyRSgCcZ8Y5M0V4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwxlcEV6OODz_cDXet4AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzWLhjg0NjvGKU6gy14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
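The raw response is a JSON array of per-comment codings, keyed by comment id, while the "Coding Result" table above shows the coding for one comment. A minimal sketch of how such a batch response could be parsed and a single comment's coding looked up by id (the parsing code here is illustrative, not the project's actual pipeline; the id shown is taken from the dump above):

```python
import json

# Two entries copied from the raw LLM response above (shortened for illustration).
raw_response = """[
  {"id": "ytc_UgwogLkFaRxd-CDIYot4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxlvQggBAVRd-4K2FZ4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]"""

# Index the batch by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment; this id's values match the
# "Coding Result" table shown above.
coding = codings["ytc_UgxlvQggBAVRd-4K2FZ4AaABAg"]
print(coding["emotion"])  # → resignation
```

Indexing by id also makes it easy to detect comments the model skipped: any requested id missing from `codings` went uncoded in that batch.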