Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
18:26 He put to my thoughts into words! I do yell at my google home occasionally but I don't "talk" to it conversationally. These are tools and therefore-NOT HUMAN. Treating it as a human or sub-human is humanizing an object or technology and I don't think that is particularly healthy for people. Humans are naturally social creatures and we're already seeing warning signs from people developing a relationship with AI. I'm not saying no technology can improve human lives but relying on an AI as emotional support can be incredibly dangerous, especially since programs are prone to bugs, give wrong information, and go through updates that can "break" things. I don't know how to describe it but anyone who MODs video games will know the frustration of losing your Modded game progress because of an update. It's obviously going to be worse if you personify it because you're engaging in an unstable product that can change whenever the company updates the AI/program. I don't want to engage in robot slurs that anthropromorphizes a thing, I also don't want to name it or treat it like a person. These are just hardware that is no different from my oven or dishwasher. Now that might change somewhere down the line of human history/legacy, but as of now they're just tools. Granted they're not very good tools and honestly I prefer simple and straightforward technology that doesn't have to connect to a cloud server in order to work. This is why I will never buy a dishwasher that connects to my phone, but that's another conversation.
youtube 2025-09-17T13:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
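For readers working with these exports, a minimal sketch of how one coding result might be represented programmatically. The `CodingResult` class and its field names are illustrative assumptions mirroring the table above, not part of the actual coding pipeline; the example values are the ones displayed.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    # Hypothetical container mirroring the dimensions shown in the table above.
    responsibility: str   # values observed in this batch: "user", "developer", "company", "none"
    reasoning: str        # observed: "deontological", "consequentialist", "virtue", "unclear"
    policy: str           # observed: "none", "liability"
    emotion: str          # observed: "approval", "outrage", "mixed", "fear"
    coded_at: datetime

# The result displayed above, rebuilt as an instance for illustration.
example = CodingResult(
    responsibility="user",
    reasoning="deontological",
    policy="none",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```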
Raw LLM Response
[ {"id":"ytc_UgxJA3WJgR4fkJcynAF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugywt0Hzpl9ET3YQVvl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw_ZhMke7AFUxogoZh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyWosBSSExxdJ0H-iB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwIRIo3mrimWZDd0RV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwlcjLJyQ4j_pZFjed4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgzyD_pZOzJ9ao1xA9N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzKrn7tw266ilZv7G94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy8j5qs_OHDvCekeTJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxcWUrhcgtIQhjmkNZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"} ]