Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
tldr: You probably have a choice to not use AI. Don't use it. I dont think most of your comment section understands (or wishes to understand) what the problem with AI is.

We have been venting before AI as well, you can vent to a friend. If it is too secret to tell a friend, then don't tell it to a company that's actively collecting that information. If you _can_ choose something other than AI, but it was an easier/equally hard to choose AI, then think about the enviromental impact of AI. Most of your friends will allow you to vent to them if you need to.

Having an actual support structure is important, because the second you are unable to use AI because the owners decided to monetize or weponize it, you're losing your base. You can't have that with real humans. yes, you may fall apart, but you'll have someone else in your life almost always. Like geniunely, if you called a random, idfk, a classmate or sokmething and said ''dude, I'm NOT feeling well rn'' they're probably gonna listen to you.

We as a society have been choosing AI because it is easy and cheap, but that has made us much less tolerant of real people as well. If AI will validate whatever you tell it, there is nor reason you should go back to those pesky humans who want you to 'respect their wishes' or whatever. AI might LITERALLY be the downfall of society, and it bothers me how the most passionate reaction I hear is ''Oh, yeah, AI is bad'' and then NO CHANGE in the person who JUST said that. Ya'll are walking to your own gallows.
Source: youtube · Video: AI Moral Status · 2025-08-26T13:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugzhr5YiABoG43VH3st4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwb7yOgmKRYbyqtYZh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxrYrmW-_5_bzZ3GhN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzv827oB3mGGRYCsPZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy4m2gpG_JAu7EawZh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzR32nZ9h536SKCenJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx3irVGwQOBvRW7qYl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyJ28DKC06ddlWByMN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2ODPQ0c5D19iFcOB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyXK7Uxa_8P5XU5_gN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
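The raw response is a JSON array of coding objects, one per comment id; a per-comment result row like the Coding Result table above can be recovered by parsing the array and indexing it by id. A minimal sketch, assuming only this structure (the tool's actual parsing code is not shown here; the function and variable names below are illustrative):

```python
import json

# Abbreviated raw LLM response: one coding object per comment id
# (same shape as the full array shown above).
raw_response = """
[
  {"id": "ytc_Ugy4m2gpG_JAu7EawZh4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index the coding objects by comment id."""
    codings = json.loads(raw)
    return {item["id"]: item for item in codings}

by_id = index_codings(raw_response)
coding = by_id["ytc_Ugy4m2gpG_JAu7EawZh4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → user deontological none mixed
```

For the comment shown above, the looked-up object matches the rendered table row for row (user, deontological, none, mixed); the `Coded at` timestamp is presumably added by the pipeline at parse time rather than returned by the model.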