Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like this is fairly obvious no? If you train a pattern machine on all sorts of patterns from Bees, to Threes, to Antisemtic hate speech- its all going to just reproduce those same patterns. Chat GPT isnt any more evil than what we already have on the internet that trained it. I think the only concern we should have is how we use it and it what circumstances its ok. Furthermore, I feel like this video puts blame on AI workers who put this "makeup" on it which I find to be pretty harmful. These people have done the best current solution we have right? As in, we dont have a way to un-train the models on what they are already exposed to but we can "teach" them right from wrong. Same way u teach a child or a puppy not to steal or bite. Sometimes they still do but what is the alternative? It seems broadly effective is my point. But anyway, im confused on the point of the video, is there a different solution you had in mind? Or a message we should follow? I hope this comment didnt come off rude at all, it looks like a lot of work comes into these videos. I also know no one is going to read this and this will get buried so I think im really just saying this for my own sake.
youtube AI Moral Status 2025-12-17T23:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          industry_self
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugy_oYeuVnlbKzsR5FV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw6xwjArXVoZ3R8gOB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx64Nzf61HxEzVEyyJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw7s9XbcxQBf8cu3oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCWvGES7n9qt7Z7fZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyrLAeafNmmnmt2MTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxg-IgFe-scsYiueN94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxhv_xMbsHkvHBt8ZN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgwKDwHTaJdtdhbXs4p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzMqvgvQBmw4gtqMnV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
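The raw response is a JSON array with one coding record per comment, keyed by comment id, with the same four dimensions shown in the Coding Result table. A minimal sketch of pulling the record for one id out of such a response (the `coding_for` helper and the truncated sample array are illustrative, not part of the pipeline):

```python
import json

# Illustrative one-record sample in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugxhv_xMbsHkvHBt8ZN4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding record matching comment_id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

record = coding_for(raw, "ytc_Ugxhv_xMbsHkvHBt8ZN4AaABAg")
print(record["responsibility"])  # → distributed
```

This assumes the model returned valid JSON; in practice a `json.JSONDecodeError` handler is worth adding, since the whole point of inspecting raw responses is that models sometimes do not.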