Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is troubling. How will an AI deal with differing opinions? How will it deal with differing ideologies? It seems that whoever builds the AI will have influence over it, moving it in one direction or another. It seems like the battleground is just changing locations. Would a climatologist who is firmly in the camp that global warming is a threat ever want to use an AI that was not tilted in their direction? Would a climate denier ever want to use an AI that was not tilted in their direction? Would an LGBTQ activist ever want to use an AI that was not tilted in their direction? Would a Christian ever want to use an AI that was not tilted in their direction? Even though an AI is supposed to be artificial intelligence, learning on its own, I still think garbage in, garbage out applies. If not, and it can actually learn on its own, there's a possibility that global warming will be debunked by the AI. Would those climatologists acquiesce at that point? In my estimation, there is no way they will. Where will we go at that point? What will we do? This is the new battleground.
youtube 2025-01-13T16:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxdbgTWrH8RMKq3gO14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxEDTGzoHszlLO4q2l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzfSEp6XEO49JI_J5x4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyQNzXd2pk0QjsEhpx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwqTAh2jazQQpIeCU14AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxdc3hZFcGbRyKkKRx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzlbrhbAvYs8W3P_pZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy_JtXqJlIGcBMbRAZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwydQhCmkN9-gz6fi94AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwOhDeTdwd3ybcdhQB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
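A minimal sketch of how a raw batch response like the one above could be parsed and sanity-checked before use. The allowed vocabularies below are inferred only from the values visible in this batch; the real codebook may define additional categories, and `parse_codes` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from this batch (assumption:
# the actual codebook may contain categories not seen here).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_codes(payload: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and index the codes by comment id, dropping any record that uses a
    value outside the inferred vocabulary."""
    coded = {}
    for rec in json.loads(payload):
        rid = rec.pop("id")
        if all(value in ALLOWED[dim] for dim, value in rec.items()):
            coded[rid] = rec
    return coded

raw = ('[{"id":"ytc_Ugy_JtXqJlIGcBMbRAZ4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugy_JtXqJlIGcBMbRAZ4AaABAg"]["policy"])  # regulate
```

Dropping out-of-vocabulary records (rather than raising) keeps one malformed model output from failing a whole batch; a stricter pipeline might log or re-prompt on such records instead.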