Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is it's just a program. It's not a sentient being. And a program, no matter how strong or well trained, can only present the concepts that it's been trained to present, and as everyone's different, everyone's going to get a different take from what the AI has to say. Even an AI programmed by the best-meaning company in the world. For example, regarding the "driving a wedge" between the guy and his mother. An AI in a certain situation might understand that the person that's chatting with them is feeling gas-lit by the mother, and cautioning that person to be wary is clearly the way to go. But if the situation was more complex (not understood by the AI, or it's not the AI's fault, because the person chatting hasn't told them all the relevant facts,) then the AIs advice in the exact same situation might be to open up to people they trust, like their mother. Some of the time that would be the right advice, and some of the time it would be the wrong advice. And, in a way, it's on the human chatting to understand that the AI has limitations, and that it should be wary about placing too much trust in the AI when high-stakes situations are afoot. Not that AI companies don't have responsibilty at all. They do, and need to work harder at getting it right. But it's pretty easy to not get all the facts and just blame AI.
YouTube · AI Harm Incident · 2025-11-08T04:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwIKiXTnHYKRXo5gwh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugwiigrig9Tm1gecC054AaABAg", "responsibility": "user",        "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwnhDqRgSdd52_k9bR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgzYm1l47PamuSqZwtx4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwsofJ0YwBqLO8mHMZ4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwDjstqi4p-D-0N77l4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyJh-4VTbxECflUieh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgwoMvqnDFlL9xnuKXl4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgwVhSOzRDlvE0OvuAd4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
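One way to cross-check a coding result against the raw batch response is to parse the JSON array and look up the comment's id. The sketch below is a minimal illustration, assuming the response is valid JSON as shown; the variable names are hypothetical, and only two of the rows above are reproduced for brevity. The id used is that of the comment displayed on this page (`ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg`).

```python
import json

# Raw LLM batch response: a JSON array of per-comment codes.
# In practice this string would be read from the run log; two
# rows from the response above are inlined here as an example.
raw_response = """
[
  {"id": "ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwVhSOzRDlvE0OvuAd4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

# Index the rows by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the codes for the comment shown on this page.
target = codes["ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg"]
print(target["responsibility"], target["emotion"])  # ai_itself resignation
```

The printed values should match the Coding Result table above; a mismatch would indicate the stored result was taken from a different row of the batch.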