Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The user also has to be intelligent. You can't ask a human nor a LLM "What's wrong with me?" and expect a correct answer as it is severely missing context. Yet, I have noticed that LLM still tries to give you an answer when it should in fact ask a long series of clarifying questions to generate a correct answer. People are real idiots and think that Gooling "where are my keys" is going to give them an answer and when it fails to find their house keys off the web index, they call Google "stupid".
youtube 2026-01-21T13:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwgiDHcpIe7EOE44Fd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy4YV7zvuN8S0qlMv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyYSu47-_ZvBUHwOJp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz5pRBBIVYKd-qbv_t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz3pY50R0a7Vd7-O4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzYTkx7pv7ic0ulhOl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzOWE7-D7rNSpckpD94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgzgvL2b6LE0_0j5mdV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy9VQ0TzbYBcAFodot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgySY6I0fV96WWo88tp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"} ]