Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm sorry, but the idea that "I don't know" is not an acceptable answer from a LLM makes me even more skeptical of AI than I am now. It sounds to me like the goal is to keep people using the model even if you KNOW incorrect / unwanted / (even harmful) data is being returned? More proof to me that the majority of LLMs are just glorified search engines used to collect massive amounts of data for marketing research on those willing to use it. No thanks.
youtube 2025-11-19T17:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwmqSE39iyLF9Mzemh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzYAerrrQN1cFGzuv54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxKQbGcwkQCiN22Ibx4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-jbwUikWyu42QQLh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwkmE8elRaDem8h43h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyAA6Cclk9ioTzhY3l4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzgsddE9q6ecLOmpZF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzEUX_kG29sX3WlNcF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz9ByUof-k4ASQ3TYl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx9X-CkWUq6Km9w8zd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
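The model codes a whole batch of comments in one response, so a single comment's coding result is recovered by matching its id in the returned JSON array. A minimal sketch of that lookup (the helper name `lookup` and the two-entry subset of the response are illustrative; the ids and dimension values are taken from the raw response above):

```python
import json

# Two of the ten entries from the raw batch response shown above.
raw = '''[
  {"id": "ytc_Ugw-jbwUikWyu42QQLh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwmqSE39iyLF9Mzemh4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response."""
    entries = json.loads(raw_response)
    by_id = {entry["id"]: entry for entry in entries}
    return by_id[comment_id]

# The comment displayed above maps to this id in the batch.
row = lookup(raw, "ytc_Ugw-jbwUikWyu42QQLh4AaABAg")
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → company consequentialist liability outrage
```

This matches the Coding Result table above: the dimensions shown for the comment are exactly the fields of the batch entry with its id.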