Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It hasn't gotten better and won't. LLM's will hallucinate by default. Yes, humans can make mistakes, but you can usually reason with, teach, or correct a human. AI will be confidently incorrect and that frustrates me to no end.
youtube · 2026-02-12T00:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyLXcxQnp6DZMRMfr54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwU6lab2wMQS4q2rit4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxeJ40G8jxQSvJ_NrZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyCm9b6apkiRXGu6Zh4AaABAg","responsibility":"society","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyrTo4qS0cUwK83cb94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwyRnOfT-Sy6KSzbEB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzyu5OMg4lSqG7HTXt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw2ZbihN3uqk4iCO954AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy7T2R0oXnCBUUTm0J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyJr2L59UV4An5-QU14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]