Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Peak example for why LLMs have no place in encyclopedic use cases. They're intrinsically prone to amalgamating their training data ("hallucinating"), as their responses are purely based on the probabilistic relatedness of its training texts to the input text and its syntax. They don't think, they don't problem-solve. They just give words that have high probability of following or relating to the sequence of words you input.
youtube AI Harm Incident 2025-12-06T19:1…
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzoLDifIt3aG_H5fkR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx6qzQX67NVnFjaiFV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyQkdgazW2JmfA-pOh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2Op_dlIVfnjjbJt14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz6oDn-c9iudLgk7mp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw4yDwbmnCF-rOHUEt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwe2GpEtQphzk5mWqR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxEL8p2VjBFS8Wl3Kx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxmeuLX5hchAabtJRF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRhGSC9uJf9Y2W8NV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
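The raw response is a JSON array covering a whole batch of comments, so the coding result for one comment has to be looked up by its `id`. A minimal sketch of that lookup (assuming the raw response text is available as a string; the truncated `raw` here holds just the one record shown in the table above):

```python
import json

# Assumed: the raw LLM response, shortened to the single record
# that matches the "Coding Result" table for this comment.
raw = (
    '[{"id":"ytc_Ugw4yDwbmnCF-rOHUEt4AaABAg",'
    '"responsibility":"developer","reasoning":"deontological",'
    '"policy":"regulate","emotion":"outrage"}]'
)

records = json.loads(raw)
# Index the batch by comment id so any coded comment can be retrieved.
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_Ugw4yDwbmnCF-rOHUEt4AaABAg"]
print(coded["emotion"])  # outrage
```

Each record carries the same four dimensions (responsibility, reasoning, policy, emotion) shown in the coding table, so the parsed dict maps directly onto those rows.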