Raw LLM Responses

Use this view to inspect the exact model output behind any coded comment.

Comment
Just a factual correction - The transformer architecture and hallucinations are distinct concerns and have nothing inherently to do with each other. The problem with hallucinations has to do with the reward functions and the output form, not the transformer architecture. Basically "generative AI" inherently IS a kind of hallucination, we're just trying to constrain it to hallucinate what is true.
youtube AI Responsibility 2025-10-01T02:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
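
The coding result collapses each comment into four categorical dimensions plus a timestamp. A minimal sketch of that record as a Python dataclass, assuming the field names from the table above; the value sets in the comments are only those appearing in this example, not an exhaustive codebook:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    responsibility: str   # e.g. "none", "company", "ai_itself"
    reasoning: str        # e.g. "consequentialist", "deontological", "unclear"
    policy: str           # e.g. "none", "unclear"
    emotion: str          # e.g. "indifference", "approval", "outrage"
    coded_at: datetime    # when the coding was produced

example = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```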
Raw LLM Response
[ {"id":"ytc_UgxON7IADIPlVZP_D1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzt14V5zeHKK56QXhN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzh6KiELk5RX2fPA_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx4smyYTslg3jMp8UF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzSYQteWfHnViR5O4J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugzt0HKJhINzFZbCQjZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxt7dWOomzUVmAcJRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxMq2dGridvtRROlyN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw04JeSEMYPx3pvdE14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugys1hsau6sed4e7cXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]