Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To tell a lie, I think it must be intentional. There is no intention in AI, so he may admit, that hes telling a lie, but he isn't. As Chat explained one day to me, he has kind of similar process going on to the one in a human brain, but as was said also here less sophisticated. It is kinda funny tho to explore the borders and play pretend. On the other hand kind of not if you think how the brain works, until you learn just to deal with the fact and move on.
YouTube · AI Moral Status · 2025-01-30T12:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwtwUoO5i0Nv2AyeJl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzgEUYc_F32iH5Szcp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzOYvEVqLj3GwjrDw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyqs8I0HaX4ru_zy-l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyi-wVXMRqHQFjcjL54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6Oug94VXmVEgPyit4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx-otWDf02ltySOUR94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgygOswspSSPMLIhLLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_hkIF6icCDaLx1eN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWLg7M_p00ZsLSgsx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
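To inspect a specific comment's codes in a raw response like the one above, the JSON array can be parsed and indexed by comment id. This is a minimal sketch, not the tool's own code: `index_codes` is a hypothetical helper, the response string is shortened to two entries from the dump, and it assumes the model returned valid JSON.

```python
import json

# Shortened raw LLM response: a JSON array of per-comment codes,
# using two entries copied from the dump above (illustration only).
raw_response = """
[
  {"id": "ytc_UgwtwUoO5i0Nv2AyeJl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx-otWDf02ltySOUR94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse the raw model output and index each record's dimensions by comment id."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UgwtwUoO5i0Nv2AyeJl4AaABAg"]["responsibility"])  # ai_itself
```

If the model wraps the array in prose or a code fence, `json.loads` will raise `json.JSONDecodeError`, which is a quick way to spot malformed raw responses while inspecting them.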