Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Another issue is that AI models have been trained to be profoundly sycophantic and affirm you, even if your compulsions and desires are harmful. There have been reported incidents where the safety settings have failed and an AI has encouraged someone to do harmful behavior. If you don't care about privacy, then at least understand *this is not a helpful tool*
youtube · AI Moral Status · 2025-10-18T07:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgylwdS8GtcjjRLyvSJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwuI_C24I8TOcdUqbR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwf8_T8yxeebvMyHAN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgypcKLL4lw7rSRrKfR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzGtHqmPJqveUhwmE54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzKII-384bHYcK_k614AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzVU9eqbWIeD9Eieht4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzvQMLcHwQ8EUZgPDx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyo78DbeRF7fVK5wbB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxaP2ZiDUFfnDK1H1t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
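The raw response is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of turning that batch into a per-comment lookup, assuming the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper and variable names here are illustrative, not part of any real tool's API:

```python
import json

# Two records copied from the raw response above; a real batch would
# contain every coded comment in the array.
raw_response = '''[
  {"id": "ytc_UgzvQMLcHwQ8EUZgPDx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgylwdS8GtcjjRLyvSJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map comment id -> {dimension: value} for each record in the batch."""
    records = json.loads(raw)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

codings = index_codings(raw_response)
print(codings["ytc_UgzvQMLcHwQ8EUZgPDx4AaABAg"]["policy"])  # regulate
```

Indexing by `id` lets the inspection view above join each coded comment back to its dimension table in one dictionary lookup.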