Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem could be fixed by filtering training decisions through another AI that is trained on being good. The more complex the neural net becomes, the more those ethics will be intrinsically built into the modal.
youtube AI Governance 2025-06-17T11:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz52Uu5d7jdZooGen94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwNiI46-lZ_xn1zthF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAFia9aLqXWDKGZ-Z4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxD3IU8IumHHh6Q1r54AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzel7LjO1le8JN-J414AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxGxKSFfYUmqZQ4rIp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzlsLRRX7faQVr0svh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxjyG5mRbMiTbV2cTF4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugw0JgkbVkBZknwAaUV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzh0brdg4DNh490M054AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
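The raw response above is a JSON array covering a whole batch of comments, while the coding-result table shows only the record matching this comment's id. A minimal sketch of how that lookup can work (the `raw` string here is an excerpt of the response above; the parsing code itself is an assumption, not the tool's actual implementation):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = """[
  {"id": "ytc_Ugz52Uu5d7jdZooGen94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwNiI46-lZ_xn1zthF4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be pulled out.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugz52Uu5d7jdZooGen94AaABAg"]
print(coding["responsibility"])  # → ai_itself
print(coding["emotion"])         # → approval
```

Indexing by `id` rather than by list position keeps the lookup correct even if the model returns the batch in a different order than the comments were sent.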