Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Has anyone considered that AI developers have to develop a set of World Morality weights and then hope that once developers use those morals in the development of their agents, what's going to happen when (borderline) agents work with each other and produce a combined distortion of the original task. AI Should NEVER make decisions about anything until the World comes up with a Moral set of values that is agreed to by ALL. In other words never. Systems can be corrupted by the average fault of many components but each component when checked passes the tests, therefore providing a blameless platform to do intentional harm like component viruses built to combine to create a threat but each individual piece doesn't pose a threat by itself.
youtube 2025-10-09T15:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyAMGLYBaoHJDVr3A14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwN-evorx6RHjXAU4Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyhNFOnH0AMhcnUpB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzS7gNgHADUEG5HNXF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzGIRYcO7M9fyTyEWx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwvM82dmJvjQ32kCHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxyTb_Yx8GWkiW8WQ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwf8ONw1UL2MhuhvIp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzBHjG_fSCKe1ly1YB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPju0cCZqXN6oRYCd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
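The batch response above is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of joining the raw model output back to a specific comment (ids and values copied from the response above; the variable names are illustrative, not part of the pipeline):

```python
import json

# Excerpt of the raw batch response shown on this page.
raw = '''[
  {"id":"ytc_UgwvM82dmJvjQ32kCHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyAMGLYBaoHJDVr3A14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Index the array by comment id so each coding can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

# The coding for one comment id from the batch.
coding = codes["ytc_UgwvM82dmJvjQ32kCHV4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → developer deontological regulate fear
```

This matches the Coding Result table above (Responsibility: developer, Reasoning: deontological, Policy: regulate, Emotion: fear) for the displayed comment.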