Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What hurts the most is the fact that a group of humans agree to train those bots to be more efficient than them . I was part of a project where I was training bots to be harmless, precise etc etc funny thing those ceos of Ai companies will always win because I needed that money I had to do the job . This means they will always find Humans to help destroy humanity
Source: YouTube · AI Governance · 2025-09-14T19:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwE94flJICMea32KjR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZAjTjwVniNqR5-6J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYxRf6X6SABVopfLh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-bPu5edh5CiGpCfN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyeHffo2Jci-FqpM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPxbxEopOUQ1pSvRB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz8aEcWmjGe31ryTl14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSira2dCeoNwPBQ9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzUzs-C6mRScZiZqip4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
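The coding table above is recovered from this raw array by matching the comment's id and reading off the four dimensions. A minimal Python sketch of that lookup, assuming the raw response is well-formed JSON with the field names shown above (the helper name `coding_for` is illustrative, not part of the pipeline):

```python
import json

# Raw model output, truncated here to two of the records shown above.
raw = """
[ {"id":"ytc_UgyeHffo2Jci-FqpM6x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwOd3doJBOqfR5wSxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]
"""

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(records, comment_id):
    """Return the {dimension: value} coding for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            # Default any missing dimension to "none", the coder's null label.
            return {d: rec.get(d, "none") for d in DIMENSIONS}
    return None

records = json.loads(raw)
print(coding_for(records, "ytc_UgyeHffo2Jci-FqpM6x4AaABAg"))
# → {'responsibility': 'developer', 'reasoning': 'deontological', 'policy': 'liability', 'emotion': 'outrage'}
```

Note that model output is not guaranteed to be valid JSON; a production version would wrap `json.loads` in error handling and fall back to re-prompting or manual review when parsing fails.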