Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Google the Company, (Not the Machine), has already made malicious AI. They just haven't got caught testing yet. You don't Write code to prevent the Self Development of Sentient AI, unless you've tested and failed. Hence the rule. Rules are created as a safety response to negative actions and past events. Or as protection from obvious dangers to society.
youtube · AI Moral Status · 2022-06-28T00:1…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzWXNVZfyhm2oO5v5l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzvQDRmY2BaSg2cgup4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy82DUGzSHL6h6vYMN4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwnu88o4ljon0Qjsdd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyhtE3uMBE8bI2mhOB4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
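To cross-check a displayed coding result against the raw batch response, the JSON above can be parsed and filtered on the four coded dimensions. A minimal sketch in Python follows; the variable names and the matching approach are illustrative, not part of the actual coding pipeline:

```python
import json

# Raw LLM response, copied verbatim from the block above.
RAW = """[ {"id":"ytc_UgzWXNVZfyhm2oO5v5l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzvQDRmY2BaSg2cgup4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy82DUGzSHL6h6vYMN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwnu88o4ljon0Qjsdd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyhtE3uMBE8bI2mhOB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"} ]"""

records = json.loads(RAW)

# The coding result displayed above for this comment.
expected = {
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "liability",
    "emotion": "outrage",
}

# Find the record(s) whose coded dimensions match the displayed result.
matches = [r for r in records if all(r[k] == v for k, v in expected.items())]
print(len(matches), matches[0]["id"])
# prints: 1 ytc_Ugwnu88o4ljon0Qjsdd4AaABAg
```

Exactly one record in this batch matches all four dimensions, which identifies the comment's id in the raw response without relying on ordering.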