Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is falsely called "intelligent". The truth is it cannot think, makes serious mistakes that no human would and consequently needs to be fact checked constantly because it "hallucinates". And physics papers and legal defenses are being trusted to AI. Big mistake.
youtube AI Jobs 2026-02-08T16:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz-Unn-SbpwOSUqWi14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxbGkFo89Da_iA947Z4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyQp6uSu2a5qetyFzR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyW3lVCnS4JA-_FItR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyXRkn7I5FanJ2ggXJ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_Ugz07fDtX94faZthEWR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw9LlH3-4c_hvlR2LF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxtBhqFCHvC6NutTI94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgxX5UMdV1Qsj1c1PxF4AaABAg", "responsibility": "leader", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwLmS91X9xmVPJjjyd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]