Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The bigger issue is that AI models are rarely trained to disagree with what you tell them or to make you confront whatever problems you're having, so instead you get a sort of echo chamber where it agrees with you. This works okay for people that just want to vent, but is harmful for people that actually need to discuss their problems and receive help.
Source: youtube · AI Moral Status · 2024-09-23T20:0… · ♥ 7350
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxnrCs_wDNCz6mgvfV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxzPCMj6a3xb9EA7dN4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyAkGA4ZEnJYKq1DCN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyoTSymHSd6AGWs84t4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwfo6uZay1Byz943414AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxCykb8cS-cNueA4yJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzgoHxKjlAtXvJ_5Rd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugz1u_BxUhGxV-jwThB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwcnjNJjDT_q_HdCPZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzbCYaGr7pMDKzAgwB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]
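A minimal sketch of how a raw response like the one above could be parsed and matched back to a single comment id. This is an illustration, not the pipeline's actual code; the example data is an excerpt of the response shown on this page.

```python
import json

# Excerpt of a raw LLM response in the same shape as the one above
# (a JSON array of per-comment coding objects).
raw = '''[
  {"id": "ytc_UgyoTSymHSd6AGWs84t4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]'''

# Index the codings by comment id so one comment's row can be looked up.
codings = {row["id"]: row for row in json.loads(raw)}
target = codings["ytc_UgyoTSymHSd6AGWs84t4AaABAg"]
print(target["responsibility"], target["emotion"])  # -> developer outrage
```

Keying the parsed rows by `id` mirrors how the page joins the raw response to the single comment displayed: the comment's id selects its row, and that row supplies the dimension values in the "Coding Result" table.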