Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The argument over whether or not it is intelligence is honestly quite silly to me. If it consistently gives intelligent answers, why are people so adamant that it's not intelligence? Also, with the right prompts LLMs are actually remarkably good at reflecting on their own answers, listing options, and suggesting improvements and resources. I don't care whether or not that is "doubt" per se, if it has the same effect. And I also don't see why it won't be able to consult with other experts, given that the mixture-of-experts architecture they are built on does this inherently. The accountability issue is a real one, but that is also a very human motivational factor. Accountability and consequences are inherently variable: some people care more, and some situations put enormous pressure on them. Eliminating it could actually be beneficial for more consistent results.
youtube AI Harm Incident 2024-06-02T22:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytr_Ugz_GALp9O-msg41hIZ4AaABAg.A46Ul0Dio7NA49AgXCeGzy", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugz_GALp9O-msg41hIZ4AaABAg.A46Ul0Dio7NA49OsK9cns-", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugz_GALp9O-msg41hIZ4AaABAg.A46Ul0Dio7NA49QTsdYj6s", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugyx60GwAmPsAACY1-R4AaABAg.A46SWXTD5TNA46VZJXgMe8", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugyx60GwAmPsAACY1-R4AaABAg.A46SWXTD5TNA4CCCVWceV9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgypF5885oQoXN19aMh4AaABAg.A46RTL5NQHoA4A6uf9gzXo", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgypF5885oQoXN19aMh4AaABAg.A46RTL5NQHoA4GbkL5BEcH", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgxHlgO2QKiD6XKFUo94AaABAg.A46Q_PlFoHdA48TXK5ln-7", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgwyTaQbGoahOm5Q5RV4AaABAg.A46NLLsRDv2A46Vm4E0_Vo", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytr_UgwyTaQbGoahOm5Q5RV4AaABAg.A46NLLsRDv2A47oTO9m6TX", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
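A raw response like the one above can be checked and tallied programmatically. The sketch below is a minimal illustration, not part of the coding pipeline itself: it assumes the response is a JSON array of records carrying the four coded dimensions (responsibility, reasoning, policy, emotion), and the shortened ids ("ytr_a", "ytr_b") are placeholders, not real record ids.

```python
import json
from collections import Counter

# Hypothetical two-record response in the same shape as the raw output above.
raw = '''[
  {"id": "ytr_a", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytr_b", "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "liability", "emotion": "fear"}
]'''

# The four coded dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw_json: str) -> dict:
    """Parse a raw coding response and count values per dimension."""
    records = json.loads(raw_json)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for rec in records:
        for dim in DIMENSIONS:
            # Treat a missing key as "none" rather than failing the record.
            counts[dim][rec.get(dim, "none")] += 1
    return counts

counts = tally(raw)
print(counts["emotion"])  # Counter({'approval': 1, 'fear': 1})
```

Tallies like this make it easy to spot-check the distribution of codes against the per-comment results shown above.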