Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
the use of the word "Intelligence" is the problem. We have created an algorithm based tech that mimics human outputs but it is no way intelligent. Relying upon a system that contains no intelligence is a mark of human stupidity. Alignment will never exist because AI isn't intelligent and so human intervention is always required. Not to say "machine learning" isn't useful.
youtube AI Governance 2025-10-30T11:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw5FMG_q8YjisCwvqJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAY8vNhZk-3OjwBUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxxWx6cIo69q8nPH314AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwKriY1t3DCQT8Ya5p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxkUX2CIqoYfQcnI5V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1TJxC0flXLlbgIId4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwcfO9bHyJcoYhaQMp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxExXwpT9eXmXXIt6R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgwDU_RS007lj6ACr0J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwu_R4LfV8TZxgtgkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
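The raw response above is a JSON array covering a whole batch of comments, while the coding result shown earlier is for a single comment. A minimal sketch of how the batch output can be parsed back into per-comment dimension/value pairs (the two records included here are excerpted from the full response above; the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM batch response: one coded record per comment.
raw = """[
  {"id":"ytc_UgxExXwpT9eXmXXIt6R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugwu_R4LfV8TZxgtgkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

records = json.loads(raw)

# Index by comment id so a single comment's coding can be looked up.
by_id = {r["id"]: r for r in records}

# Recover the Dimension/Value pairs for the comment shown above.
record = by_id["ytc_UgxExXwpT9eXmXXIt6R4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {record[dimension]}")
```

Looking the record up by `id` is what ties a row in the batch response back to the individual comment it codes.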