Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I believe the main problem is: AI is trained on OUR DATA, DECISIONS, and how (some people) MANIPULATE for personal outcomes - too many edge cases are getting through. No AI model for walking better has ever expressed a desire to murder. It was never given that that option, it didn't NEED to know it. If "they" want models that align - Stop giving Models access to ALL HUMAN DATA and focus pure information. Retrain without Media, Social and random people's blogs/posts.
youtube · AI Governance · 2025-08-26T21:4… · ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_Ugyif_sc77RlBEFmK7B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugywa4LF4OBHoQOc9wd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy4jDLFrtSlYFepcOF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwiDgSRYYzRbWpOh3t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugwe9vBs2lYIVYbhxuZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"} ]