Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a Software Engineer who studied AI, you are absolutely right to say, "It will break in ways we cannot predict." It's the inherit unpredictability that plagues this self-driving. Machines cannot learn like humans and they don't have the obvious knowledge I agree with every single point and think "Haha. Of course! Of course they would break like that!" It reminds me of every other software I used or made. There is no bug-free software.
Source: youtube · Posted: 2025-06-17T12:5… · Likes: 27
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
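Each coded comment reduces to one record with a single value per dimension. The sketch below shows that shape as a Python dataclass; the class and field names are assumptions for illustration, and the example category values are taken from the responses shown on this page.

from dataclasses import dataclass

# One coded comment, mirroring the table above.
# Names are illustrative, not the pipeline's actual types.
@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_UgzSvKbKPaVCQe-pCeZ4AaABAg"
    responsibility: str  # "developer", "company", "ai_itself", or "none"
    reasoning: str       # "consequentialist", "deontological", or "unclear"
    policy: str          # "regulate", "ban", "none", or "unclear"
    emotion: str         # "approval", "fear", "outrage", or "indifference"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"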
Raw LLM Response
[ {"id":"ytc_UgzSvKbKPaVCQe-pCeZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxBzm7IjQ_DJlIdP0h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwgc_iZK35AE6e1q6F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyFb5EMCRvNvjCW9Rh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxq0H-9-tWS2c5Uvbl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgymHon47qTY6_UTBl54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgykyyZ7SrVM-cuJ2Wt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzgN5lpNoucCqMkWZl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx9__fWlXgsS69NzCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugx2kLR8wxM69ZYzYNN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]