Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something I personally think is that a lot of people today don't know AI can often be wrong. In the future it is possible that AI will still be wrong, but programmers who have domain expertise in other fields will be unable to correct AI. The problem is that it is like having a computer Donald Trump that confidently hallucinates knowledge and lies faster than you can fact-check.
YouTube · AI Governance · 2025-11-10T07:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwWN9HYLhKhLsWG9Hp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzNeCTMBap6wDZ3-yV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwkk3BIx48urhi4_FR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyI-cxxqj6Y5zdTUWt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugza1g0TKwiydEbC-Cx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwaaL3HcWDmtTKLaQR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzXZ9fcdPGUDt2qO1Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy30GuA7DX1w5qQeXJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz_VuGAmLe1AJupF6p4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxZVgZDHbcrcazZlp94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
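To inspect the exact model output for any coded comment, the raw response can be parsed as a JSON array and filtered by comment id. Below is a minimal sketch; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the helper name `lookup_codes` and the truncated sample array are illustrative assumptions.

```python
import json

# Abbreviated sample of the raw LLM response shown above: a JSON array of
# per-comment code assignments (one object per comment id).
raw = '''[
  {"id": "ytc_UgxZVgZDHbcrcazZlp94AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

def lookup_codes(raw_response, comment_id):
    """Parse a raw coding response and return the code dict for one comment id.

    Returns None if the id is not present in the response.
    """
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

codes = lookup_codes(raw, "ytc_UgxZVgZDHbcrcazZlp94AaABAg")
print(codes["responsibility"], codes["emotion"])  # developer fear
```

This matches the coding result shown above: the comment coded at 2026-04-26 was assigned responsibility "developer", reasoning "consequentialist", policy "liability", and emotion "fear".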