Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI only reproduces information it's been given. If the info is wrong, so is AI. How do humans know it's wrong? I'm afraid we're still the standard of intellectual thought.
youtube 2026-03-13T15:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxnYpW5BjFVR57AP6R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzEwbbdxxDAAyYyNEh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwlTeOTL2g-IFGKqYh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyWHi9WoyIzj1QrgWN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyitxB0Kzf30vDgNMR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxpx-aSDSr7Xok1Ueh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwSyfXknkINJIo5em14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxaizM7nX9bkBXvwAp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwf7E0Rvh_50EZ1pYJ4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw0VHPeKMWn6aaQrZ54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
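Since the raw LLM response is a JSON array with one coding object per comment, a comment's coded dimensions can be recovered by parsing the array and indexing on `id`. The sketch below is illustrative only, using the field names visible in the response above; the two-entry sample array is a stand-in for the real batch.

```python
import json

# Truncated stand-in for the raw LLM response shown above
# (the real batch carries one object per coded comment).
raw = '''[
  {"id": "ytc_UgyitxB0Kzf30vDgNMR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw0VHPeKMWn6aaQrZ54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''

# Build an id -> coding lookup so a single comment's result can be inspected.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgyitxB0Kzf30vDgNMR4AaABAg"]
print(coding["emotion"])  # approval
```

A lookup like this also makes it easy to spot mismatches between the displayed coding result and the raw batch output for the same comment id.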