Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't actually worry about true AGI. A being that is exponentially more intelligent than a human would be peaceful. The worry would be something that falls just short of true AGI. 10 times smarter than a human, but only when it comes to making paperclips. I'm hopeful it's not possible, like the first superhuman AI will probably be optimized for training AI, and it will immediately make true AGI as it's first action. Guess we'll see!
Source: YouTube · AI Moral Status · 2025-12-31T23:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzEsVP21oc-Ojg4dyt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzYRoa7_MkTAteWoDx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwvkRx-YifE-9AlUX14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz-wjt_YlEqPDKv_MF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJaZumQ05Kxh8qzcV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgydiX0uQWQfuct__il4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwGQez1Jq1r6JRHyat4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwtqJdABfpvaNggSwV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzaQUYEJsaVOXIloFh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxInO7aIaWh0b66EUF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
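A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before the codes are stored. The allowed value sets below are inferred only from the codes visible in this section; the project's actual codebook may define more categories, and the `validate` helper is hypothetical, not part of the coding pipeline.

```python
import json

# Allowed values per dimension, inferred from the codes seen above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "resignation"},
}


def validate(raw: str) -> list:
    """Parse a raw LLM response and check every coded dimension."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records


# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgzEsVP21oc-Ojg4dyt4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
records = validate(raw)
print(len(records))  # 1
```

Validating each batch this way surfaces malformed or off-codebook model output (a misspelled category, a missing dimension) at ingestion time rather than at analysis time.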