Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI expert Professor Stuart Russell warns that AGI could arrive by 2030, posing extinction-level risks. Despite knowing the dangers, tech CEOs continue the AI race driven by economic incentives. Russell argues for strict regulation and safety measures before developing superintelligent systems that could replace humanity.
youtube AI Governance 2025-12-04T08:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwREN4FyXG7tN2FwFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1KSkc_CBCF8vkm_R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBm_4sMJemVzr7VF14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwvZ7_pmmm43evBynF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWAKDdP4sAqpkSl-N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwD54SpD1NMdfuSlEt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyUKU3Q7ZZTTm8jF454AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxprB8UiCWnAo8ULDt4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwNqjG86yxxs0pXUId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz098x72_JnwYp9xex4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
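The raw response is a JSON array with one record per comment, each coded on four dimensions. A minimal sketch for parsing such a response and rejecting records with out-of-vocabulary codes; the allowed values below are inferred only from the codes visible in this dump, not from the project's actual codebook, and the sample `ytc_x` record is hypothetical:

```python
import json

# Allowed values per dimension -- inferred from this dump (assumption);
# the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown code values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_codes(raw)))  # 1
```

A hard failure on unknown values is deliberate: silently keeping malformed codes would skew downstream tallies of the four dimensions.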