Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He went from "AI is a threat," to "we're living in a simulation," to "we should live forever"🤔... I do believe AI is a threat because if it takes over the world and if it doesn't find humans useful as slaves, they will definitely anihilate us. On the other hand, (even if we were) living in a simulation, what difference would it make? The best we can do is try to live the best way possible, living each day as if it was our last and hoping that there is a heaven where joy is eternal... the "we should live forever" part did make him sound dilusional, though 😆 - I would never want to live for so long in this world knowing that it would keep me from one day meeting Our Creator and from the indefinite, indescribable joy of being in heaven one day... Also, did anyone esle notice how he addresses Steve when asked what can we do as individuals to stop AI from growing, as if we (the general population) don't even matter, whereas Steve might actually be able confront with questions the top producers/creators of AI (if any of them cared enough to do it). In conclusion, just try to enjoy life and live each day the best you can cuz there's nothing we can do to stop the end of the world, nor of our own existence. 🤷🏻‍♀️
youtube · AI Governance · 2025-10-23T19:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwNCs0x9QGPpmWbO6R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxJpvJZ39v7B9tWP3d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx-ZD0EuC0JswKpHhx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy_vJE9UxUmqzR5FfF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwUUK_n5s9wTwOPM6Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxHOkL0PZwV3OrYH9x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgzBCoojZweHfWpsqUh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgzLH9MB09AAq667eC94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzdrnwTqt4TNZMPogB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzyJKW5BM0f4NPvAl54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]