Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting discussion, seems like Steven's more onboard now about the AI risks :) Also 2 interesting points and questions to ponder: 1. Professor Russell mentioned that Singapore has a coherent AI strategy for future, what is that strategy and where can I read about this? 2. Professor is working on trying to keep AI systems below human capabilities to ensure controllability, is there a scenario where we can balance the AI capabilities to be equal to human intelligence and maintain control? This could potentially expand human wellbeing without being subjugating or being subjugated 3. What is the one thing that would incentivise more safety prioritisation for large tech firms, is it regulation or access to markets etc? Problem is that companies like OpenAI and Google have already access to most of the large economies of the world.
youtube AI Governance 2025-12-04T13:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxhNhyo_zmGfaSgNnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzi70UQKeAnDkqg0_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtMi-kto18UCPHCu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwA9ZI9baa1Pb2_0mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwbeks40lI9SX4pBHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyzHtlCz5y5cti_Gd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3LcxGMCLzxUMeZf14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwucjlkdtpFnLCuYQp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwCJbkb2sliJVZ2O014AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylV5Kg6xWWLDV2R_x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
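A raw response like the one above can be parsed back into per-comment codes. The sketch below is a minimal, hypothetical example: the allowed value sets for each dimension are inferred from the entries shown here and may not match the full codebook, and `parse_codes` is an illustrative helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the response above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "unclear"},
}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment id to its dimension codes, skipping any record
    that carries an out-of-codebook value."""
    coded = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[record["id"]] = codes
    return coded


raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)["ytc_example"]["emotion"])  # indifference
```

Validating against a fixed value set before accepting a record guards against the model inventing labels outside the coding scheme.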