Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@diplodocus462 Giving up is the most counter-productive thing. Instead you should think hard on how to solve the problem regardless of the difficulties. Pretend your life depends on it. For example to combat rogue AI built by bad actors you can have the good guys beat them with a well aligned super-intelligence, which will then keep in check all less intelligent AI. This was Elon's idea when he founded OpenAI. He just failed to properly align his humans, so OpenAI is now the bad guys. Now xAI is trying the same thing again, and so far seems to be succeeding. In general long term the good guys win, because technology always wins, and the technology can be developed the fastest in a highly cooperative and intellectually free environment. OpenAI vs xAI is a perfect example. OpenAI was far ahead of everyone, but then internal conflicts drove out most of their talents, likely poisoned the work environment, and consequently they lost their lead. Meanwhile at xAI Elon created a work environment where only the mission matters, not personal gains, so everyone is pulling in the same direction, and give it everything they have. And this environment attract the best talents, even if the alternative is a $250M signing bonus that Meta offers.
youtube AI Governance 2025-08-30T13:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgyO6Ytj4-Ipljm9bO54AaABAg.AM9q5Q9W9e7AN3aaJ-rUH8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzBUp9cqxp-Q-SKku14AaABAg.AM9pVliGn4FAMA6Z0itikO","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgzBUp9cqxp-Q-SKku14AaABAg.AM9pVliGn4FAMA7jsQ5kYS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzBUp9cqxp-Q-SKku14AaABAg.AM9pVliGn4FAMSHegtnVuz","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugy1T_34YaiGCD0NUaF4AaABAg.AM9ic-tC1lLAM9q7IvA15U","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_Ugy1T_34YaiGCD0NUaF4AaABAg.AM9ic-tC1lAMA9EvpS1-J","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy1T_34YaiGCD0NUaF4AaABAg.AM9ic-tC1lLAMII9arF59f","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugxc7OSTeXfpHfHXzn54AaABAg.AM9aP1snwDmAMAAS2UqTFA","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugzo0idJS2l403a9bSp4AaABAg.AM9_Cq2M2YtAMADhWG0Mbp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugy72wtJKMqQOiewXSF4AaABAg.AM9SEbXnRxpAMAEjpKa73Z","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
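The raw response above can be checked mechanically before the coded values are trusted. The sketch below is a minimal example, not part of the original pipeline: it parses the JSON array and flags any record whose coded values fall outside the value sets actually observed in this response (the real codebook may permit additional categories, so `ALLOWED` here is an assumption derived only from the output shown). The `validate` helper is hypothetical.

```python
import json

# Two records copied verbatim from the raw response above (truncated for
# brevity; the full ten-item array parses identically).
raw = """[
 {"id":"ytr_UgzBUp9cqxp-Q-SKku14AaABAg.AM9pVliGn4FAMA6Z0itikO","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytr_Ugy72wtJKMqQOiewXSF4AaABAg.AM9SEbXnRxpAMAEjpKa73Z","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]"""

# Value sets observed in this response only; the full codebook may allow more.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def validate(records):
    """Return ids of records with a missing or out-of-vocabulary coded value."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append(rec.get("id"))
                break
    return bad

records = json.loads(raw)
print(validate(records))  # an empty list means every record uses known values
```

A check like this catches the common failure modes of structured LLM output (malformed JSON, invented category labels, dropped fields) before the codes are written into a results table like the one above.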