Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Of course if it is unsafe, we are not going to build it, right"? Spoken by THE most intellectually dishonest charlatan and snake oil salesman AI revolutionary. “It would be good to know if these systems are capable of deception" Dennis Hassabis hasn’t yet heard of the “task rabbit” deception incident? Elon needs to be a bit more self reflective concerning how money and power corrupts people. “With this technology, the probability of doom is lower than without this technology.” It wasn’t clear who said this, either Emad, or Peter but there is very faulty logic here. Humans are not aligned. A small subset of humans having super powers will not have any interest whatsoever aligning those powers with all of humanity with the power structures that will not only persist, but be even more inequitable. Then, there is the whole alignment problem once these top of the heap humans are grappling with their systems that become exponentially smarter than them. ASI, all bets are off.
youtube AI Governance 2024-01-13T00:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwGBspKlFFummzJ7mp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwGXOWbk6F2WTbi7Mx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzP27hDe8nHrSHl8lp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9qZixvDoZite7nWN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxSt8RNtf5fpgggGlN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxeUPsDhSQ8JAfUM3Z4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxyHPklmGeHV5OptXJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxqS2vF5G2ynBMrqp94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwNehoer4rQ3lZDNfd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx5wU8w2qbemMmbmQl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
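The raw response is a JSON array with one coding object per comment id, so recovering the coding shown above is a matter of parsing the array and indexing by id. A minimal sketch (the two entries are copied verbatim from the response above; the lookup id is the one whose coding matches the table):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Shortened here to two entries taken from the response above.
raw = '''[
  {"id": "ytc_UgxyHPklmGeHV5OptXJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwGBspKlFFummzJ7mp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding for one comment and read its four dimensions.
coding = codings["ytc_UgxyHPklmGeHV5OptXJ4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → developer virtue regulate outrage
```

The dict-by-id step also makes it easy to spot comments the model skipped: any comment id missing from `codings` got no coding in this response.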