Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think "sudden" super-intelligence is quite scary. Self-replication/self-improvement is indeed quite scary. Just high intelligence with a high level of "agentness" and lack of supervision is already reckless. I don't believe in "sudden" super-intelligence, is because I believe it would require way too much energy right off the bat. I'm not really scared of "narrow" (low-agentness) AI, even if it is intelligent. If the training data, inputs and outputs of an intelligent AI system is data, papers and theory in fighting cancer, is virtually impossible that system will go do scams on Ebay. That would be like saying a sufficient advanced version of Stockfish will eventually realize that it could do much better, if it had tons of money. I don't see why by the time a intelligent (not super-intelligent) AI system starts to scam people on E-Bay, it will already be intelligent enough to be impossible to shut off. It would be extremely unfortunate for human race, if the first AI "scare" or "incident", is already the one that ends up in human extinction. But on the other hand, after AI incidents, it's going to be a lot easier to sell to the public, and humanity in general, a permanent ban on AI research and engineering. Sadly, the less obvious the dangers of AI, and the easier it is to inadvertently build a humanity-killing AI, the less likely of an "AI banning" ever working. Even if every government agreed on a ban, the chances of a high-resouce rogue agent that doesn't believe in the risks developing underground is going to be extremely high.
Source: YouTube · AI Governance · 2024-11-12T00:4… · ♥ 3
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
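For readers who want to check labels programmatically, here is a minimal sketch of the coding dimensions as a Python validator. This is hypothetical, not the project's actual schema; the allowed sets contain only the label values that appear in the raw response below, and the real codebook may define more.

    # Hypothetical validator for the four coding dimensions.
    # Allowed label sets are only those observed on this page.
    from dataclasses import dataclass

    ALLOWED = {
        "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
        "reasoning": {"consequentialist", "deontological", "unclear"},
        "policy": {"regulate", "ban", "liability", "none", "unclear"},
        "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
    }

    @dataclass
    class CodedComment:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Raise if any dimension carries a label outside the observed sets.
            for dim, allowed in ALLOWED.items():
                value = getattr(self, dim)
                if value not in allowed:
                    raise ValueError(f"{dim}: unexpected label {value!r}")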
Raw LLM Response
[ {"id":"ytc_UgxwDnlEHA7QFwMzrZB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgwGPNiP4G115HlCMmB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxgn2QDG4u3GwUCBPh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz431MRgmzceabjLdd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzcbFmhgeHbLrPqRyN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx-xpntgp4QxxIED5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwePVVbMUGmOuwAgch4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyNv7S5t7BOv9eoxYZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwnNR89T2lV3e0tf7Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwIZrGwu4CUO899WoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]