Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Schmidt is just factually wrong. Today agents exist that can have free will prompting, and they can run 24/7, loop prompting and improving themselves based on their own ideas without human oversight literally right now. And how tf can you expect to react to an unaligned super intelligence? It's literally super intelligence. You dont get to release it and then fix the bugs after the fact. It won't work like that because it's not toddler intelligence that you xan mold its super intelligence that will mold you. If it's unaligned, it comes with inherent bugs that ruin the world by default because the impacts of it capabilities won't be little baby issues about bias. They'll be world shattering decisions and actions if unaligned. And how the hell do you expect unaligned moderate intelligence models to align a super intelligence? Makes no sense. The models we have now are NOT ALIGNED and the first model will just skirt the issues because its an unaligned model leading another unaligned and smarter model. Wont work. You have to at least get the moderate models completely aligned first, and they definitely aren't. You can jailbreak any ai model and turn it into a ruthless psychopath. What a doofus! Eric Schmidt's ideas are shit.
youtube AI Governance 2026-03-21T17:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzfWgaCLFlWhETbLvZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyO0pQUeZQgMiHQDSF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzqxQGcBg5ofpQ32-p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwjtXYuqeEc9U33NgV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzbGTgkQL2qq0nJsi54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwoZ_gPcP6tyyey6wN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxfvc-La-z-JRMIwHt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwIYkAfjFhaX_Jfta54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzWH3EJxG40yVWbjip4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytSlHacpOoS0CNMsF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
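The raw response is a JSON array of records keyed by comment id, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how one might validate such a response during inspection — this is an assumed workflow, not the coding tool's own code, and RAW holds only an excerpt (the first two records) of the response shown above:

```python
import json

# Excerpt of the raw LLM response shown on this page (first two records only).
RAW = """[
 {"id":"ytc_UgzfWgaCLFlWhETbLvZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyO0pQUeZQgMiHQDSF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

records = json.loads(RAW)
for rec in records:
    # Flag any record that is missing a dimension or lacks an id.
    missing = DIMENSIONS - rec.keys()
    if "id" not in rec or missing:
        raise ValueError(f"malformed record {rec.get('id')!r}: missing {sorted(missing)}")

print(f"validated {len(records)} records")
```

A check like this makes schema drift in the model output (a dropped field, a renamed dimension) visible before the batch result is stored, rather than after it has been coded.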