Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
“Yeah, but your scientists were so preoccupied with whether or not they COULD that they didn’t stop to think if they SHOULD.” - Ian Malcolm, Jurassic Park

You can tell Sam Altman hasn't actually put any real deep thought into WHAT he is creating and if he SHOULD be creating it. He sounds like a naive child. I know he is very intelligent, but he has no wisdom and he clearly has lived a sheltered life and doesn't understand the forces of malevolence in this world. He is just so damn naive...

When they built the first nuclear bombs.... they asked "should we do it?" but they glazed over that with the response "Well, if we don't do it, someone else will. It's a necessary evil." and now we have to live under the fear of nuclear war.... And here we are repeating the same mistake again with the exact same reasoning. We learned NOTHING. Only this could be even worse.

This is technology that is far beyond our wisdom to have, to control and use responsibly. We're not ready as a species for this. We are divided, our societies function on the most basic and primitive of human emotions, such as greed, selfishness, hate, and superstition. We are basically chimps with complicated gadgets....Intelligent people are not the ones in charge of the world. The world is run by the worst kind of people. Greedy, selfish individuals who only care about themselves. We're not ready for technologies that will be used in terrible ways by the worst of humankind to harm others or exploit others. There is not enough wisdom in the human race for this. We are not ready, not even close. We should not have technology like this until our societies have completely come together and threats of using this technology to harm each other no longer exist.
youtube · AI Governance · 2023-06-29T04:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
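
The four dimensions in this table are the same fields the model returns for every comment in the batch below. As a sketch of what a validated record might look like in Python (the dataclass and the category sets are assumptions, not the pipeline's actual code; the sets cover only the values observed in this batch, which may be a subset of the full codebook):

```python
from dataclasses import dataclass

# Values observed in the raw batch below; the actual codebook may be larger.
RESPONSIBILITY = {"developer", "company", "government", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist", "mixed"}
POLICY = {"regulate", "liability", "industry_self", "none"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def __post_init__(self) -> None:
        # Reject values outside the observed category sets.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")

# The row above, expressed as a record:
result = CodingResult(
    comment_id="ytc_UgwfDFDJ3I_stPLuDEl4AaABAg",
    responsibility="developer",
    reasoning="virtue",
    policy="none",
    emotion="mixed",
    coded_at="2026-04-27T06:24:59.937377",
)
```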
Raw LLM Response
[ {"id":"ytc_UgzlQ-8ISBuTZB33rUp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxyQGBAGkq-a6lhYrF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyblq85LzkurgUOm7F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxHSZcJKq4KTWqvgyV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxKeV9AoJyYpt6vR-F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw_A73kOLZR9auoNUN4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz9AfXvZaGsC6udOfZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwfDFDJ3I_stPLuDEl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy7Uxs8cW1Cb6z6rXx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwqIuj8knud18TveWt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"} ]