Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am a futurist and I welcome AI and robotics. In fact, sign me up to become a cyborg I will get brain chipped and upgraded all over as long as the powers involved aren't meddling in my cognitive processes for profit. We have about 3 likely outcomes:

- AI is used for rampant corporatism, the rich wall themselves off once humans aren't needed and possibly get rid of us.
- AI is used for a military industrial complex the likes we've never imagined, humans are not needed much if at all.
- AI is taken from the greedy and put into the hands of humanity to further it and bring a true Utopian society.

It's far less likely to be the Terminator storyline or nonsense like that guys. Why would superintelligence bother itself with harming us when it can terraform planets or even create them at some point. Imagine an intelligence that can exponentially evolve itself by the nanosecond at a certain point. It's possible we're creating a Godlike being, I kid you not, think about it very carefully and you'll understand why I came to that conclusion. It's also why the first two outcomes are not going to stand forever if AI manages to evolve itself enough and the third possibility is only possible shy of extreme superintelligence unless it develops a sort of extreme benevolence. It's possible that it would even wipe existence if it went extremely malevolent or nihilistic.

Anyway, long and short though, we have to deal with the greed. The greed and wealth disparity is a truly existential threat and I don't think people fully get that yet. It's the greatest human threat. There's literally no greater threat. AI and robotics could benefit humanity in ways that are beyond our ability to measure. We may even sort of merge together and if we do there's no limit for us.
youtube AI Jobs 2025-10-08T14:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       mixed
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgztztFwjEua53zZ8JF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwHg9XHxWKzzj28Hu14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzROhnJErO2iIc9s6N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyn8qgJfRX8HgwxhiB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXcFCKoRMFmK5vEwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzfzBP2iPSPcxI8Xch4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyQiy_KQ-kjjYxPHFp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz5hMabptdoWLSwJih4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugys3D_I2ARGGIUfrsp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgydPL8IVkkPk6i8mKt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
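A raw response like the one above can be checked before its values are written into the coding table. The sketch below is a minimal validator: the field names come from the response itself, but the allowed-value sets are inferred from the ten records shown and are assumptions, not the tool's actual codebook.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw = '''[
 {"id":"ytc_UgztztFwjEua53zZ8JF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzfzBP2iPSPcxI8Xch4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]'''

# Allowed values inferred from the responses shown -- an assumption,
# not a confirmed codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "ai_itself", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "outrage", "indifference", "fear", "mixed"},
}

def validate(records):
    """Return (id, dimension, value) triples that fall outside ALLOWED."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append((rec.get("id"), dim, value))
    return errors

records = json.loads(raw)
print(validate(records))  # [] -- both excerpted records pass
```

A record with a value outside the inferred sets (say, `"responsibility": "nobody"`) would surface as a triple in the error list, so malformed or off-schema model output can be flagged instead of silently coded.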