Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The most frightening thing right now is that every major player in the AI space is in a RACE. We need to end that dynamic immediately. What scares me is the speed at which things are moving concerning AI, and the lack of a comprehensive plan for how to proceed moving forward. That would include considerations for collaboration between all companies and developers and a set of regulations and guardrails, strict rules on how it can be legally leveraged and to what extent, etc. It's just insane right now with everyone just pushing and pushing independently and not taking the time to really consider the ramifications. AI is already starting to shake things up and taking people's jobs and livelihoods. Who does this benefit? It obviously benefits the companies who've created the AI frameworks, but also the CEOs and other higher up individuals in companies. We get all these puff pieces about the potential benefits of AI for healthcare researching new cures and medications and solving other problems, making the most of data, etc. But what AI is really doing right now is just allowing CEOs to get rid of people, replace them with AI, and put all of those savings into their own pockets. Instead of allowing people to have careers and support their families, these CEOs would rather have a few extra yachts or summer homes. How is widening the already massive distribution of wealth and further consolidating that wealth into the .1% beneficial for society? We need to take a deep breath and slow down, the worst thing we can do is continue to have this race where every AI company continues to push the limits without any guardrails or regulations. We as a species have never lived in a world where we aren't at the top of the chain when it comes to intellect. We are rapidly approaching that shift if we already haven't dipped our toes in, and we have absolutely 0 idea how that is going to pan out or what to do about it.
youtube AI Moral Status 2025-06-27T23:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxT-gMX9dUDaP7Zl6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzkjjsS5vbGxH08YaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwuQh0mT-6oOvk2HJZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxfbyxxngSTwzb2-ll4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwz1EVbzHwJli7h-zh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzbnBRdGkGHOfzN9Xx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7xl9DAx82VZbWHuJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz5oM-YlLLuxreLSR14AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzxAVKd3bhbaFLw_GR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxjTokKvCEfYQXyfth4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
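The raw response is a JSON array of per-comment codes, each carrying the four dimensions shown in the coding table (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of turning that raw output into a per-comment lookup, so any coded comment can be inspected by id, might look like this (the `codes_by_id` helper and the truncated two-entry sample are illustrative, not part of the tool; the field names come from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (shortened to two
# entries here for illustration; the real array has one object per comment).
raw = """[
  {"id": "ytc_UgxjTokKvCEfYQXyfth4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxT-gMX9dUDaP7Zl6t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

# The four coded dimensions shown in the coding-result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_json: str) -> dict:
    """Index the coded dimensions by comment id, defaulting missing
    dimensions to "unclear" and ignoring any extra keys."""
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in json.loads(raw_json)
    }

codes = codes_by_id(raw)
print(codes["ytc_UgxjTokKvCEfYQXyfth4AaABAg"])
# {'responsibility': 'company', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

Defaulting an absent dimension to "unclear" mirrors how the coding scheme itself labels unresolvable cases, so a malformed row degrades gracefully instead of raising.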