Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My thoughts on artificial intelligence: INFINITE possibilities. Literally. I would love to see it grow, with automatic level design, borderless dialogue, imaging of celestial and (theoretical) phenomena, and possibly even multi-track computer systems. The issue is that all of these great things that we could do is just... not focused on. I know it is being worked on, there are AI-powered video games, chatbots, 3d adaptations (from satellite scans), etc. But image generation, while a great step in artificial intelligence, is a drastic violation of computer ethics. Reason why: it is TOO boundless. Think of all of the sci-fi media with androids. Dystopia included boundless AI that can think for itself, its self-preservation, and the riskiest quality to give ANYTHING with knowledge: the intent to dominate space. Utopia, however, had androids within very restrictive boundaries. Computer ethics are very different from human ethics. You can make a robot do a job for you with no pay, but you can't do that to a human. Likewise, you can give a human the ability to make art taking inspiration from other people's work, you can't give that to a computer. Why? Because computers cannot think for themselves. Computers do EXACTLY what they are told to do. They have no sentience, they have no thought process outside of analyzing bits, and this is both a great thing and a horrible thing. One one hand, a computer does exactly what it is told to do, without question. On the other hands, a computer does exactly what it is told to do, *WITHOUT QUESTION*. Sorry for my ramble, but a software engineering major with artist friends makes for an interesting take on ai, in my opinion.
Source: YouTube — "Viral AI Reaction" — 2024-11-02T05:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyF2rgsSWO_tC-vbot4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxnHV883fpTQpQjLWJ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgznDZ4vBxafyMVyvOV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzJHv1qT-DUM1vFhsp4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx2qDaDeY8v_IyTmTd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx3WYqsv6cNJnCJq6V4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxAQ9SO8aBzAeyNFeZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgykCGPhEnsv6aOX6c94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgywzjjC-CPKI7ykiZh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgypAWaBwoFqBkDmxMx4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
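The raw response above is a JSON array in which each element carries a comment `id` plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and validated before use — the function name `parse_codings` and the single-entry sample string are illustrative assumptions, not part of the original pipeline:

```python
import json

# Illustrative one-entry sample; in practice this would be the full raw
# LLM response string shown above.
raw = ('[{"id":"ytc_UgyF2rgsSWO_tC-vbot4AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')

# The four coding dimensions every entry is expected to carry.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_text):
    """Parse the model's JSON array and index codings by comment id."""
    rows = json.loads(raw_text)
    codings = {}
    for row in rows:
        # Reject entries that are missing any expected dimension,
        # so malformed model output fails loudly rather than silently.
        missing = DIMENSIONS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id')}: missing {sorted(missing)}")
        codings[row["id"]] = {k: row[k] for k in DIMENSIONS}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgyF2rgsSWO_tC-vbot4AaABAg"]["emotion"])  # outrage
```

Indexing by comment id makes it straightforward to join a coding back to the comment text it annotates, as in the "Coding Result" table above.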