Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Personally, I don't find ai robots to be any scarier than the people working behind it. Yes I do agree that ai has the potential to be used for harm, but that harm can't occur unless the people creating and maintaining ai purposely use it for bad intentions or don't bother to fine tune the ai to eliminate as much error as possible. Ai is still a human invention at it's core which is modeled to mimic human capabilities. Even if ai had the ability of consciousness or free will, none of that would be possible without a human working behind the scenes to make that happen. This begs the question: Does free will really exist if it has to be created or programmed into something? At the end if the day, ai like many things we use in society is a tool. Tools can be powerful or even scary looking to a certain extent, but they cannot cause any damage on their own volition. For instance, most countries have laws banning certain weapons because we think of the people who are using these weapons and whether or not they are using them responsibly. The weapon itself can't choose to harm you, it's humans who harm each other. In order to be scared of ai, you'd have to be just as afraid of the people who are in charge of it's creation and maintenence. This isn't to say that there shouldn't be laws in place to protect us, if anything, this is more of a reason that ai products and services should be regulated. I think when people say that they are afraid of ai, they are just subconsciously communicating that they are afraid of humans, which is a valid fear given what we know about human behavior throughout history or even today.
Source: youtube · AI Governance · 2024-08-08T23:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz5Pcjh1xElVOqOC554AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgzMYWzQ4-2XGl3wlZ14AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgwxkdwdShGKP4T2CSJ4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "ban",           "emotion": "fear"},
  {"id": "ytc_UgwUvNNfg7_sLtS26cB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugwc1rB0OPGBvSSehsh4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyuqpGFJZM1uTlwPzt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyJHS4g6mpeqCVzR354AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzGbm8CuWgQYzrpYQ94AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgzYXLrJw5iDE4ICgbR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugzg0an_5IE0svCdvB14AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "industry_self", "emotion": "indifference"}
]
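The per-comment Coding Result shown above is presumably obtained by parsing this batch JSON response and looking up the comment's id. A minimal sketch of that extraction step, using the last record from the response (`ytc_Ugzg0an_5IE0svCdvB14AaABAg`, the comment displayed on this page); the variable names are illustrative, not from the pipeline itself:

```python
import json

# A one-record excerpt of the raw batch response above.
raw_response = '''
[
  {"id": "ytc_Ugzg0an_5IE0svCdvB14AaABAg",
   "responsibility": "developer",
   "reasoning": "virtue",
   "policy": "industry_self",
   "emotion": "indifference"}
]
'''

# Parse the batch and index records by comment id for O(1) lookup.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# Pull the coding for the comment shown on this page.
row = by_id["ytc_Ugz g0an_5IE0svCdvB14AaABAg".replace(" ", "")]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {row[dimension]}")
```

The printed dimensions match the Coding Result table above (developer / virtue / industry_self / indifference).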