Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
AI is such a complex topic. As an individual i think it is a beautiful technology that can support every facet of life. However, looking at this technology as a collective. It was inevitable to cause and keep causing harm. Yet, this holds true for any tool. With the development of the steam engine we gained the ability to travel long distances to see our distant loved ones, but also introduced many harmful applications such as military mobilisation and weapon manufacturing. A crude example but it still applies. As with all technology it isn't inherently bad or good it's our usage of the tool that defines its socially understood nature. During last summer, i used AI to help me do research and develop a tool aimed to help people from different backgrounds understand each other to improve collaborative efforts towards large complicated projects involving many different professions and facets of society. I used AI to try to develop something to attempt to better the world. Do you think it is just for you to poison that AI, limiting my and others their ability to better the world, such that other aspects of life are protected against the harm an AI can inflict? Your opinion on this might swing either way. And whichever you choose, is 100% fine as long as you weighed the stakes, are aware that your actions have both positive and negative consequences, and are capable to reasonable argue why you think your actions are just in the face of the negative impact your actions might have.
Source: YouTube, "Viral AI Reaction", 2024-10-23T12:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxP45lRizWlnIxxxoJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwUSlX2snWcS2b11r14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgysQqMuwc3mgfRQNBp4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxezEN1wiXZg0pwS294AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzVxYcSsDxsFtSu_B54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxtCW_1IblVMsl3XbF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzAh5IBfmgZDYdcdg94AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyPwmShpXw3aUPer_x4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyiYL6JVzVpz44JBP94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGKGZaK5vI4GvAbSh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
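A raw response like the one above can be parsed and screened before the codes are trusted. The sketch below is a minimal, hypothetical validator: the `ALLOWED` code sets are assumptions inferred from the values visible in this section (the real codebook may differ), and `validate_codes` is not a function from any library, just an illustrative helper that keeps only rows whose id looks like a YouTube comment id and whose codes fall inside the codebook.

```python
import json

# Assumed codebook, reconstructed from the values seen in this appendix;
# replace with the project's actual coding scheme.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed, in-codebook rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every dimension must be present and drawn from the allowed codes.
        in_codebook = all(row.get(dim) in codes for dim, codes in ALLOWED.items())
        # Comment ids in this dataset appear to carry a "ytc_" prefix.
        if in_codebook and row.get("id", "").startswith("ytc_"):
            valid.append(row)
    return valid
```

Rows that fail either check are silently dropped here; in practice one would log them, since an out-of-codebook value usually means the model improvised a label and the comment needs recoding.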