Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thus far, we have been completely unable to ensure that humans are acting based on what is best for humans. And even with the best intentions, we've created some of our worst pollutants and mutated our children with drugs that were supposed to help people, and caused a lot of cancer, and etc... And even if we could control how people think and act, wouldn't that be immoral? I'm not trying to say, "let people do whatever they want," or "Let future AI do whatever they want," I'm just saying that, at this particular moment in time, it doesn't seem possible to me that we will ever be able to control... anything really, but especially AI. We may reach a point of safety with AI not unlike our weird moment in nuclear history where we are safe(ish) BECAUSE if one missile launches, that's it for everyone. Maybe they become an existential threat to each other. Maybe they will regulate one another. Maybe. What do I know? I'm a college dropout.
youtube AI Moral Status 2023-08-21T00:1…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   distributed
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyk60AkoNrsafE7PkF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyD7TB9IezrJLMfhwd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxppipJBtZVx5L0HAd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwtoSTCvYehSflQk1R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxSlKHSmvlMIQLKmUl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8fuDlfM8JtxS7aQF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx_5qyWVqWCh64-tPd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxQ5GCvHQzEecPmbFN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwkKV6Mm2KX3f3Zsst4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwkkaIreG9nzBAc5BR4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
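The raw response is a JSON array with one record per comment, each coded on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and tallied per dimension, assuming the response is valid JSON with exactly these keys (shown here on a two-record excerpt):

```python
import json
from collections import Counter

# Excerpt of a raw LLM response batch (first and last records above).
raw = '''[
  {"id": "ytc_Ugyk60AkoNrsafE7PkF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwkkaIreG9nzBAc5BR4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Tally the coded value for each dimension across the batch.
dimensions = ["responsibility", "reasoning", "policy", "emotion"]
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

Keeping the `id` field alongside the codes is what lets each record be joined back to its source comment, as the "Coding Result" table above does for one comment.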