Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Thank goodness that our connection to Source and Spirituality will forever be ou…" (ytc_Ugzu-ID4E…)
- "I am graduating soon and planning to work primarily in editing. But even with ed…" (ytc_UgyxLWeMQ…)
- "So who wants to come with me to burn down the place where these are made....Has …" (ytc_Ugiqkwnua…)
- "Ai Artists getting mad at artists taking countermeasures to avoid having their a…" (ytc_UgzByRMTG…)
- "😢😮 ALL ARTIFICIAL INTELLIGENCE... GOOGLE GEMINI, GROK, BETA AI, DOLA CICI, META …" (ytc_UgyYWomhZ…)
- "N For whoever sees this comment, pay attention to how difficult it is to replica…" (ytr_Ugx0Pad7j…)
- "I don't know how many people can fall for the AI doomsday idea, if every job is …" (ytr_Ugy2_pJXD…)
- "I was in a graduate computer science class and had a very similar experience. we…" (rdc_o7d5gx1)
Comment
> Thus far, we have been completely unable to ensure that humans are acting based on what is best for humans. And even with the best intentions, we've created some of our worst pollutants and mutated our children with drugs that were supposed to help people, and caused a lot of cancer, and etc... And even if we could control how people think and act, wouldn't that be immoral? I'm not trying to say, "let people do whatever they want," or "Let future AI do whatever they want," I'm just saying that, at this particular moment in time, it doesn't seem possible to me that we will ever be able to control... anything really, but especially AI.
>
> We may reach a point of safety with AI not unlike our weird moment in nuclear history where we are safe(ish) BECAUSE if one missile launches, that's it for everyone. Maybe they become an existential threat to each other. Maybe they will regulate one another. Maybe. What do I know? I'm a college dropout.
Source: youtube · AI Moral Status · 2023-08-21T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
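Each dimension takes its value from a small closed vocabulary. A minimal sketch of how that vocabulary could be declared and checked, assuming only the values visible on this page (the full codebook may contain more, and the `CODEBOOK` dict and `validate_coding` helper are illustrative, not the project's actual code):

```python
# Hypothetical codebook: dimension names and values are taken from this page;
# the data structure and helper are illustrative assumptions.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation",
                "indifference", "mixed"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is valid."""
    problems = []
    for dim, allowed in CODEBOOK.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```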
Raw LLM Response
```json
[
{"id":"ytc_Ugyk60AkoNrsafE7PkF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyD7TB9IezrJLMfhwd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxppipJBtZVx5L0HAd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwtoSTCvYehSflQk1R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxSlKHSmvlMIQLKmUl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8fuDlfM8JtxS7aQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx_5qyWVqWCh64-tPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQ5GCvHQzEecPmbFN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwkKV6Mm2KX3f3Zsst4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwkkaIreG9nzBAc5BR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
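The raw response is a JSON array with one object per comment, so retrieving a coding by comment ID reduces to parsing the array and indexing on the `id` field. A minimal sketch in Python, assuming the response text is available in a string `raw_llm_response` (that variable and the `index_response` helper are hypothetical names for illustration):

```python
import json

def index_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coding objects)
    and index the codings by comment id."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

# Usage: look up the coding for the comment displayed above.
codings = index_response(raw_llm_response)  # raw_llm_response: the JSON text shown here
coding = codings["ytc_UgwkkaIreG9nzBAc5BR4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # distributed mixed
```

Note that the last record in the array (distributed / deontological / none / mixed) matches the Coding Result table for the displayed comment.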