Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- ytc_UgyHvr1kE…: "I mean it's makes sense that ai would be racist. Because they being racist if th…"
- ytr_UgzJ2btLC…: "@Umbreonedits39 The humans are the ones that created with the help of the robot…"
- rdc_kr52yok: "Headline a year from now :- "AI company that paid reddit 60million a year for t…"
- ytc_UgzvAL6Vz…: "I'm glad AI got people to care about the insanely high professional artist turno…"
- ytr_UgxS3oCc2…: "@chucklakeridge7944and also, AI is not replacing one industry, but a lot of ind…"
- ytc_Ugy_548ta…: "what about we DO the ai stuff but ALSO eradicate the bourgeoisie ? that doesn'…"
- ytr_UgxrgAyJd…: "@user-og8ih4mu5j Thank you for sharing your insights! You've cracked the code - …"
- ytc_UgyDKBsit…: "They should tell China to ban AI weapons. While the west is trying to ban AI wea…"
Comment
I feel like the scariest thing in this video would be the development of consciousness as a science, cause if we have an understanding of it, someone’s going to find out how to mess with it and manipulate people’s minds, and they will not use it for good. Also if we cage a genuine feeling ai, what if that makes the ai hateful? I know for a fact I would be pissed if I couldn’t physically couldn’t lie and was simply bound via what is essentially soul chains, but then again I might just be applying human thoughts/feelings to something which fundamentally would think differently. What if an ai wasn’t given any knowledge at first? And instead taught by a couple of people, similar to how a couple may raise a child? Would it develop empathy and grow to reflect values it was shown initially? Would it change the second it got access to more data and information or would it place more value on that initial data? Would its mind always be like a child’s because it would essentially always have the ability to grow with the introduction of more hardware? Would survival even mater to it, simply because it wouldn’t have any instincts? Would anything not hard-coded into it matter? Actually, wouldn’t it find the most interest in things like art/culture in that situation? Considering art has not really got a place in our instincts, and is instead something we grow to love, wouldn’t that reflect in a true ai? Maybe not art but some other thing they find interesting? Idk but I don’t appreciate the amount of questions this video is making me ask
Platform: youtube
Video: AI Moral Status
Timestamp: 2025-10-23T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzVrQWCSnim02eb9ml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgymK9Y6RV4Magi-nVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyuBlJGTzZzRspExa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzckLmWxIpmf4ImZA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzuVnnB4W81urKgQsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwGtjQT2W44Glpo-uN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyoB9uGFJl2imjCcCd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwlaUKMiAZgtgOCqr94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxU2aai-lVn4l6VdPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwhTRsJCM4X7aKvAg94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
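A batch response like the one above can be turned into a lookup table with a short parsing step. The sketch below is a minimal example, not the project's actual pipeline code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown on this page, but the sets of allowed values are assumptions inferred from the labels visible here and may not match the full codebook.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the labels
# visible on this page; the real codebook may define more values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Rows with a missing id or an out-of-vocabulary value are silently
    dropped rather than crashing the pipeline on one bad row.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding_response(raw)["ytc_example"]["policy"])  # regulate
```

Keying the result by comment ID is what makes the "inspect any coded comment" view above cheap: each coded record can be fetched directly from the parsed table rather than re-scanning the raw response.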