Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Here's what I wonder: it's been said to me that you can't have a good predictive AI without training it on everything, mostly because of volume. I'm actually not convinced of that, so my first question is:
Could we create content designed specifically to be weighted towards what we consider moral and of service to humanity? In other words, could we create a sort of LLM Bible that has all of the best responses to the various questions that humanity has asked? Another way of looking at it: could we train an AI while avoiding works like Mein Kampf and the Unabomber's manifesto, and if we did, would that avoid some of the problems we're worried about?
My second question is: if we train an AI on the same document more than once, does that reinforce the patterns of that document? In other words, can we weight the training data by having the AI look at certain documents many more times? Say we don't want the AI to suggest harmful actions: we give it only one percent of that kind of content, while we give it the writings of the Buddha 100 times over, so that those kinds of writings are 99% of what it is trained on?
My final question is: let's say there simply isn't enough content digitized and available to properly train an LLM, or whatever we're calling today's models. If you train these models on all content, can you then retrain them on desirable material to once again weight their responses toward the ideas, concepts, and solutions that we prefer?
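The weighting idea raised in this comment — oversampling "desirable" text so it dominates what the model sees — can be sketched as weighted sampling over a corpus. This is a minimal illustration, not a description of any real training pipeline; the corpus, weights, and function name are all hypothetical:

```python
import random

# Hypothetical two-document corpus with a 99:1 sampling weight, mirroring
# the 99%/1% mix described in the comment above. Oversampling a document
# is one simple way to make the model encounter it far more often.
corpus = [
    ("writings we consider beneficial", 99),
    ("content we want underrepresented", 1),
]

def sample_batch(n, seed=0):
    """Draw n training examples with probability proportional to weight."""
    rng = random.Random(seed)
    docs = [doc for doc, _ in corpus]
    weights = [w for _, w in corpus]
    return rng.choices(docs, weights=weights, k=n)

batch = sample_batch(1000)
share = batch.count("writings we consider beneficial") / len(batch)
# share is typically close to 0.99, matching the intended mix
```

Whether this kind of distributional weighting actually suppresses undesired behaviors in a trained model is exactly the open question the comment is asking; the sketch only shows the mechanical side of reweighting.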
youtube
AI Moral Status
2025-11-03T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz2MC5eEVARGuy3CCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLTRKwkIst_sth2h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwYfnt7J6wTRHSiBcN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyKG33hoks_foVgtWF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgycTSlwAwauHeOXfXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzPCeLMayKt3iNFdax4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyvQRoflPZn7t69o_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfID0Gt7h0dycer3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyFYDQ_c_-eg1-JyO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
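A lookup-by-ID view like the one this page offers presumably parses this JSON array and selects one row. A minimal sketch of that step, using the field names visible in the response above (the `lookup` function name and the shortened array literal are illustrative, not the tool's actual code):

```python
import json

# A one-row stand-in for the raw model output shown above: a JSON array
# of objects, each keyed by comment id with four coding dimensions.
raw_response = '''[
  {"id": "ytc_UgycTSlwAwauHeOXfXl4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "industry_self",
   "emotion": "approval"}
]'''

def lookup(coded_json, comment_id):
    """Return the coding dict for one comment ID, or None if absent."""
    for row in json.loads(coded_json):
        if row["id"] == comment_id:
            return row
    return None

row = lookup(raw_response, "ytc_UgycTSlwAwauHeOXfXl4AaABAg")
# row["policy"] == "industry_self", matching the Coding Result table above
```

Note that the coding shown in the table (developer / deontological / industry_self / approval) corresponds to the sixth entry in the raw array, which is how the table and the raw output can be cross-checked.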