Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's what I wonder: it's been said to me that you can't have a good predictive AI without training it on everything, mostly because of volume. I'm actually not convinced of that, so my first question is: could we create content designed specifically to be weighted towards what we consider moral and of service to humanity? In other words, one way to look at it is: could we create a sort of LLM Bible that has all of the best responses to the various questions that humanity has asked? Another way of looking at it is: could we train an AI while avoiding works like Mein Kampf and the Unabomber's manifesto, and if we did, would that avoid some of the problems we're worried about? My second question is: if we train AI on the same document more than once, does it work to reinforce the patterns of that document? In other words, can we weight the training data and have the AI look at that training data many more times? Such that, let's say, we want AI to be able to suggest harmful actions, but we're only going to give it one percent of that, while we give it the writings of the Buddha 100 times, and have those kinds of writings be 99% of what it is trained on? My final question is: let's say there simply isn't enough content digitized and available to properly train an LLM, or whatever we're calling today's models. If you train these models on all content, can you retrain it on desirable material to once again weight its responses to lean towards the ideas, concepts, and solutions that we prefer?
Source: YouTube · AI Moral Status · 2025-11-03T02:1…
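The commenter's second question describes what ML practice calls oversampling, or example weighting. A minimal sketch of that idea, assuming two hypothetical document pools; all names below are invented for illustration and not drawn from any real training pipeline:

import random

# Hypothetical corpora: stand-ins for the 99%/1% mix the comment proposes.
preferred_docs = ["buddhist_text_1", "buddhist_text_2"]
disfavored_docs = ["harmful_text_1"]

def sample_training_doc(rng: random.Random) -> str:
    # Draw one document with a 99/1 split: "weighting" the data by
    # oversampling the preferred pool at selection time, rather than
    # by literally duplicating documents in the corpus.
    pool = preferred_docs if rng.random() < 0.99 else disfavored_docs
    return rng.choice(pool)

rng = random.Random(0)
draws = [sample_training_doc(rng) for _ in range(10_000)]
print(sum(d.startswith("buddhist") for d in draws) / len(draws))  # ~0.99

Real training stacks typically achieve the same effect with per-source sampling weights rather than literal repetition, since exact duplicates can encourage memorization.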
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: industry_self
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
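For downstream use, a coding result like the one above can be carried in a small record type. A minimal sketch; the class name and types are assumptions made here for illustration, not part of the actual pipeline:

from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    # The first four field names mirror the keys in the raw LLM
    # response below; coded_at records when the coding ran.
    responsibility: str  # e.g. "developer"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "industry_self"
    emotion: str         # e.g. "approval"
    coded_at: str        # ISO-8601 timestamp, e.g. "2026-04-26T23:09:12.988011"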
Raw LLM Response
[ {"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugz2MC5eEVARGuy3CCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyLTRKwkIst_sth2h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwYfnt7J6wTRHSiBcN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyKG33hoks_foVgtWF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgycTSlwAwauHeOXfXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzPCeLMayKt3iNFdax4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgyvQRoflPZn7t69o_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwfID0Gt7h0dycer3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyFYDQ_c_-eg1-JyO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]