Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy is either very naive or dishonest. The AI provides answers that has been PREVIOUSLY FED by its human trainers in similar situations. When it says idiotic things like the Jedi religion it's because it can't formulate the right answer (that is, it was not previously given for that specific situation). It may sound funny, weird or inappropriate to us, because the AI is uncapable of thinking, it just SIMULATES thinking. This guy is taking advantage of people's ignorance in regards of technology. Shame on him.
youtube AI Moral Status 2022-07-05T11:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyMKWlche2b0Sv1xAp4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxBr6jJXSPUn2faefN4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgwylfEDENOq9Vvp1394AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugwq50x4YDyUYrrVfq94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgzicH7hU6h3h99UiYl4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "mixed"}
]
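The coding result above is one record pulled out of this batch response by its comment id. A minimal sketch of that lookup, assuming the response is valid JSON in the format shown (the function name `code_for` and the `DIMENSIONS` tuple are illustrative, not part of the actual pipeline):

```python
import json

# Batch response in the format shown above, truncated to two records for brevity.
raw = """[
  {"id": "ytc_UgyMKWlche2b0Sv1xAp4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwylfEDENOq9Vvp1394AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]"""

# The four coding dimensions reported in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, raw_json: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(code_for("ytc_UgwylfEDENOq9Vvp1394AaABAg", raw))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'unclear', 'emotion': 'outrage'}
```

Matching on the `id` field rather than array position keeps the lookup robust if the model returns the records in a different order than the comments were submitted.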