Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's all about context and the problem is that current generative AIs don't actually have any context for the prompts, unless they are detailed enough. So, that whole "you're the product" context can in fact be valid. Because if Luke types in "give me an image of a soldier" - well, what "soldier" right? Heck, if you asked a person to draw you a "soldier" they would have the same question, or they would draw a "soldier" based on their own context. So, if the AI, given the above prompt, uses what it may know about you to draw a specific soldier, a Canadian sniper in case of Luke for example - I think that's a valid way to "understand" the context for the prompt. And if Luke wanted something else, then he would need to create a more specific prompt.

The way to solve the context issue (and I don't know if it'll be possible any time soon, before people go crazy and outlaw AI or some such) is to create AI with enough "understanding" of the context of the individual, the context of the culture, and whatever context the prompt itself provides. So for example, if you prompt for a "doctor" then the cultural context would be that while there may be more male doctors in general, we also want to encourage the understanding that anyone can be a doctor, and so a diverse set of images should be generated, but if you prompt for a "WW2 German soldier" then the cultural context is that this was a much more narrow range of genders and races among those soldier, so the set of images must reflect that.

Hopefully the folks behind those systems can figure out these things sooner rather than later.
youtube 2024-02-29T12:3…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzGOUgiZavchgPiP7N4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-dOB_NEhocUrdnGR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_UgyTQjQQcBUjZClh9Tp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzDlNH_VWUWekwdBT54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzvTgXf7W0n9dZbh0d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyJxFj-8yCgJPTAK2t4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_Ugyh7NT7fTtH5v4x6HB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_UgyE-s0bn5tORoJ21mJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxOZXMePz0PmXSThoV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzPi17Izod0eR6j7RB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
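The raw response is a JSON array with one record per comment in the batch; each record carries the comment id plus the four coded dimensions. A minimal sketch of how a record for a given comment id could be pulled out of such a response (the function name `code_for` is hypothetical, not part of the tool):

```python
import json

# A trimmed example response in the same shape as the batch above.
raw_response = (
    '[{"id": "ytc_UgzvTgXf7W0n9dZbh0d4AaABAg", "responsibility": "ai_itself", '
    '"reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}]'
)

def code_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a raw JSON batch.

    Raises KeyError if the id is missing from the response, which is a
    useful signal that the model dropped a comment from the batch.
    """
    records = json.loads(raw)
    by_id = {record["id"]: record for record in records}
    record = by_id[comment_id]
    # Drop the id itself so only the coded dimensions remain.
    return {k: v for k, v in record.items() if k != "id"}

print(code_for(raw_response, "ytc_UgzvTgXf7W0n9dZbh0d4AaABAg"))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'indifference'}
```

Looking records up by id rather than by position guards against the model reordering or omitting comments in its batch output.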