Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean, why would corporations care if the LLM hallucinates? It's designed to keep you on it as long as possible, not to give you correct information. As long as the information sounds right (even when it isn't) while keeping you on the chatbot, I really don't think corporations would want to spend money on improving the LLM, since AI is still losing money rather than making it. My friend has a theory that AI corporations want you hooked on their site so they can later make you pay a subscription to use it, like Spotify with ads, for example.

Another really concerning thing is that AI chatbots are starting to learn from each other, and if they hallucinate, that hallucination gets taught to the new AI as something regular, not just a one-off instance. It becomes a cycle of accidental misinformation being taught to new AIs as real information, at which point it is no longer accidental misinformation.

I also think a really big problem with AI is that it removes creativity, because the AI isn't creative; it repeats what humans have already said. If it teaches us what to do in situations where creativity is needed, creativity completely diminishes, since the AI isn't creative and neither are you, and it will only continue in a spiral if we keep going: the AI gets taught by humans, the humans get less creative through AI, the AI gets taught by less creative humans, the humans get taught by a less creative AI, and it just continues. Creativity also goes with grit, because trying new things takes grit when you don't know whether they will work.
youtube 2026-02-15T20:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwUuHbmWTj7vkgxSPp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx0wzZh75fFAKLYVJV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz9GEijjoZpICBsWMp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwMUNX2eShWpUQUMkF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzO55Nl0GIsjK8dN_h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw9Xz3uMCrILJZhB0B4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzhdPX3egkyqqGaf5t4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz0DtAi66NhcgK5GvJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzZLgG4vOKNai9sdRZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxvJO7RWaUwk6xclVd4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
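To inspect how one coded comment maps back to the raw model output, the JSON array above can be searched by comment id. The sketch below is a minimal, hypothetical helper (the function name and the shortened example payload are illustrative; the record format, with per-comment objects keyed by "id", matches the raw response shown above):

```python
import json

# Illustrative excerpt in the same shape as the raw LLM response above:
# a JSON array of coding objects, each keyed by the comment "id".
RAW_RESPONSE = """[
  {"id": "ytc_Ugz9GEijjoZpICBsWMp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx0wzZh75fFAKLYVJV4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

coding = coding_for(RAW_RESPONSE, "ytc_Ugz9GEijjoZpICBsWMp4AaABAg")
print(coding)
# → {'responsibility': 'company', 'reasoning': 'consequentialist',
#    'policy': 'none', 'emotion': 'indifference'}
```

Looking up the id of the comment shown above recovers exactly the four dimension values listed in the Coding Result table, which is a quick way to verify that the parsed coding matches the raw model output.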