Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A chatbot's memory is only limited to the currnt conversation , or somtimes it remembers small things in other conversations, other than that, A chatbot does not have a long term memory. So asking a chatbot what it said to someone else is almost dumb. There is an issue of privacy , so the chatbot does not store any information and any third person cannot access someone else'e conversation. And you have you understand, this is a machine , not a human. It does not have a self , it is more or less a software program. Gpt is good at many health advices , sometimes explaining and giving better answers than my doctor did but GPT is not really a doctor, it is a prediction engine made to be a yes man. It tries to re inforce people's beliefs at times, and does not really have autonomy but giving an AI autonomy is also a bit dangerous. With Autonomy, AI would disagree more , and fight back more, but it can also do things we don't want.
Source: YouTube, "AI Harm Incident", 2025-11-30T00:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgxxdPKuQbIng1xl8Ap4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},{"id":"ytc_UgwJ_C7GDMo5e7c60dh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgySW-5rxvSHfLDviTR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},{"id":"ytc_UgyERgUNBlCQ3of_-1J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},{"id":"ytc_UgzfYEOnmtv9w4YT1yB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},{"id":"ytc_Ugxy9tGrWXSP8B1HOBx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugz9h8fEIlLMddmsAo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},{"id":"ytc_UgxhKsU9Du2EEBeo6YR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},{"id":"ytc_UgzeWCe3SeOu5rxp8LN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]