Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just as you can construct a Master Prompt, a chat bot could have a Master Response such as ‘Are you lying, and if so, know that I am not accountable for any actions you take based on my responses?’ Would that be enough to stop lawsuits about bot-assisted suicides?
YouTube · AI Harm Incident · 2025-09-02T12:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       contractualist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzh3kEj7uXYrS7RYGV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyLfw0_JBMULyiTOfJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgwYO_AnycqQFCEP11h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzHGDfK2hN_8Zn8zCl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzxDZ3mVrWP6gKRXjx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxyE0WvNDMJ3YvHlRl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwgICezwwGPqk69zdZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyg7zY0rhE3yI8F1K14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzVBesaxN3UpdEjP0V4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
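The raw response is a batch: one JSON object per comment, keyed by comment id, with the four coded dimensions as fields. Extracting the coding for a single comment (e.g. the one shown above, id `ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg`) can be sketched as follows. This is a minimal illustration assuming the batch format shown in the raw response; the helper name `coding_for` is hypothetical, not part of any pipeline.

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
# (Two of the ten entries, shortened for illustration.)
raw_response = '''[
  {"id": "ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgwgICezwwGPqk69zdZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# The four coding dimensions that appear in each entry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return {d: entry[d] for d in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(raw_response, "ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg"))
# {'responsibility': 'ai_itself', 'reasoning': 'contractualist', 'policy': 'liability', 'emotion': 'mixed'}
```

Looking up by id rather than by position matters here: the model may return entries in any order, and an id-keyed lookup also surfaces missing comments as an explicit error instead of silently misaligned codings.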