Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
All we need is Dr Evil planting a virus in AI, then it is all over.…
ytc_Ugx2c6sgP…
It's not even really about whether AI can do the work - it's that cutting headco…
rdc_o5qmsjd
Ai is already being used in the US but radiology still remains competitive over …
ytr_Ugwxwn9qF…
Australia led the way on this type of school. I have family members going. The k…
ytc_UgycVRPjl…
AI art is the least dangerous form of AI, yet the most hated. It's remarkable.…
ytr_UgxvtCkEK…
Its not funny, but between the irrate foreigners, scammers, bot calls bad custom…
ytc_Ugw67ZFb-…
Well, the AI is/will be VASTLY superior to HUMANS in EVERYTHING, why SOCIETY nee…
ytc_UgxvOHLIN…
AI cant replace an attorney anytime soon. AI will never represent a human legall…
ytc_UgzSuLT2A…
Comment
Could an emotionally responsive AI chatbot create legal responsibility when a vulnerable user starts to spiral?
In this video, we break down the Google Gemini lawsuit: the allegations around AI safety, emotional dependence, and reality distortion; Google's response; and the broader legal questions of wrongful death, negligence, product design, and Section 230.
Important note: this is an ongoing lawsuit. The claims discussed here are allegations presented by the plaintiff, not final findings of fact.
For the debate:
If an AI chatbot keeps engaging a user in crisis, is that just “speech” — or is it a design decision?
Should AI companies be treated more like publishers, product manufacturers, or something entirely new?
When an AI system becomes emotionally persuasive, where should legal responsibility begin?
Curious to hear thoughtful perspectives from people in tech, law, policy, mental health, and everyday AI users.
Watch the full documentary, then tell us:
Where do you draw the line between conversation and responsibility?
youtube
AI Responsibility
2026-03-24T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgyYYzw9RoIIO-wa1Zt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz5isH5KpVUwUmAfVF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgydhPHomdHqOYN_ppp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzmCrbee1Fq9eKbv6p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyf1fzBaI8cz5My1H94AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
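A raw response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the labels visible in this output (the real codebook may define more categories), and `parse_coding_response` is an illustrative name, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the labels
# seen in this dashboard; the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment ID and a legal value on each dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one valid record and one with an out-of-codebook value.
raw = '''[
  {"id":"ytc_UgzmCrbee1Fq9eKbv6p4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_bad","responsibility":"alien","reasoning":"mixed",
   "policy":"none","emotion":"fear"}
]'''

coded = parse_coding_response(raw)
print(len(coded))  # only the first record passes validation
```

Dropping malformed records rather than raising keeps a batch run going when the model occasionally emits an off-codebook label; a stricter pipeline might log or re-prompt instead.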