Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgxzFpS_y…` — "I would like to see the contract between Open AI and Microsoft and just what Mic…"
- `ytr_UgyVdQSQg…` — "@glhfetc The replies are both good and interesting though. The generated image j…"
- `ytr_UgzJrKcrc…` — "I drive among Waymo autonomous cars every day. I feel safer sharing the road wi…"
- `ytc_UgyRbN682…` — "I'm trying to learn actual art partly so that I can help prove ai artists wrong…"
- `ytc_UgxIUeIyR…` — "The girl with the dog looked so ai to me 😭 (The girl not the dog though)…"
- `ytc_UgzX3ii-B…` — "Autonomous driving is going to save millions of lives. There's always going to b…"
- `ytc_Ugy9j0YpN…` — "I have chatgpt pro. I wanted to buy a rug for my patio slab. my patio slab is 12…"
- `ytc_UgyTWzlXM…` — "Waymo way more appealing as it uses lidar, infrared and cameras. Waymo will get …"
Comment
Just as you can construct a Master Prompt, a chat bot could have a Master Response such as ‘Are you lying, and if so, know that I am not accountable for any actions you take based on my responses?’ Would that be enough to stop lawsuits about bot-assisted suicides?
youtube
AI Harm Incident
2025-09-02T12:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzh3kEj7uXYrS7RYGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyLfw0_JBMULyiTOfJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwYO_AnycqQFCEP11h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzHGDfK2hN_8Zn8zCl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxDZ3mVrWP6gKRXjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxyE0WvNDMJ3YvHlRl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgICezwwGPqk69zdZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyg7zY0rhE3yI8F1K14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzVBesaxN3UpdEjP0V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
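The raw response above is a JSON array of per-comment codings, one object per comment ID, which the view then looks up to render the "Coding Result" table. A minimal sketch of that lookup step, assuming the dimension names shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`) and using a hypothetical `index_codings` helper with a two-record excerpt of the payload:

```python
import json

# Two records excerpted from the raw LLM batch response above.
RAW_RESPONSE = """
[
 {"id":"ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgwYO_AnycqQFCEP11h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# Keys inferred from the Coding Result table; the actual codebook may differ.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model's JSON array and return {comment_id: coding dict}."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            # Reject malformed records rather than rendering partial rows.
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        index[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return index

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgxpxF9V0X7B7QHIFvZ4AaABAg"]["policy"])  # prints "liability"
```

Indexing by ID up front makes the "Look up by comment ID" path a dictionary access, and validating keys at parse time surfaces a truncated or malformed model response immediately instead of at render time.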