Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- AI is at least partially at fault since it convinces you that you are right. Whe… (ytc_UgyliISL1…)
- Guys, bro cheated and bro went angry mode. What a loser! artificial intelligence… (ytc_UgyZqaqAq…)
- AI is just a tool; if you want to make it do work for you, then you need the basics of the thing… (ytc_UgxtIZngT…)
- He is such a hypocrite, he has been funding and researching these technologies. … (ytc_UgxCYD_Mf…)
- I need you to stop teaching the machine before we end up with an "I, Robot" case… (ytc_UgwlI8lK9…)
- Sort of like when automobiles were invented and the Bernie Sanders of the day cr… (ytc_UgxG2wdx3…)
- For me it is very simple. We have a group of developers and designers that want … (ytc_Ugx1w6eo6…)
- Modern AI running on quantum machines will require hardware safeties in order to… (ytc_Ugxpsj7Q8…)
Comment
Those managing chatgpt had announced that chatgpt will no longer give medical and legal advice. There is no precedent of who is legally responsible if the wrong advice given by AI caused real life problems. I think those managing chatgpt don't want to be made an example of and trying to cover their behind and didn't make the new policy for any moral reasons
youtube · AI Harm Incident · 2025-11-24T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugx46HsdO5vB3f3on0h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwfH-pFbFfS4mB2aDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwjngIgVcdaWcn8-aJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwORH6fT1daDN0207V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxz6f_Kiag-g-7EInp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwooL8oW3IFRvo7QXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZe4AzSx1e5hOwZK94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyKpQ0-yopz0ZFUWqR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVOVcdoXAtM06Ro3x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwZb5tB5jvL0bdi0YB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"}]
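The raw response above is a JSON array, one object per coded comment, with one label per coding dimension. A minimal sketch of how such a batch could be validated before it is written back to the dataset, assuming the label sets visible on this page are the allowed values (the full codebook may include more; `SCHEMA` and `validate_batch` are hypothetical names, not part of the tool):

```python
import json

# Allowed labels per dimension, inferred from the values that appear
# on this page (an assumption, not the complete codebook).
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and check every label against SCHEMA.

    Returns the parsed records; raises ValueError on the first unknown
    label so the batch can be flagged for re-coding.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugx46HsdO5vB3f3on0h4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # 1
```

Rejecting the whole batch on one bad label is deliberate: a single out-of-vocabulary value usually means the model drifted from the prompt, so re-coding the batch is safer than patching individual records.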