Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Mr. Altman, while you warn users that their private chats with ChatGPT can be weaponized in court, it is time for OpenAI and the tech industry to face accountability. AI conversations are becoming an essential part of people’s lives, whether for support, education, or advice, and must be protected by clear, enforceable privacy laws.
Users deserve the same rights to confidentiality here as they do with doctors, lawyers, and therapists. It is not just about tech innovation anymore; it is about human dignity and trust. So instead of quietly collecting and exposing user data under legal pressure, OpenAI should lead the charge to legislate digital client privilege, data sovereignty, and strong user protections.
Privacy is not optional. It is a right. If AI companies want to earn user trust, they must respect that, not just warn people after the fact.
Source: youtube · 2025-07-30T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxmvKxIBiv3l5KARsJ4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzTeyyok9c9hhA5VzR4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_UgyXczWVzVDYFg134Rt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugzsh9BC7jiAo32sAn94AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgyAKUq6PLrHCsGd52B4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgwS7gGAr-EnQWAxeZN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugx5UBnZRHiO2ALPfRl4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgyDjXm1ayue0pFAmQ14AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzTAW43RVop5KAks_B4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugwu1p81KpwoaOlRuTJ4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "regulate",  "emotion": "indifference"}
]
```
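A coded record like the one in the table above can be recovered from the raw response programmatically. The sketch below is illustrative, assuming the response is a JSON array of objects with the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_by_id` and the inline sample record are hypothetical, not part of the actual pipeline.

```python
import json

# A minimal sample in the same shape as the raw LLM response above
# (one record reproduced from the array; the full response has ten).
raw_response = """
[
  {"id": "ytc_UgyDjXm1ayue0pFAmQ14AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions plus the comment ID, per the schema above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if any record is missing a coding dimension,
    which catches truncated or malformed model output early.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        by_id[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS if k != "id"}
    return by_id


codings = index_by_id(raw_response)
print(codings["ytc_UgyDjXm1ayue0pFAmQ14AaABAg"]["policy"])  # -> liability
```

Validating the key set on ingest is the main design point: a model that drops a dimension or misnames a field fails loudly at parse time rather than silently producing `unclear` codings downstream.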