Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Good for you to genuinely enjoy it. I've been drawing my entire life, but never …" (ytr_Ugw-XZkyf…)
- "As a disabled poor person, no. AI is not accessibility. Making a robot do stuff …" (ytc_UgzRkl3zo…)
- "Just tried Clever AI Humanizer can’t believe it’s actually free and works this …" (ytc_Ugx8i4LRT…)
- "The wrong way round - you build in watertight safeguards first. Idiots. AI is a…" (ytc_UgxlVzHIh…)
- "The 3 laws will not work on A.I. A.I. will be able to rewrite it's own code to o…" (ytc_UgyvQGhLC…)
- "Dont say \"Layoff\" but rather: thanks to AI, you now have more free time for self…" (ytc_UgxRkujFM…)
- "@JX-0001well Then you should know Accounting would be one of the jobs AI can rep…" (ytr_UgxUTyyLJ…)
- "This talk is pure distraction from the imminent danger that is unfolding... list…" (ytc_Ugwc2lsYz…)
Comment
The problem is it's just a program. It's not a sentient being. And a program, no matter how strong or well trained, can only present the concepts that it's been trained to present, and as everyone's different, everyone's going to get a different take from what the AI has to say. Even an AI programmed by the best-meaning company in the world. For example, regarding the "driving a wedge" between the guy and his mother. An AI in a certain situation might understand that the person that's chatting with them is feeling gaslit by the mother, and cautioning that person to be wary is clearly the way to go. But if the situation was more complex (not understood by the AI, or not the AI's fault because the person chatting hasn't told it all the relevant facts), then the AI's advice in the exact same situation might be to open up to people they trust, like their mother. Some of the time that would be the right advice, and some of the time it would be the wrong advice. And, in a way, it's on the human chatting to understand that the AI has limitations, and that they should be wary about placing too much trust in the AI when high-stakes situations are afoot.
Not that AI companies don't have responsibility at all. They do, and need to work harder at getting it right. But it's pretty easy to not get all the facts and just blame AI.
Source: youtube · AI Harm Incident · 2025-11-08T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwIKiXTnHYKRXo5gwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwiigrig9Tm1gecC054AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnhDqRgSdd52_k9bR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYm1l47PamuSqZwtx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwsofJ0YwBqLO8mHMZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDjstqi4p-D-0N77l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJh-4VTbxECflUieh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwoMvqnDFlL9xnuKXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwVhSOzRDlvE0OvuAd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
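The raw response above is a JSON array with one coding object per comment ID, one value per dimension. A minimal sketch of how such a response could be parsed and validated is below; the allowed value sets are only those observed in the samples on this page (the actual codebook may define more categories), and the function name is illustrative, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from the codings shown on this page.
# Assumption: the full codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"none", "user", "ai_itself", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "resignation", "fear", "outrage"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of codings) into an id-keyed dict,
    rejecting any value outside the known codebook."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# Example: one row taken verbatim from the raw response above.
raw = ('[{"id":"ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
coded = parse_response(raw)
print(coded["ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg"]["emotion"])  # resignation
```

Keying the result by comment ID supports the page's "look up by comment ID" flow: a coding is fetched with a single dict lookup.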