Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "As a digital artist, i only use ai for when im bored and i want to see ai do a s…" (ytc_Ugx5YS1rr…)
- "its not even the same. . . . how much openai paid you to make this comment?…" (ytr_UgwfQu7tE…)
- "@dribrom And therefore people shouldn't mind having their pictures undressed by …" (ytr_UgyxhnJnj…)
- "Didn't watch the video BUT...a long line of self-driving cars is called.....WAIT…" (ytc_UgxNU867_…)
- "i think its not the same when a human draws inspo from another artist because we…" (ytc_UgyED3lpl…)
- "Of course AI could do that when it’s only human interpretation that determines w…" (ytc_UgxRtPIrv…)
- "I recommend to rebind the copilot button to f13, i did this with the "menu" butt…" (ytc_UgwyKf8VW…)
- "This debate is flawed in many ways because both sides miss the real, deeper ques…" (ytc_UgwyM9LKV…)
Comment (youtube · AI Harm Incident · 2024-08-04T00:2…)

> Tried asking GPT about timeline for James Cameron's submarine dives this morning. Machine kept giving me embellished, incomplete answers, vague details. Any specific dates, gear, people, machinery involved I asked about had to be clarified half a dozen times to get to coherent reply. The fact that machines elaborate past facts to make something "sound good", very alarming. Tossing around information that is 90% true, 10% made up is plain dangerous. Incomplete information given as truth = AI gossip at its worst.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyqe9wOA6LjL_7Up_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxWLuBBPq6F09riOr54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVCdTMT6D4ydUCRN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9KMqtekHku2-2NgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3ROlWx6cSjloUwUF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy21QXsLbFka1lX4wx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqZfCY4m27A4AsOSZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx4WaTkqBvPtHR6aCJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgznEf1zIgRg3p5dtOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyW4SaPFNZB_DCo1Vd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
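The raw response is a JSON array with one object per comment, carrying the four coding dimensions shown in the table above. A minimal Python sketch of parsing such a response and looking up a comment's codes by ID; the allowed values are inferred from the codes visible on this page and the actual codebook may include more categories:

```python
import json

# Allowed values per dimension, inferred from the codes visible above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the inferred codebook, so malformed model output is caught
    before it reaches the coded-comment store.
    """
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = codes
    return out
```

With the response parsed this way, the "look up by comment ID" view is a plain dictionary access, e.g. `parse_codes(raw)["ytc_Ugyqe9wOA6LjL_7Up_d4AaABAg"]`.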