Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The comment section, as per usual is out of control. All she was saying was put…" (ytc_Ugw2jvGRD…)
- "AI's profits are being generated by the very people who's jobs it will replace. …" (ytc_Ugxzunn1T…)
- "Use search with -ai on the end to avoid ai answers. AI answers are expensive in …" (ytc_Ugyhe2qac…)
- "A job application is a formal process and a set of professional documents that a…" (ytc_Ugy5Izssn…)
- "So he floored the accelerator, dropped his phone, was rummaging around the floor…" (ytc_UgxuQD8im…)
- "My mom is teaching a similar level/situation and it is hugely frustrating and ti…" (rdc_nu16jn7)
- "Wait, so you are saying people shouldn't put trade secrets, inside information, …" (ytc_UgyMtF_ZB…)
- "Any ai take over would last 5-10 years. Supplied for the ai would run out. Not t…" (ytc_Ugz1QJ7Tt…)
Comment
are humans any better? its going over all these extreme hypotheticals but seriously would human beings do any better? like have humans never blackmailed someone? am I the only human being that randomly has malicious thoughts?
Like 'the AI was given a hypothetical scenario where a single human being is in mortal danger, and it had to decide between the entire american nations interest or a single human beings life'. What the fuck is the right answer here? Is there one? Like human militaries make this decision every single fucking day? Nation states do this on the regular? What are our expectations of AI?
Source: youtube · Incident: AI Harm Incident · 2025-07-23T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwbux0NNkjdVKiRJLF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMbL2XYToViPAKQUh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwlEKq1Y7fzNMvQi394AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzheSfjtZlNDkJtNA94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyVD9GjJyN58AtV-jp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyzOmysSg0nl5hJj5d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyEUDu_20utTr0QKX54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyC0D_L3clvsKTnfXJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy8BPVxwnUAAaB5dQh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyP95xHKDhNP_KdByZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
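The lookup-by-comment-ID workflow described at the top can be sketched in a few lines of Python, assuming the raw LLM response is a JSON array of per-comment codes like the one shown above (the `lookup` helper name is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding rows, one per comment,
# each carrying the five coded dimensions keyed by comment ID.
raw = """[
  {"id": "ytc_UgyVD9GjJyN58AtV-jp4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]"""

# Index the rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if uncoded."""
    return codes.get(comment_id)

print(lookup("ytc_UgyVD9GjJyN58AtV-jp4AaABAg")["emotion"])  # prints "resignation"
```

In practice the response may also fail to parse or omit IDs, so a production version would validate the JSON and the expected keys before indexing.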