Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgydZfYda…: "Simply. AI means computers thinking themselves to do things. This power is diver…"
- ytr_Ugwjxg6cz…: "Thing is humans are programmed through having experiences. AIs are programmed th…"
- ytc_UgyWBZRsu…: "If the ai will hallucinate answeres for lawyers, it will hallucinate answeres fo…"
- ytc_UgxG1Sv2K…: "The pace of AI development really is unlike anything we've seen before — it's re…"
- rdc_nmhjfbw: "How about we automate CEOs with AI. Honestly, I don't think it'd be all that har…"
- ytc_Ugz0Y0ShE…: "This is kinds true but copilot, GPT and stuff are more to unblock the brain free…"
- ytc_UgxGu4y8f…: "If you commission someone to make a custom painting for you, would you take the …"
- ytc_Ugw_FU3JG…: "36:33 Can't believe I'm saying this, but I'm kind of on the clanker's side here.…"
Comment (youtube, 2025-10-29T18:3…):

> I watched video where AI showed a guy how to carry out a terrorist attack. It told him how to do it, gave him a list of targets in his local area based on historic attendance, and drew him the plans to the building.
>
> AI aint good yall! This thing planned a terrorist attack better than Khalid Sheikh Mohammed and encouraged the guy to kill people and himself.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzApMjbJHKGZsimuDN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwoX9sMqyS8mAux_St4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDYVpTsQeeEHUS5yp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzL3seoz_FDFiQ_71R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwlV9eqaXSjGWjwxpN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzv8bTqnWnm9jI4ngp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2Bh658A5sanEZR9x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMXc77WkmDwU6miQN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyeZdkhFW_ycO__XNh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxFIBEcWujic4U9n-d4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
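Locating one comment's coding inside a batched response like the one above amounts to parsing the JSON array and keying each record by its `id`. A minimal sketch in Python, assuming only the array-of-objects schema shown (the helper name `index_by_comment_id` is hypothetical, not part of the tool):

```python
import json

# Two records copied from the raw response above; in practice `raw_response`
# would be the full batch returned by the model.
raw_response = """
[
 {"id":"ytc_UgxMXc77WkmDwU6miQN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyeZdkhFW_ycO__XNh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw batch-coding response and key each record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgxMXc77WkmDwU6miQN4AaABAg"]
print(coding["policy"], coding["emotion"])  # → ban fear
```

The dict lookup is what makes "inspect the exact model output for any coded comment" an O(1) operation once the response is parsed, rather than a scan over the array.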