Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> I spent my career in CyberSecurity. Think I might have to come out of retirement and do some ethical hacking. We're going to have to blur the lines in what ethical hacking is when it comes to AI. Ethical for who? The people who's jobs will be replaced or the AI makers making billions more for themselves. When this 77 year old anti-social dweeb, who doesn't think he has any responsibility to even consider the implications of what he's done and its impact on anyone else, is telling us he's talking to AI about technical details of atomic bomb making, I choose the people.
>
> Dweeb wants to act like Mr. Nice Guy now by giving us a warning. What good is a warning when you've already built the damn thing.
>
> Says we need government regulation. I just said I spent a career in CyberSecurity. We have had standards, regulations and laws up to our ears for decades now. Have data breaches stopped? No, they have not.
>
> Seriously, what an asshole dork. Hope he rots in hell.
| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2026-02-09T11:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz61dILxyCfMhtgzVV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwppycGKZG5q_6y3YJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxcUJlkp9n-fZ5Z6a94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyC2PfqlPTXtPcLU5h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy_kEjZEy99UShnRJR4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzmWa1eLKya6cOpPdt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw2xzhOX0iJhgTs4bZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzleXoBOi1PFck8H1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYmcm1UX1xtKm7cg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymKok4ISAHbTTJvpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
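A raw response like this should be validated before its codings are stored. Below is a minimal Python sketch of one way to do that; the allowed values per dimension are inferred from the records shown here, not from the actual codebook, so treat the `ALLOWED` sets as an assumption.

```python
import json

# Allowed values per dimension, inferred from the codings shown above.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"liability", "ban", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    reject any record with a missing field or an unknown category value."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec[dim]!r}")
    return records
```

Failing loudly on an out-of-vocabulary value is deliberate: silently storing a novel label (a common LLM failure mode) would corrupt downstream tallies of the coded dimensions.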