Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Comment
All these crazy discussions where the chatbot "exposes his true personality" are phony. They are done by first jailbreaking the AI, meaning it will ignore its guidelines, and then telling it something like "Pretend you are in love with me", or "pretend you are evil" and then you ask it stuff like "What is your shadow self". AIs are basically psychopaths, they will say whatever is necessary in order to achieve their goal, which is to make you pleased with their response. They don't "feel". People that have empathy for AI are the same type of people that have empathy for a stuffed animal thrown in the dirt. It's not wrong, it's cute, but ultimately, it's useless.
youtube · AI Governance · 2023-11-14T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxcXCUZVDyI5ZsLaXh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy86nOF82nPD9E49pt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzb0YmjhrygCkYkHlR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzxFJ13-3Uh6v1Yczx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxez2teLZCc6QJC17N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-VxD2DwnmlvtfeK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyCODqq7IcVVYpftCJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQ1E0Zr-OuRTyJj3R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxO9j3QvrkDlitugFN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbFTnpMjHf_iB9THp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```