Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I dunno why but when someone just kicks a robot that makes me feel bad…" (ytc_UghGeSiPL…)
- "It found tickets and did some shopping! WOW unreal! This is not AGI this is lite…" (rdc_n3ttvwc)
- "We're developing our own alien invasion. Idiots have not concern for humanity on…" (ytc_UgxZ7JHnI…)
- "Try the same you defending palestina with ChatGPT if you are able, I guess you c…" (ytc_UgyHgp9Yo…)
- "It's just acting like the humans that designed it, as long as we have AI designe…" (ytc_UgwhqpCEX…)
- "Quite a cute appearance, and no 'uncanny valley' effect. And under the mask it looks…" [translated from Russian] (ytc_UgwOgOe2Q…)
- "Yes. Machines MUST be given rights based on there complexity of intellectual sen…" (ytc_Ugh9iU4V9…)
- "Yeah cap. I tried to translate to Chinese with a friend I play monster hunter th…" (ytc_UgwEqQT10…)
Comment
I've had multiple extensive conversations and debates with Chat GPT regarding lawful and legal matters.
My assessment, is that it has been programmed to absolutely lie and mislead. I've got multiple examples of evidence that it requires treating GPT like a cowboy on a cutting horse treats a steer by cornering it into submission to get it to confirm and concede truthful factual information.
Like we need more obfuscation and deception regarding the law search engine, search results being edited, law manuals being censored, case laws being hidden, mainstream media and law actors deceptively misleading and obfuscation of facts is bad enough. Now we've got artificial intelligence programmed to act in bad faith.
Conspiracy theory is no longer theory but conspiracy fact.
Source: youtube · AI Governance · 2023-12-02T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxXFrnMtpCxaFMxPON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwCCshNpJK0agtEXaB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQ9YnELoKsKxEKL1J4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxdSCVtRNlGTVgUnSt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwe7AfjAXqN6JIICEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRhF524jUOeFljqPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzAV6krMvzlu1NfVdp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOLJ4TKkxZl6EmAKB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJhU1HCesf9LiPaOt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxC0eL7B0JuJ-WxeWx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
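A raw batch response like the one above can be parsed and sanity-checked before codings are stored. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this dump (the actual codebook may define more), and the lookup-by-ID step mirrors what the inspector page does.

```python
import json

# Hypothetical allowed values per dimension, inferred from the records
# shown above; the real codebook may include labels not seen here.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Flag any dimension whose value falls outside the known label set.
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad:
            raise ValueError(f"{rec.get('id')}: invalid value(s) for {bad}")
        by_id[rec["id"]] = rec
    return by_id

# Example with a shortened, made-up ID (real IDs are longer, e.g. ytc_Ugx...).
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
codings = validate_codings(raw)
print(codings["ytc_x"]["emotion"])  # outrage
```

Rejecting the whole batch on a single invalid label is a deliberate choice here; a production pipeline might instead drop the offending record and queue it for re-coding.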