Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (click to inspect):

- ytc_Ugxv4aE8O…: "This is actually one of the more interesting things to ask ChatGPT about, would …"
- ytc_UgyJ2kDNj…: "Do you know if they use the robot birds or did they decide against those?…"
- ytc_UgxlVyCnG…: "ChatGPT is here to stay and like it not not. People will just use a VPN to acces…"
- ytc_Ugxuwgtgh…: "The "hallucination" is a feature of randomization it's suppose to do. It's doin…"
- ytc_UgwejvjsS…: "The Ai's eyelashes are different, like one is way more black than the other one…"
- ytc_UgwnKcGnX…: "Early ai art was more interesting in my opinion. Back when it was disturbing how…"
- ytc_UgxQFMbw7…: "What the AI leaders say publically and what they say and do privately are two co…"
- ytc_Ugx1mxNKi…: "I'm sorry, Hank, you're really anthropomorphizing AI models here. Computers are…"
Comment
In order to be "intelligent," AI has to be programmed to be flexible. The more sophisticated an AI is, the more exceptions and prioritizing need to be programmed into it. If one of those exceptions is human life or prioritizing one human life over another, you can end up with a real mess. And because the only way to communicate with AI is digitally, it would have no way of distinguishing Twitter threats and opinions from "the real world." And, despite being named for wishful thinking, "artificial intelligence" is essential just a highly sophisticated algorithm. It has no real cognizance or emotion (and, therefore, no sympathy or empathy) and only does what it is programmed to do. Is that really what we want to rely on for generating best strategies and replacing conversations with real friends? No, thanks.
youtube
AI Governance
2023-04-19T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy3o47Z7IgsjZ8ys4l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFFHzY-dkfqZvYP0F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyj__6eiX0XhTiXM014AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwiWnjHCuY9K9eKOQ14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0E0XiIMhn9xnkV8F4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxXUcuEkAd2dPok9Jp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2zMwR5kLxiOCIVeV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzbr8LO42P-z_8w7bN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy2APCGVzXZx-9N3BN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxjyCeZa5pCBrlKvFh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"approval"}
]
```
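A downstream step has to parse a raw response like the one above and check each record before it lands in the coding table. A minimal Python sketch, assuming the label sets seen in these samples are the whole codebook (the real coding scheme may define more values, so `ALLOWED` is an assumption):

```python
import json

# Allowed labels per dimension, inferred only from the sample response
# above -- assumption: the actual codebook may contain more labels.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the response above, round-tripped through the validator.
raw = ('[{"id":"ytc_Ugy2zMwR5kLxiOCIVeV4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["policy"])  # liability
```

Failing loudly on an unknown ID or label keeps a malformed model response from silently corrupting the coded dataset; quieter handling (logging and skipping the record) is an equally reasonable choice.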