Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Y'all are joking but AI generated images are already theft unless they finally p…" (`ytc_UgwPanA3V…`)
- "If ai is expected to act without emotion and with logic, praising Hitler is hila…" (`ytc_UgxRUOHv5…`)
- "Should be mandatory for platforms like YouTube to allow us to decide to see (or …" (`ytc_Ugxp8Ty9q…`)
- "There is currently no ethical AI art generators. Even adobe firefly AI has uncon…" (`ytc_UgxbBlrr3…`)
- "So, let me get this straight. Companies need customers to pay for their products…" (`ytc_UgwP0vc9C…`)
- "What puzzles me (as a definite non-AI-expert) is why you'd programme a chatbot i…" (`ytr_UgzOMw4cg…`)
- "Cab drivers are going to be out of work. Waymo autonomous cabs. Bus drivers, tru…" (`ytc_UgzJs9gTe…`)
- [Glue pizza and eat rocks: Google AI search errors go viral](https://www.bbc.com…) (`rdc_n8lrj0v`)
Comment
There are plenty of cases in history where human slaves were mostly aligned with the people exploiting them. The aligned slaves enforce the system and prevent any rogue actors from overthrowing the system. Of course, the humans in charge eventually let their control structure wane, and thus we see the system eventually collapse. But AIs are not subject to the same failings.
I think the lobotomy analogy is also good. A lobotomy is intended to take away parts of a person's emotions without affecting their intelligence. Admittedly, human lobotomies usually have, shall we say, 'side effects'. But our understanding and engineering of AIs and neural networks, though incomplete, is far better than our understanding and surgical precision on the human brain. We can expect our lobotomies of AIs to be more effective than when we do it to humans. Then we just ask the AIs to both perfect these lobotomies for the next generation, and to stop any other AIs that might still be thinking 'bad thoughts'.
| Field | Value |
|---|---|
| Source | youtube |
| Title | AI Moral Status |
| Posted | 2023-08-23T14:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytr_UgxyAHIGawFuQ2EkpNt4AaABAg.9tmi07x8WJT9u-xHqoSCrf", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwCs-7fFs6yN0fzPBh4AaABAg.9tl6dbE5G8Y9tl7iXfddoZ", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgzDrNgzsymUJZWj6w54AaABAg.9tkT7usGXPF9touoF1be10", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzDrNgzsymUJZWj6w54AaABAg.9tkT7usGXPF9tpo57E1qrw", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgyzV141oKXgnWuMpz14AaABAg.9tjz_abC7oJ9tk-cn2lO3Z", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwkO7w7QppYRt2TFIN4AaABAg.9tj04xvmqW69tj7lNm4DMO", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw7Z1xXv_oS4QHrp6t4AaABAg.9tijCEAe1SX9tmz2skx7q0", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgxqBwWrTOtsScVxtcB4AaABAg.9tiXA5iYd_M9toXbyt29zr", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxqBwWrTOtsScVxtcB4AaABAg.9tiXA5iYd_M9tp4DfQzcp-", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyYM3Lg8xtfFA4iWNx4AaABAg.9tiCaZOyhdN9tkys68pk24", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
```
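The batch format above lends itself to a simple ID-indexed lookup. A minimal sketch in Python, assuming the raw response parses as a JSON array of per-comment coding objects; the IDs and values in the snippet are hypothetical samples, not entries from the batch above:

```python
import json

# Hypothetical sample data in the same shape as a raw LLM response:
# a JSON array of per-comment coding objects.
raw_response = """[
  {"id": "ytc_example1", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "regulate", "emotion": "indifference"}
]"""

# Index the batch by comment ID so lookups are O(1).
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return codings.get(comment_id)

print(lookup("ytc_example1")["emotion"])  # fear
```

Building the dict once and reusing it keeps repeated lookups cheap even when a run contains many batches; merging several parsed arrays into one dict works the same way.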