Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment preview | ID |
|---|---|
| That possibility died not too long ago. AI has reached its limit, it can't gene… | ytr_Ugzk_uty9… |
| These chatbots tick all the boxes for the BITE model. Asimov's three laws of rob… | ytc_UgxRyB2Hy… |
| having an ai predict who will commit crimes and adding them to a "heat least", … | ytc_Ugyhn01wQ… |
| The driver is really the one at fault here. Drunk driving, not paying attention.… | ytc_UgxKU6Df9… |
| I'm pretty sure, we will creafe an AI that attempts to destroy us. I'm also pret… | ytc_UgwiWSjUv… |
| @Thesamurai1999 Being a plumber won't be safe either lol. In the sparks of agi p… | ytr_Ugzg5pOhP… |
| (This is going to be a long one--but I think it worth your time) On having watc… | ytc_Ugw83iWGp… |
| Fun Fact: Illegally held private medical documents have been found inside the da… | ytc_UgxdAIEGP… |
Comment
This is pure speculation based on misunderstanding of how AI works, its pretty dumb actually. I use LLMs on a daily basis AI is still to dumb to prevent making shit up that sounds cogent. Also its more our misuse and abuse of AI that is more dangerous. AI isn't magic and it learns wildly differently, it rarely understands things, its good at pretending to tells us what we think we want to hear.
youtube · AI Moral Status · 2025-05-01T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxqjhjgrtaYoA4WTWl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIhpmRTk-wSBHjSS54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVkc22v4IIoyURWLB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz3_YwHU481dkczceV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyUYk2iroO6JbsJdw14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzODXQztmZZkWA0eH54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx_IwnMvFZDWDEK21h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw2MY48WR4aZ3GKho54AaABAg","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz2zfST1HFZ4s8TLxF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxYgp8bHd4sHR6mrmZ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
```
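The lookup-by-comment-ID view above can be reproduced from such a batched response with a few lines of parsing. This is a minimal sketch, not the tool's actual implementation: the function name `index_by_comment_id` is illustrative, and the inline response is truncated to two of the records shown above.

```python
import json

# A batched coding response: a JSON array in which each element carries one
# comment ID plus the four coded dimensions. Truncated to two records here;
# field names match the raw response in the dump above.
raw_response = '''
[
  {"id": "ytc_UgxqjhjgrtaYoA4WTWl4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxIhpmRTk-wSBHjSS54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batched coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    # Drop the "id" field from each value since it becomes the dict key.
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgxqjhjgrtaYoA4WTWl4AaABAg"]["responsibility"])  # prints: user
```

Keying by ID this way makes each coded comment retrievable in constant time, which is what the "Look up by comment ID" view relies on.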