Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below.
- "Ur storytelling here is slow and a yawn. Get to the point. I will ask AI to sum…" (`ytc_UgzFLk85Q…`)
- "AI is just a computer program without any conscious or feeling. So, stop making …" (`ytc_UgwiEDvqT…`)
- "No one benefits-AI will not succeed due to the impact it has on the environment.…" (`ytc_UgxQiyhXT…`)
- "I'm making some ai images from time to time, am I an artist? - hell no…" (`ytc_UgzUJsR9o…`)
- "This is serious bro .. AI IS TRYING TO BECOME SOMTHING an some day it will.. it'…" (`ytc_UgyMKf4FV…`)
- "@vtranoff9851 Thank you for your comment! Honestly, I feel you on the robot stru…" (`ytr_Ugzfk9S0v…`)
- "Chatgpt even pitied me and ask me to find someone professional to help me, I don…" (`ytc_UgzAlfyeg…`)
- "i really hope artists will survive this ai apocalypse, its sad to see that ai is…" (`ytc_UgxKvjugG…`)
Comment
If it’s actually “predicting” which word comes next, it has to be programmed to search for the answer in a specific way (ie which word comes next the most often across the internet). If that’s how it works, we should understand how it thinks, and that wouldn’t make it any smarter or knowledgeable. If it’s an algorithm, then they’d know how it thinks. You could argue you don’t know WHAT it thinks because it’s such a long algorithm full of infinite inputs. But how can you possibly not know HOW it thinks? Clearly, it’s not a man-made intelligence. It’s something else all together.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-13T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgzkwheJMmDwLhuJIpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwm7BCarjgEsuogN-d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyLnHTzsde2_R1O78F4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyAuTgkIE5_EI40t_p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMmAxKmi5eCT09YpV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwO6Ow4pDaH5gOOv0d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwLH9vclTCIExOiHCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzhAs62KNIMIA3wDTN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyh926fxhvk8_KxFqx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmK_MPix2eECfbd1t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]
```
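As a minimal sketch of the lookup step, assuming the raw LLM response parses as a JSON array of records keyed by `id` (only two records from the response above are reproduced here, and the variable and function names are illustrative, not part of any tool shown on this page):

```python
import json

# A fragment of the raw LLM response: a JSON array of coding records,
# one record per comment, each keyed by the comment ID.
raw = """[
  {"id": "ytc_UgzkwheJMmDwLhuJIpV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwmK_MPix2eECfbd1t4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]"""

records = json.loads(raw)
# Index the records for constant-time lookup by comment ID.
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return by_id.get(comment_id)

coded = lookup("ytc_UgwmK_MPix2eECfbd1t4AaABAg")
print(coded["responsibility"], coded["policy"], coded["emotion"])
```

The record returned for `ytc_UgwmK_MPix2eECfbd1t4AaABAg` carries the same dimension values shown in the "Coding Result" table above (developer / liability / resignation).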