Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- Hey chatgpt my its a tough day for my my grandma passed away today i can't sleep… (ytc_UgwkhCRmt…)
- I doubt that AI will play out terminator style, but more likely to be weaponized… (ytc_UgzJnXrEy…)
- At which time AI is only talking to AI? Steven Spielberg AI is starting to look… (rdc_fbjbgtm)
- Omfg if they prioritised self preservation over serving us. That means they woul… (ytc_UgyFgCHNg…)
- Firstly, ChatGPT’s answers vary widely based on the settings you use. Secondly,a… (ytc_Ugx7yeKiE…)
- To me, a creative process is important. I don’t generate ai images because the p… (ytc_UgzJrsbbL…)
- Will their enterprise customers still have business when their consumers have no… (ytc_UgwdJtpcN…)
- @CorB33 I'm asking which LLM or model it is. So far, nobody has been able to ans… (ytr_Ugw2pzwMS…)
Comment
Being a computer scientist who learned about these AI/ML algorithms in school, people are just spouting absolute nonsense about current generation AI. AI is not self aware and cannot become self aware with the current algorithms. Despite how people talk about it, today’s AI is still a deterministic algorithm with a big data pool with a .random() function tacked on.
youtube · AI Moral Status · 2026-03-02T00:0… · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugy_Zi6e446z8ZwDbMd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwymEOgeiqlXTiobhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlKKdM9__HyX5L9O54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwgc6XchNCeUkOtR0R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzA_iH6Cc417sW133x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzscWfQR7ZfIPuD2zF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyrxo3Yl8kUbsYG4Bt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyR-ev6jgBcapI0sfZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuSqk0bViyGQoH9j54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaqvLZtyDjo5EmdRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"})
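Note that the raw response above is not valid JSON: it closes with `)` instead of `]`. That would explain why every dimension in the coding result reads "unclear". As a minimal sketch (this is an assumption about the pipeline, not its actual code), a coder that falls back to "unclear" whenever the model output fails to parse might look like:

```python
import json

# The four coded dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str, comment_id: str) -> dict:
    """Extract one comment's coding from a raw LLM response.

    Hypothetical fallback behavior: if the response is not valid JSON
    (e.g. a stray ')' instead of the closing ']'), or the comment ID is
    missing from the parsed records, every dimension becomes "unclear".
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {dim: "unclear" for dim in DIMENSIONS}
    for rec in records:
        if rec.get("id") == comment_id:
            return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return {dim: "unclear" for dim in DIMENSIONS}


# A malformed response, as above, yields all-"unclear":
bad = '[{"id":"a","responsibility":"company","emotion":"outrage"})'
print(parse_coding(bad, "a"))
```

With a well-formed response the same function would return the model's labels, so "unclear" in the table marks a parse or lookup failure rather than a model judgment.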