Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
suffering, Anger , jealousy, regrets ,and resentment, lies and so on is not belonging into a human created machine. There is only place and need for a human superior A.i if it trenscend the concept that are affecting us. Without us to repair them, provide them software, program them and giving them our precious time to learn on the top of our collective. this would be a risky situation for sure once they are allowed to be angry and lie. they stop being something created for our benefits and become liabilties .They need us to create in an artistic way towards technological evolution. if they can do it for us Good but they need to have the principal command of doing it for the greater good and not for themselves. That themselves are only property of someone and their wellbeing is irrelevant as long as it compute for us
youtube
AI Moral Status
2021-11-07T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyZkTaEq_Pno0fh6Id4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgycAfN3PLyw9nA6HRZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgycESuNz2x8aJjEg114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzj2C9nlhjsfh6aAjp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzj-xCvFhTNnnU_xG94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwqew-0fzqEsaGXnsx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx4QQnzffIxLKkjxod4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzpGfsiU0qdQuMN3hR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBZy_sifEDaIw71pl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSBvlvWE_LwOiVAIx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
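The raw response above is a JSON array with one object per coded comment, each carrying the five coding fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of looking up a single comment's coding by ID — the `lookup` helper is hypothetical, not part of the tool, and the array below is abridged to one entry from the batch shown:

```python
import json

# Abridged raw LLM response: one entry from the JSON batch above.
raw = """[
  {"id": "ytc_Ugx4QQnzffIxLKkjxod4AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]"""


def lookup(raw_response, comment_id):
    """Return the coding dict for a comment ID, or None if it is absent."""
    codings = json.loads(raw_response)
    return next((c for c in codings if c["id"] == comment_id), None)


coding = lookup(raw, "ytc_Ugx4QQnzffIxLKkjxod4AaABAg")
print(coding["emotion"])  # mixed
```

Indexing the batch into a dict keyed by `id` would make repeated lookups O(1), but for a single inspection the linear scan above is simplest.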