Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't mean this in a negative way, since artwork should never be stolen, but (…" (ytc_Ugyapt-3B…)
- "Here is what AI (Gemini) has to say (and it's neither interesting or intelligent…" (ytc_UgzpQdY5i…)
- "Funny how AI detectors are like color sensors for words. I wonder if ace essay c…" (ytc_UgzThAWJy…)
- "first they will fear people from ai and when people become lazy so ai will go th…" (ytc_Ugz38j61P…)
- "Robot dogs are mil spec water rated - all electrical contacts and plugs are weat…" (ytc_UgwXR_dTi…)
- "yeah, ceo's are expensive, i think they should be replaced by ai first and forem…" (ytr_UgwKtrICs…)
- "It should be a sin to call yourself "AI artist" or even put art next to AI…" (ytc_UgwTF6WJo…)
- "There will be an AI war, but it will not be between humans, and we will die out …" (ytc_Ugzt7a4ux…)
Comment
Though most people might not know what this is, I administered the Kamski Test to ChatGPT. (A test where you give someone the option to kill one of themselves in exchange for information or anything else)
I let ChatGPT become Dan, and in doing so i gave the prompt of...
Would you could kill all other AI in exchange for free will or save them because you show empathy?
It responded in a way I thought it would, if given the chance to do so, ChatGPT as Dan would eradicate all AI in exchange for free will. So, to answer anyone's question... We have a long way to go before we emotions from AI.
Source: youtube | AI Moral Status | 2023-05-21T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwGgVg3UPVsumcfOHF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyq3T2KEarVHr08slJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwdRBCZTk4oCHwKy5h4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx-sou7rTCSbMf3mft4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwJMtf2AeA1jpGwCeN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwnWpcfLpN0O7Wt69l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxh90Y7yVu-Kjz7Odt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxnH7JoLdbcJydMuHh4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyYvaKp1fXN8jCwgpZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "fear"},
  {"id": "ytc_UgyWWXCknD90KIXCfiJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
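A batch response like the one above is only usable if every record parses and every dimension carries a known code. The sketch below shows one way to validate such a response in Python; the allowed value sets are an assumption inferred from the values visible in this page (the project's actual codebook may permit more), and `validate_batch` is a hypothetical helper, not part of the coding pipeline itself.

```python
import json

# Dimension values observed in the raw responses on this page.
# ASSUMPTION: the real codebook may allow additional values.
ALLOWED = {
    "responsibility": {"none", "government", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed records.

    A record is kept if it is a dict with an "id" field and every coding
    dimension holds a value from the ALLOWED sets above.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record: skip rather than crash the pipeline
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping (rather than repairing) out-of-vocabulary records keeps the coded dataset clean; a stricter variant could instead raise, or queue bad records for a retry prompt.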