Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UghZ2CeGe…`: "Would really any one of you buy a compeletely programmed car sometime? - I am fr…"
- `ytr_UgyWZV4J9…`: "Doesnt matter... job you will get! oh wait ... you wont as there is massive unem…"
- `ytc_UgyS7uFCK…`: "I spoke to Open AI's GPT3 and it was amazingly \"smart\", not just from a perspect…"
- `ytc_Ugws_d2Y-…`: "I fully understand the concept of training bias and the importance of being awar…"
- `ytc_UgzWlO1gI…`: "\"Christian couple\" Blame AI now for everything, not parents, family, religion, a…"
- `ytc_UgxOPp_SF…`: "Its super easy to create similar creepy responses. Preface it with \"create a sho…"
- `ytc_Ugyf6KK0Y…`: "*No Escape from the Fire: The Inevitable Judgment of Atheism* So Joshua burne…"
- `ytc_UgzHkIv5A…`: "Pleas stop these AI robot thingies it's not gonna end well if you don't stop bui…"
Comment

> ChatGPT isn’t actually conscious because of how it was programmed. It’s not programmed to have feelings or be bias in anything, instead, it’s specifically programmed to use these words. It’s basically hard coded in ChatGPT to use certain words to make the conversation more meaningful. A conscious AI however would use the same words on its own accord, without it being specifically programmed in.

youtube · AI Moral Status · 2024-09-14T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgweRDgbprku7jG0f9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKyTfLcI1KSRuZBbd4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyII0dzH5PU6-vQ_lB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfJwS2iBY6-m5EE454AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdVGaT7zaS6ZXYGat4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz1_iGio_zC0yX876R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMXteBUTw4eKtKGat4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzxGZvaHSq078X0Lg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9e_IiMSYl_qcANdB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwc2_C3bsxQJPmu5rB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
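The raw response above is a JSON array, one object per coded comment, with the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. A minimal sketch of the "look up by comment ID" step — parsing the raw model output and indexing it by ID — might look like this (field names are taken from the sample above; the two records embedded here are copied from it, and `index_by_id` is a hypothetical helper, not part of any shown codebase):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgweRDgbprku7jG0f9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfJwS2iBY6-m5EE454AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_by_id(response_text):
    """Parse a raw LLM response (JSON array) and index the coded
    comments by their comment ID for direct lookup."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw)
print(coded["ytc_UgwfJwS2iBY6-m5EE454AaABAg"]["emotion"])  # fear
```

In practice the parse step would also want to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and log the offending response), since the page exists precisely to inspect what the model actually returned.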