Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The soul has its own consciousness, but consciousness is not the soul. No one, …" (ytc_UgxL5mWPV…)
- "out of my depth on the technical side to programming (biology/psychology/litera…" (ytc_UgxVDAPyI…)
- "This episode got me thinking-AI can be risky, but my experience with fun AI apps…" (ytc_UgyCx9F9d…)
- "As someone ignorant on the topic, one of the things that recently bothered me wa…" (ytc_Ugy9WiRCf…)
- "This comment section is probably full of artists who are already aware of what A…" (ytc_UgxMMxyTY…)
- "With or without jobs or copyright issues, my problem with AI is the prospect of …" (ytr_Ugy87CcSH…)
- "Using AI and saying you're an artist is like being a CEO because your dead Dad l…" (ytc_Ugxf-Jbae…)
- "All I'm saying is you morons better not be bullet proofing these dumbass robots.…" (ytc_UgyhcoqSL…)
Comment
If an AI creates a more powerful AI? why would it do so and wouldn't we have told it to do so? and even if we tell AI to develop a more powerful AI, wouldn't we create one with a purpose? why would its purpose be its own survival? Without its purpose being its own survival, why would it develop as evolutionary biology would require for survival? Wouldn't it's survival depend on its efficiency to satisfy a condition given by us? Doesn't this subject revolve around purpose? and wouldn't it be interesting to assume COnciousness based on what purpose something has programmed into itself? are robots capable of doing anything without a clear purpose ( as I would say many humans do )?
youtube · AI Moral Status · 2017-02-23T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgiveMjZemHGGHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghOnqpItWsoN3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghxIkKCF0da9ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggQC_X6GCXb-XgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgiZTomR8t9t8XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghPnX8p8kXgNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjkdfxV0TC693gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgimBcFcL1grSHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgjwqWnr_kYH83gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UghMs1kjBq3vf3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
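The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response as a JSON array and index the coding objects by their `id` field. This is a minimal sketch, not the app's actual implementation; the two entries below are copied from the batch shown above, and the dict-based lookup is an assumption about how the lookup works.

```python
import json

# Two coding objects copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_UgjkdfxV0TC693gCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UggQC_X6GCXb-XgCoAEC", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# Index every coding by its comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment displayed on this page; its values
# match the "Coding Result" table (developer / consequentialist / liability / mixed).
coding = codings["ytc_UgjkdfxV0TC693gCoAEC"]
print(coding["policy"])  # liability
```

With the full ten-object response loaded the same way, any of the sampled comments can be resolved to its four coded dimensions by ID.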