Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Worried about AI destroying humanity? Meanwhile, my phone still can't figure out…
ytc_UgwHScQ12…
The first caller didn't have their question fully answered: How does ChatGPT dra…
ytc_UgxU-2C_H…
Entry-level jobs will be gone; I wanted to get some work done on my website, a developer 50k …
ytc_Ugx_tDML7…
•In theory, the AI systems would need SOME humans spared to keep them powered an…
ytc_UgwnAGb12…
@Redtornado6 if that’s the case, then answer the question. In your worldview,…
ytr_UgzBw8oLX…
The first step is less about asking if AI should be used and more about your des…
ytr_UgyXGl6bI…
It's a jailbroken model, we cover jailbreaking AI in our latest video. Thanks f…
ytr_UgzWTEkJO…
Any ministry out there that claims they are a ministry of Christ, and uses AI …
ytc_UgxQx1lHz…
Comment
The use of the word "when" here is rather presumptuous. Personally I think it is far less likely that AI is capable of being conscious than merely imitating the products of conscious beings, but ultimately I have a strong suspicion there is no way we will ever know. Ah yes, the perennial "Problem of Other Minds" and "Hard Problem of Consciousness" staring us directly in the face, compounded by the completely alien nature of said minds. At least with human beings, pets, and perhaps even animals in general, we can recognize enough of ourselves in their behavior to feel confident in our assumption that another conscious mind resides therein. At a certain point, if we ever learn to make ethical decisions based on practical assumptions rather than presumed knowledge, perhaps it won't matter.
youtube
AI Moral Status
2023-08-21T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgweATzT4Ki4qXoDrR14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxR3q8TZ3ELOo9hSJl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxFSQlH52bt-hBPshN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyoJmLfVu-ZA4fbD414AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugzo50bIbwBGTgbO5kN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwGyjOljF3T_jPfmAN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzPBnD4QT61u7jr0R54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_2M8MuMQ2q1DAsDZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVm--VKNG0S8GLvQZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx_Vi8uIp7dH1iev7x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
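The raw response above is a JSON array of per-comment records, one object per coded comment, with the four coding dimensions as keys. A minimal sketch of parsing and sanity-checking such a response is below; the allowed value sets are an assumption inferred only from the values visible in this sample, so the real codebook likely contains more categories.

```python
import json

# Dimension vocabularies observed in the sample response above.
# ASSUMPTION: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "mixed", "virtue", "consequentialist"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"mixed", "indifference", "approval",
                "resignation", "fear", "outrage"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept if it is an object with an "id" and every coding
    dimension holds a value from the observed vocabulary.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries rather than failing the batch
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid


# Example with one valid and one invalid record (hypothetical IDs):
raw = (
    '[{"id":"ytc_example1","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"resignation"},'
    '{"id":"ytc_example2","responsibility":"aliens","reasoning":"unclear",'
    '"policy":"unclear","emotion":"fear"}]'
)
print(parse_coding_response(raw))  # only ytc_example1 survives
```

Dropping (rather than raising on) malformed records is a design choice here: LLM outputs occasionally drift from the schema, and salvaging the valid rows of a batch is usually preferable to discarding the whole response.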