Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "You need to adjust your moral compass. You owe us a huge apology for what you di…" (ID: ytc_UgyCl1-aZ…)
- "Perhaps AI safety isn’t about control, but about relational attunement. When sys…" (ID: ytc_UgzWcEYtu…)
- "A lot of people do care even about using AI in your free time. The resource wast…" (ID: ytc_UgwrzutGh…)
- "machine learning is in mathematical terms finding functions which accurately pre…" (ID: ytc_UgwUcGnjv…)
- "Good the new white people, now we can get rid of those regular stupid Trump supp…" (ID: ytc_Ugx6byX5c…)
- "@Lisha4724hey, did you get my reply about re-writting ai? It vanished. Any time…" (ID: ytr_Ugxe_hxY4…)
- "Every artistic movement has advanced through disruption and AI is the largest on…" (ID: ytc_UgwoI15t2…)
- "Why not use the AI to accelerate your work instead of waiting for it to replace …" (ID: ytc_Ugz2hPyxr…)
Comment
It all depends on how you define consciousness - surprisingly enough, there is no consensus on the definition. Besides, we'll never know how it is to _feel like_ an AI.
From the moral perspective it seems appropriate to stop any further development and switch off advanced models, just in case, to avoid mass production of beings capable of suffering.
youtube · AI Moral Status · 2025-06-26T18:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwDipyH41eYyQ21MxB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwyRUmCu7B6-hm4hUt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxBZ5ZvyovYxEt5iN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzq0OXe9v7Feq4SLad4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8ATkNLEOEPT4CPYp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzEG4dZPqzLGTY4pKN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxiqWHcWdwJgf76ODF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8xDVGB6DoCvwa8Il4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzcbTP8KafveSsm3wR4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHlVThxeyvEku4ot54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
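Since the model returns one JSON array per batch, each row can be checked against the codebook before the codes are stored. The sketch below is a minimal example of that validation step; the allowed values are inferred only from the codes visible in this output, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the actual codebook may include more categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row response, in the same shape as the output above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"fear"}]')
rows = validate_response(raw)
print(rows[0]["policy"])  # regulate
```

Rejecting the whole batch on a single bad code keeps the stored data clean; a gentler variant could instead map unknown values to "unclear" and log them for review.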