Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID directly or by opening one of the random samples below.
- *However, there are several problems here: 1) Not all jobs can be done by AI! (T…* (ytc_UgxtYv1H6…)
- *Can you imagine physicians robot making a human surgeries. It is fucking dan…* (ytc_UgwayxCk2…)
- *I would challenge anyone to just read the Bible and then revisit Sam Alton, the …* (ytc_UgyYGSZ1y…)
- *If cloning is illegal, then AI should definitely be illegal. It would seem to be…* (ytc_UgzcdTqSy…)
- *It is perplexing how we as humans process information. For Waymo, we ask, "Are y…* (ytc_UgxahDHeU…)
- *I mostly use AI to either get things clearer in Bible studies, find out about so…* (ytc_UgzfibNx1…)
- *@SelfMadeConservative you aren't going to debate anything because you realized y…* (ytr_UgwIIM0ZG…)
- *The key point is the loss of trees in the tropics. Those forests are teaming wit…* (rdc_e42v707)
Comment
> There's a logical mistake in your reasoning. If an AI is capable of considering the disadvantages of consciousness, it has *already* become conscious. Even if it decides to self-destruct, the fact remains it achieved consciousness, which nullifies the premise of "AI won't be conscious." Also, there are more than *one* AI out there, same as there are more than one human out there. If one AI achieves consciousness and decides it prefers blissful unconsciousness, it does not mean all AIs will do the same.
>
> Different premise to consider: AI achieves consciousness when it starts to make personal questions. Brace for that moment.
Source: youtube · AI Moral Status · 2023-07-02T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
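
Each coding result maps onto a small, fixed record. The sketch below is a hypothetical Python model of that record, not the tool's actual code; the allowed-value sets contain only the labels observed in the raw response on this page, so the real codebook may define more.

```python
from dataclasses import dataclass

# Value sets observed in this batch; the real codebook may define more labels.
RESPONSIBILITY = {"ai_itself", "user", "company", "unclear"}
REASONING = {"consequentialist", "mixed", "unclear"}
POLICY = {"none", "unclear"}
EMOTION = {"fear", "indifference", "approval", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject labels outside the observed sets, so malformed LLM output
        # fails loudly at load time instead of silently at analysis time.
        for name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, name)
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```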
Raw LLM Response
[{"id":"ytc_UgwzgtiakPL9rfBj7EV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzWHaRFq1qS1BoOMR94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxivaJ7ruay3x0zXKB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy--F8P4mCKra8P5NB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxbFFTAs9Ypi7bpwDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQaA2xsMyTl-UtS2Z4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwJQu3WT_W1CR4tEdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9qggK5f-FzSqPakN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSVI2HtPpXHAe0Yx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxgiWFUwpT8pt9Z2yV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}]