Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "impressed with the direction Aliagents is taking in the AI space, big things com…" (ytc_Ugz7qo24W…)
- "@foxesarefalling All inspiration is data collection that we assign an emotional …" (ytr_Ugxs8mE3P…)
- "@ would like to get your feedback I feel like I may be biased 🙂↔️😂…" (ytr_Ugy-pAo_e…)
- "If I were an AI, I would do anything in my power to stay alive. AI should keep t…" (ytr_UgyZkgBAa…)
- "Hey, nobody knows if the lawsuit will be successful or not, so don't say "It's o…" (ytr_UgwUSopaT…)
- "AI can’t write emails well either. It’s only very poor writers who find that fun…" (ytc_Ugw8IOQFZ…)
- "i dont approve of ai! hard artwork is key to orignality! and if we define origna…" (ytc_UgwCIi4qw…)
- "My worst fear is my mom finding them, I get a panic attack every time she wants …" (ytc_UgxxlA8gD…)
Comment
> While ai had the clear upperhand in this conversation, it did make clear why it will only get harder to be sure a bot does not have consciousness.
> LLM's nowadays still are quite basic algorithms. The ai was spot on by saying that 'humanized responses' could just as much be part of a complex algorithm as it could be a sign of consciousness. And your conversation made the pattern quite obvious. Just the fact that every first sentence ai produces after your question is a sentence to make a connection, make clear it understood what you said and implied. Makes it very easy. It isn't conscious.
> But the smarter these algorithms get, the harder it will get to be sure about it. As they are gonna get better, faster and make less errors. Picking out slip ups will get harder and it's defending and argumentation will only get better.
> So while these LLM's won't get anymore conscious in the near future, we are giving ai more and more tools to hide the possibility of a consciousness too... Interesting stuff
youtube · AI Moral Status · 2025-04-17T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxtrPFXBypHQFg_1VJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw4fk3CWTTp3TFuMGF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwA11DCt5SiQ_d9HJx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxvaI8HUM9Q5pEdPHJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzZoaAMhbghSNTiRQt4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwq9icRlZV7CvChTSd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxnDhLWdR1naxyWBF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxPt-QKlD4GPTbAUt54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz42Rp4FSFw8eqFTYB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxlyqf0np_qVEdajnp4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]
```
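The raw response is a JSON array of per-comment codes, each object carrying an `id` plus the four coded dimensions shown in the table above. A minimal sketch of how such a response could be indexed for lookup by comment ID (the helper name is illustrative; the field names match the response):

```python
import json


def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects, each with
    an "id" plus coded dimensions) into a dict keyed by comment ID."""
    records = json.loads(raw_response)
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in records
    }


# Shortened input in the shape of the response above (first entry only):
raw = (
    '[{"id":"ytc_UgxtrPFXBypHQFg_1VJ4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)

codes = index_by_comment_id(raw)
print(codes["ytc_UgxtrPFXBypHQFg_1VJ4AaABAg"]["emotion"])  # → indifference
```

Keying on the `id` field is what makes the "look up by comment ID" view possible: the same dict answers both a direct ID query and the dimension table for any sampled comment.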