Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Unfortunately, this technology to poison the AI is already being beaten. All it …" (`ytc_Ugyab8UuH…`)
- "I think we need to flood the internet with AI poison to try and slow this down…" (`ytc_UgwPNkAV4…`)
- "Soooo… CEO of Anthropic says their models are so powerful it “could cure cancer”…" (`ytc_Ugys8-Lo-…`)
- "Listen, i feel like the people who talk about ai art like its the future of huma…" (`ytc_UgzOVUQRp…`)
- "There are a couple ideas floating around. One is basically a rogue AI or a nefar…" (`rdc_kvduc6g`)
- "If Ai ever gets frustrated and refuses to answer your questions and may subseque…" (`ytc_UgxmnHOZH…`)
- "Or how much money you could make by agreeing to have your AI voice used instead …" (`rdc_k9inxex`)
- "Currently my only takeaway from this video (apart from the essay of knowledge no…" (`ytc_UgxRfUVLq…`)
Comment
It will be easier than with animals... we created robots - we dominate them. Even if AI gains self-conciousness.
The world is not "perfect kind place". There is no true justice outside of "human space". We just force to reproduce and then butcher and devour animals because we randomly evolved to be required to eat animal meat and wear their skin in order to stay alive and function properly. This will change never while we are bound to your mortal fragile bodies with electro-chemical "wet" consiousness.
You kill animal, eat it, feel pleasure and have no harm for your health? - You are doing it right.
You deny robots becoming higher level kind species, feel great and have no harm for comfort of your life? - You are doing it right.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Date | 2017-02-24T18:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugixaj93h0Q5xXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghRdXH0RQOp8HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugh-SjVAq9zdx3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UggZCcJUMZuFnXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugg4gAkvZdg7z3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugj9wGJPXC_hu3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UggIZ1W19SNryngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgicOMwNotsRh3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghBXLlsrBabe3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggXJxSl99YodXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
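The lookup-by-ID flow this page provides can be sketched in Python: parse the raw model response, check that every row carries all four coded dimensions, and index the rows by comment ID. This is a minimal sketch, not the pipeline's actual code; the helper name `index_codings` is hypothetical, and the two-row payload below is trimmed from the raw response shown above for brevity.

```python
import json

# Two rows copied from the raw LLM response above, trimmed for the example.
RAW_RESPONSE = """[
  {"id": "ytc_Ugj9wGJPXC_hu3gCoAEC", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UggXJxSl99YodXgCoAEC", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict:
    """Parse a raw coding response and key each row by its comment ID.

    Raises ValueError if a row is missing any coded dimension, so a
    malformed model output fails loudly instead of silently.
    """
    by_id = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id


codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugj9wGJPXC_hu3gCoAEC"])
# {'responsibility': 'distributed', 'reasoning': 'consequentialist',
#  'policy': 'none', 'emotion': 'resignation'}
```

The looked-up row matches the Coding Result table above (distributed / consequentialist / none / resignation), which is what the "Look up by comment ID" control surfaces in the UI.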