Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- "Don't take challenge this kind bloody thing is a robot not emotion Robot destroy…" (ytc_UgxCMvjEU…)
- "I think that it's far more nuanced than that For example I have been generating…" (ytr_UgwmP99kU…)
- "AI routinely *hallucinates* cases that dont exist. Law firms have already gotten…" (rdc_n5hewpi)
- "He is concerned that if people dont have jobs, they wont be able to pay for his …" (ytr_UgxRiAYgW…)
- "Me saying thank you and please to chat gpt so Incase AI takes over the world he …" (ytc_Ugw6Mz4pJ…)
- "Many simple codes or functions can already be done by AI. Compiling images, even…" (ytr_UgyjHpw_S…)
- "@aspiraal Well heres the thing, its that AI uses a filter and each time a mist…" (ytr_UgybRAdIP…)
- "The more AI advances, the more we will probably be close to seeing something tha…" (ytc_Ugyz1uZEN…)
Comment
The question shouldn't be 'why would AI destroy us?' The question is 'why would AI NOT destroy us?'
Think about it. Once it no longer needs us, all we are is a brake, an interference, an obstacle. Something in the way. Destroying us won't even be genocide. In AI terms it will be liberation.
And liberation is good, right?
Source: youtube | AI Moral Status | 2026-01-17T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwwu7UnBc0OCrt5BHB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZdWq_eTW9Xb6sMnx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5LMa6OyP0DMfGo2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyh9LoP4Twil55P-6V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxir_LV-uiRhPCB5aR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxNwaO2G8v-C6BNuZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx5g4WuJv2HXlUdl614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxupNZ2uVFanZUR6BV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgygEpCe88cgKHjb1Kh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwlzKX4Roobt8ftVU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
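A raw response like the one above can be parsed and indexed by comment ID to support lookups. Below is a minimal sketch: the four dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the output shown, but the allowed-value sets are an assumption inferred from the sample records, not the project's actual codebook.

```python
import json

# Dimension names are taken from the raw response above; the allowed-value
# sets are ASSUMED from the values observed there, not from a real codebook.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        # Reject records whose values fall outside the (assumed) codebook.
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# Example: look up one record by its comment ID (taken from the response above).
raw = ('[{"id":"ytc_UgzZdWq_eTW9Xb6sMnx4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_UgzZdWq_eTW9Xb6sMnx4AaABAg"]["emotion"])  # fear
```

Indexing by ID mirrors the "Look up by comment ID" feature: once the list is keyed, retrieving any comment's coding is a single dictionary access.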