Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Assuming you actually have those qualifications and aren't just posting bs, I'm …
ytr_UgxoCtXxV…
AI chose harm of over failure. No Shit. Like if you fell into a factory’s gears …
ytc_Ugy1e2kWe…
How is generative AI any different than an artist drawing based on experience an…
ytc_UgxnGr9fh…
We all do the same thing to make our drawings, we look at other people's creatio…
ytc_UgxQBk7pX…
@Bgh583 Oh hey, this is just ignorancy and just distasteful.
First, you clea…
ytr_UgyzyU4zn…
No advanced AI chips for them, then. Let them build their own. By the time they …
rdc_lubdor2
AI safety i couldnt laugh harder, thats why it makes up new organs, attacks a co…
ytc_UgwfymnJU…
Most of us will be superfluous if AI actually worked to a great degree. Why keep…
ytc_Ugxh6eOaM…
Comment
This one is actually by far the biggest risk to total human annihilation:
Climate change could kill a lot due to food shortages, certain regions becoming inhabitable, etc. but it doesn't really have the potential to kill all humans.
Similar story for nuclear war: a lot of death, suffering, certain areas uninhabitable, famines, but not all human.
Even with a very severe asteroid impact, humanity could persevere in special bunkers. Also, in the last couple years we got the technology to detect and intercept the globally destructive ones before it's too late.
Biological weapons might be very lethal, but again, small isolated groups of people could likely survive and carry on humanity.
A sudden gamma ray burst from a "nearby" black hole, or a rogue black hole passing through the solar system are the only things I can think of that could be potent and unavoidable enough to get rid of humanity altogether. Though luckily these two scenarios are extremely, cosmically unlikely to happen.
AI on the other hand: if it's superintelligent and for whatever reason considers us an obstacle enough to decide to wipe us out, we are cooked. Compared to a natural disaster that stops even when there are survivors, a thinking force wouldn't stop until the job is done, cutting off the potential future of trillions of people that could have lived among the stars.
youtube
AI Governance
2025-11-22T00:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMNf4Llv8jc","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMOFqJqmbLo","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMOMo8xT9M3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx5Ai4xn2qCXGTnegN4AaABAg.AMLucwwjrWlAMMGyJ_v2Zm","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwHYRPfGwNlHXWXURR4AaABAg.AMLroQvDxmtAMwuUW4pg0S","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzbgPnwynGf75XgKNp4AaABAg.AMLTSkFHfByAMNll8kfwOD","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwtiVwM7oQQfACTRJ94AaABAg.AMLPHd_BGZ5AMLQsFFwXtE","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugzda0CqwR2pwyg5z9J4AaABAg.AMLK_oVa3m7APoEGQJnoe3","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxruEjafI6CsLocy7Z4AaABAg.AMLJsptIyK_AMNqYBsO8LQ","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxruEjafI6CsLocy7Z4AaABAg.AMLJsptIyK_AMOcQgFtxbn","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
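The raw response above is a JSON array of per-comment codings keyed by comment ID, with one value for each of the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and looked up by comment ID — `index_by_id` and `EXPECTED_DIMENSIONS` are illustrative names, not part of the tool, and the excerpted rows are taken from the sample response above:

```python
import json

# Excerpt of a raw model response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytr_UgxoCtXxVGUMylnbUKp4AaABAg.AMMVeHhZ_CCAMNf4Llv8jc",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwHYRPfGwNlHXWXURR4AaABAg.AMLroQvDxmtAMwuUW4pg0S",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the result table above.
EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index rows by comment ID,
    skipping any row that is missing an expected dimension."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row[dim] for dim in EXPECTED_DIMENSIONS}
        for row in rows
        if EXPECTED_DIMENSIONS <= row.keys()
    }

codings = index_by_id(raw_response)
print(codings["ytr_UgwHYRPfGwNlHXWXURR4AaABAg.AMLroQvDxmtAMwuUW4pg0S"]["emotion"])
# prints: fear
```

Validating that every row carries all four dimensions before indexing guards against partially formed model output, which is a common failure mode when an LLM emits structured JSON.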