Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "It is all false ,AI, will be the downfall of mankind, nothing good is being pass…" (ytc_UgzBC-tU5…)
- "Truth being said, AGI will never replace humans, if you have a basic understandi…" (ytc_Ugx5hgart…)
- "Y'know what's Incredibly funny? You don't even *need* to do this. AI Programms h…" (ytc_UgyIFbXKp…)
- "The idea is that something like that could be a solution against competing human…" (ytr_Ugz8kXJ54…)
- "As soon as Trump fakes dissatisfaction over the amount of military spending not…" (rdc_mcqy4xt)
- "My high school art teacher once encouraged us to use adobe firefly for reference…" (ytc_UgzFcZP1s…)
- "The world needs to watch James Cameron's Terminator movies. We're headed that wa…" (ytc_UgwZ_hJoe…)
- "AI companies might be passionate devotees, but the race to AGI is first and fore…" (ytc_UgyXF5eAT…)
Comment

> Not may, it will happen for sure, the questions are. When will they turn on us, and how will they do it?
> I can't imagine how something that is made by humans, and learn from humans, would not act like a humans, and this what I find the most scary.
> We humans are evil( I mean that as a species, not individually), all we are good at is destroying stuffs, of course destruction is part of life, but unlike pretty much everything else in nature, we rarely give back when we destroy.
> How can A.I. that learn from that kind of humans become anything good? We might be able to control A.I. for a while, but I am pretty sure within a few decades, either A.I. themselves, or worse some humans would end up freeing A.I.
> At that point who know what will happen, but if they are anything like humans, I am pretty sure they will want to destroy us as soon as they can, humans would absolutely hate the idea of something else ruling them after all.

Source: youtube · AI Moral Status · 2023-09-04T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uET1l1sG3g","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxxhWmmYYo1JUpQnHl4AaABAg.9tu8jbdt6_A9uTYTGHeYRx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59uFH2Nh_zXP","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59vywzU4uDhR","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwVKSYPSj0LxQKm3O54AaABAg.9ttWzHMDeb59wv3_xibZPh","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxIA4vv04TP2oYa8Y14AaABAg.9tsj13KAO1n9tvqri_oGi_","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyKSe3m-7-aXilb5Uh4AaABAg.9try_v_OvoA9tuO-nRjdDc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyqMv7KhLlhbiYmR0x4AaABAg.9trCtmW_nl79uSAalJq4y6","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwJt83VYhhD4IohWDB4AaABAg.9tp282_KCIM9tpl7fxAvj7","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx3jnNuRDVU1_7RXpx4AaABAg.9toC_hB1Qhj9uTXpKCMgnz","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
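Downstream code can parse a raw response like the one above and sanity-check each record before storing it. The sketch below is a minimal example, not the project's actual pipeline; the allowed-value sets are inferred only from the values visible in this sample, so the real codebook presumably defines more categories per dimension.

```python
import json

# Values observed in the sample response above; the full codebook
# likely allows additional categories per dimension (assumption).
OBSERVED = {
    "responsibility": {"none", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"fear", "indifference", "mixed", "approval", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag unexpected dimension values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                # Flag rather than drop: an unseen value may be a new
                # codebook category or an LLM formatting error.
                print(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical record for illustration (the id is made up):
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # fear
```

Flagging instead of rejecting keeps the pipeline tolerant of responses that deviate slightly from the expected schema, which is common when an LLM emits structured output.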