Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
surely a super intelligent ai would be super rational by default. the most rational decision to make in any given scenario is the one that benefits the most conscious creatures the most, by their own subjective interpretation. the only concern is super ai becoming so exceedingly conscious that we look like ants by comparison such that it discounts our consciousness in making its decisions. but i came up with this flawless axiom, and any human can understand it, so humans at least appear to be beyond a critical level of consciousness such that i doubt the super ai would be able to justify treating us like ants. considering its able to be logically cornered by a simple argument like this... thoughts?
youtube · AI Governance · 2023-10-23T08:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwYSmj-QWFfCY1Vagp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDkY66R7ZahdrSX5t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyvaVyySY3bkB_45t14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5J-ExKffm5FofABh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeSDq2gl_wX07vekJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6gmeNZq9_MU1J3sN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwncOUEFjHol4gCghB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzv60J2KKiuV5Q0O014AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgypwtV20W8ttHleDfd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgylRf7yzamdUukXMk14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
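A minimal sketch of how a raw response like the one above could be indexed to produce the per-comment coding shown in the table. The dimension names (responsibility / reasoning / policy / emotion) come from the coding result above; the function name and any validation are assumptions, not part of the actual pipeline.

```python
import json

# Example raw LLM response: a JSON array of coded comments, as dumped above.
# Only one row is reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgypwtV20W8ttHleDfd4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and map each comment ID to its
    coded dimensions (everything in the row except the ID itself)."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_UgypwtV20W8ttHleDfd4AaABAg"]["emotion"])  # fear
```

Keeping the lookup keyed by the stable comment ID (rather than list position) means a partially malformed or reordered model response still resolves correctly for the comments it did code.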