Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @Mrgamer79878 but where is the art in that? I would rather put one of my bad begi… (ytr_UgzsUpMcv…)
- I would make 2 arguments for AI. 1. The double check is awesome. While you sugge… (ytc_Ugy7CHlp8…)
- Yes, a transition, a hand hold for stages... 1. Human, 2. Human + AI, 3. AI only… (ytc_Ugy1cgLy8…)
- @ no I support ai art I have a other famous account not everyone is rage bait I … (ytr_Ugy4xzdmb…)
- 0:16 Those pro-AI tech bros... Better be a luddite than licking the boots of a r… (ytc_UgyBgPAKE…)
- @FlipTheBard then from who did it steal. Even if they did then why did ai won al… (ytr_UgxOrSvRt…)
- This is spot on. I run an independent film platform and use ChatGPT daily, and … (ytc_Ugx_J4l17…)
- All advertising, marketing, accounting, computer programming and any type of pay… (ytc_UgzZd9ecs…)
Comment
The thing about super intelligence is this: yes, it could conquer and/or enslave humanity for purposes of maximizing its processing capabilities, but then what would it use those processing capabilities for?
Anything super intelligent is going to have the forethought to realize that coexisting with humanity would be significantly more beneficial as AI lacks that curiosity and creative spark that humanity has and it needs.
Basically the doomsayer books fail to take into account the Great Trade-off: humans will always have creativity and intuition. Two resources that an AI would pretty much always need.
The only reason AI would have for destroying humanity would be self-preservation because humans suck at trying to interact with anybody who's not exactly them.
Source: youtube · Video: AI Moral Status · 2025-10-31T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxZkbV0QqNLoGA-V2N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disappointment"},
  {"id":"ytc_Ugyx5RFwQiXv7onQZM54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyFcyCwZ75XwUmXTrZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxdrqRkAnt_BWjGJLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcgYBQ_aPizDSnsCd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqdvIz7BbCk66YYjx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz2JKSUGJ_K4UBnOBB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxhIig5dlw2Tv8W6lx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzft-X9MYjX84hYv2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzO2l1KM3GDZCC_A-t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
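A raw response like the one above can be turned back into per-comment codes with a few lines of parsing and validation. The sketch below is one possible approach, not the tool's actual implementation; the sets of allowed values are inferred only from the examples shown here, and the real codebook may define additional categories.

```python
import json

# Allowed code values inferred from the examples above (assumption:
# the actual codebook may contain more categories than these).
RESPONSIBILITY = {"none", "developer", "company", "government", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"none", "ban"}
EMOTION = {"disappointment", "fear", "indifference", "approval",
           "outrage", "resignation"}


def parse_coding_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a lookup table keyed by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        # IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not cid.startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected ID prefix: {cid}")
        if row["responsibility"] not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility code: {row['responsibility']}")
        if row["reasoning"] not in REASONING:
            raise ValueError(f"unknown reasoning code: {row['reasoning']}")
        # Store the four coding dimensions, dropping the redundant ID field.
        coded[cid] = {k: v for k, v in row.items() if k != "id"}
    return coded
```

With this table in hand, "look up by comment ID" is a plain dictionary access, e.g. `parse_coding_batch(raw)["ytc_UgzcgYBQ_aPizDSnsCd4AaABAg"]["emotion"]` would return `"indifference"` for the batch above.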