Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Hyper realistic would be something that look the real thing. The problem is most…" (ytc_UgzbH5D7H…)
- "Heck in a few years humans won't even read each others comments. AI will create …" (ytc_UgwFSMbAH…)
- "Honestly, I wouldn’t hate AI art IF it didn’t steal other people’s artwork, like…" (ytc_Ugzk6O-g9…)
- "This is overwhelmingly scary. I have read a lot about the future of AI and rob…" (ytc_Ugya-JK3U…)
- "@aldin20i think architecture is a bit more complex with ai just because its les…" (ytr_Ugy2mD3o3…)
- "Literally let's see how long it will it take before this AI wants to kill itself…" (ytc_UgwIjs5H9…)
- "Not possible. While OpenAI is explicitly doing criminaly illegal things. You u…" (ytr_UgyLRYMty…)
- "I like Ai art and Ai music but anyone that clams that they were the ones that ma…" (ytc_Ugx4Vv9oF…)
Comment
What does general AI want, what would be its goal and motivation. I understand why it might see us as a potential threat or pest, but what exactly would it try to achieve past pacifying us. Would it be something relatable, like just to survive and prolifirate or something completely alien to us? And why is every scenario discussed always doom for humanity. Neanderthals did not reach the complexity or mastery that humanity did before going extinct, but they are still sort of here, inside the DNA of most of us and whatever skills they taught our ancestors. I just hope that if humans go out, we can work well enough with AI that humanity continues on with our creation.
Source: youtube | Topic: AI Governance | Posted: 2023-07-09T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbEnYRITSPEzzfNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_7ScVMwd1x5qp59d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxjOpITEQE4VwlOsQN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHY-799Ce6_s10wkl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOG9xbtBYDm17OOul4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-hTWi8px3of0Ku3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJI-0O2P_As1352NJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQH_QjeTXhP8bGH_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5409cgTfOCCBcEjh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
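The raw response is a JSON array of per-comment codings, keyed by comment ID, with one value per dimension. A minimal sketch of how such a response could be parsed and validated in Python is below; the allowed value sets are inferred only from the sample output shown here (an assumption, not an exhaustive schema), and `parse_response` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above -- an assumption, not a complete codebook.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "company", "none"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "resignation", "mixed", "approval", "outrage"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting
    any row whose dimension value falls outside the expected sets."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in DIMENSIONS}
    return codings

# Usage: look up one coded comment by its ID.
raw = ('[{"id":"ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
coded = parse_response(raw)
print(coded["ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed value set at parse time is what lets a coding UI like this trust the model output enough to render it directly into the results table.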