Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "i once had chatgpt make a synthesizer/orchestral avant-garde piece, and it sent …" (ytc_UgyAG6inZ…)
- "Ex falso quod libet: anything can follow from a falsehood. Simply put, A.I. prob…" (ytc_UgzhnH7Pl…)
- "It's got some terrible rates of accuracy. Look up ai hallucination and read abou…" (ytr_UgzfY4XfJ…)
- "Not entirely, my company hasn't hired anyone that's Gen-Z in years I assume, eve…" (rdc_oi15ac6)
- "wow, just fuckn wow. i knew AI was bad but this is next level scary…" (ytc_UgzY_gwjl…)
- "So happy for a new depression turtle video! I feel like AI is a perfect example …" (ytc_UgyxyIyds…)
- "well first of all it should never have full authority and autonomy and secondly …" (ytc_UgyxtSfMh…)
- "In five years you'll be arrested for a hate crime if you dare say an A.I. woman …" (ytc_Ugy0-O1Xn…)
Comment
@1:16:30 One of the issues is WE'RE NOT superintelligent. How can you say "we can't make it"? We don't know what it is, what it takes to be superintelligent. However we're doing our best (well not we, the companies) to make it. Once we switch it on, we have no idea what it's going to do. It's different from the "normal" LLM - we know how conversations go. We know how problem solving works (kinda). We don't know what something smarter than everyone (together) is going to think or do. Have you played chess against an engine? It overwhelms you, it crushes the best players. Is that what a superintelligence going to do in general terms?
Platform: youtube | Video: AI Moral Status | Timestamp: 2025-10-30T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
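Each coded row draws its values from a closed set per dimension. A minimal validation sketch, assuming the vocabularies visible in this batch (the full codebook may define additional categories):

```python
# Allowed values per dimension, inferred only from the rows shown in this
# batch; the actual codebook may be larger (assumption).
CODEBOOK = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear"},
}

def validate(row):
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if row.get(dim) not in allowed]

row = {"responsibility": "company", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(validate(row))  # [] -> every dimension is within the codebook
```

A row coded with an out-of-vocabulary value (e.g. a model hallucinating a new category) would surface here as a non-empty list.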
Raw LLM Response
```json
[
  {"id":"ytc_Ugwtxs-CncYNop_0tsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1yXpN6_mw1Mbo2jJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyB7ndzWaq0zAswosp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyExt2nRhtNP0DqbJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxA2eJgnKVc_b_B4TZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyUwm8CoQz08K_rFqV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgznTGQfXRmC1stpMRR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyMf4BIlZYdS76nVbt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzmxwLGLl4MRC3aboN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4EwcGESnFAEd1Obp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
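A raw batch response like the one above can be indexed by comment ID for the "look up by comment ID" workflow. A minimal sketch, assuming the response is available as a JSON string (the variable name `raw_response` is illustrative; the tool's actual storage format is not specified here):

```python
import json

# Two rows copied verbatim from the batch above, standing in for the
# full raw model output (assumption: the response is stored as JSON text).
raw_response = """
[
  {"id": "ytc_Ugwtxs-CncYNop_0tsJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgznTGQfXRmC1stpMRR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

rows = json.loads(raw_response)
by_id = {row["id"]: row for row in rows}  # index the batch by comment ID

coding = by_id["ytc_UgznTGQfXRmC1stpMRR4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

Building the dictionary once makes each subsequent lookup O(1), which matters when a single batch response covers many coded comments.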