Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by comment ID.
Random samples:

- "Just pull the plug or get some common sense, even better fine any AI content out…" (ytc_UgwECWG6U…)
- "Ai is trained off the backs of professional artists works. So it’s going to look…" (ytr_UgzULckuV…)
- "Food for thought…..we have to take desire out of the equation for the Ai….the st…" (ytc_Ugyj9M4wI…)
- "AI just needs to grow and get bigger and better! No regulation nonsense! Grow ba…" (rdc_jj974cx)
- "Good evening Sir, please between Open art. AI, and this flux which is better to …" (ytc_UgzJa7bIo…)
- "And the benefits are created by stealing in the first place, if they need ai for…" (ytr_UgyMveNtO…)
- "AGI has no value. We don't need it and we shouldn't pursue it. If you want somet…" (ytc_Ugw7wrgt_…)
- "and if theyve then got a lot more time on their hands, maybe they will trim thei…" (ytr_UgxOz0yGN…)
Comment
Watching this guy talk is like watching someone who wrote a fantasy novel and then embraced it as their Bible. The example at the start is an excellent one: did the AI realise it was a trick question and make a joke? Or does the raw material the AI was trained on represent the fact that people in general will be a bit cagey when asked about religion in Israel and make lots of jokes about it? What if you asked it to chat about movie stars and then you bring up Barbara Streisand - would you be surprised at a remark like "You asked me to talk about Barbara Streisand, but why don't we talk about pink elephants instead?" Would it mean the AI knows about the Barbara Streisand effect, and made a joke - or is it just the availability of jokes when talking about that artist that would prompt such a phrase? In the case of a person, the former would make some sense (although you'd consider it a weak / easy joke), but for modern AIs the latter makes more sense - simply by virtue of their construction and exactly what it is they do.
Platform: youtube · Video: AI Moral Status · Posted: 2022-08-12T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxgAZbjeUZAwsufEIF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx8GzQXVT_SO7rVrJx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyP0Jfah-3X_akV6Mh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxuB1Awn5zoh3tFMR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxDv4J5Q1b9phLIB094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
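The lookup-by-comment-ID step described above can be sketched in a few lines of Python, assuming the raw LLM response is a JSON array of per-comment codings like the one shown here (the variable and function names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as in the
# panel above. Only two entries are reproduced here for brevity.
raw_response = '''[
  {"id":"ytc_UgzxuB1Awn5zoh3tFMR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxDv4J5Q1b9phLIB094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]'''

# Index the codings by comment ID so any coded comment's exact
# model output can be looked up in O(1).
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxDv4J5Q1b9phLIB094AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # developer deontological
```

Indexing once into a dict keyed on `id` is what makes the "look up by comment ID" inspection cheap, even when a batch response contains many codings.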