# Raw LLM Responses

Inspect the exact model output behind any coded comment, looked up by its comment ID.
## Random samples

- "Disney's lawsuit is a terrible thing for artists. They're gonna be using genAI a…" (ytc_Ugz7A0Wzt…)
- "It's insane so many people talking about AI as if its just another technology an…" (ytc_UgyNZtLw9…)
- "All it needs is an airsoft turret attachment for the back then it can put that …" (ytc_UgzsBLCXM…)
- "@Mymyrald it's just so many people say bad things when I say I show them the pic…" (ytr_UgzCzUgNv…)
- "The claim that AI is conscious is identical to the claim that consciousness can …" (ytc_UgwCWxtGG…)
- "Idk, the wall banana does suck. Arguably I do believe its worse than AI. Someone…" (ytc_UgxmgfCVw…)
- "I've seen some American Express ads that are clearly using AI people and it defi…" (ytc_Ugx8sPrCE…)
- "You are making a mistake. This is not like all the automation and mechanization …" (rdc_n81taly)
## Comment
Why argue from a place of emotion rather than logic? Either AI does or it doesn't. We kinda don't have much info on how AI will do in the future.
Why is "we don't know" not good enough? For all we know, AI might get good enough to make "better" art than the best humans. We're currently developing tech that allows us read minds and AI strong enough to parse your actual thoughts. Future AI art could literally be tailored to individual taste so well, no artist could beat it in elliciting emotions.
Or... We could run into computational hurdles or even outlawing by governments.
At the end of the day, why is there so much emotion and irrational argumentation for AI art? Why not argue about the inputs of AI art? AI art is doing the same thing human brains do, which is copying someone else's art and then combining it with other art to make something new. The problem seems to me like AI art is self extinguishing since it needs inputs but doesn't pay for said inputs so the moment AI art gets good, it stops more inputs killing it dead in its tracks.
Platform: youtube
Posted: 2024-10-16T02:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
  {"id": "ytc_UgxPaxosVGndMJKYYjd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy-qersYp2yHrbwoLF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwYpmwOJgYRcCOdJ4d4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxjCxaF1Q-Fx-Ql8AB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwktXkl07M75E2CU9F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxd9rYG9o4eMccLhHV4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxdWXN1b6ovInwZnWJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyvDnOPJXPLOC5oPz94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyCatYGfjIJ5tRaTK94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwirFD3jgsZQbZdGNJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
```
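The lookup described above — finding the coding for a given comment ID inside a raw batch response — can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; the function name `index_by_id` is hypothetical, and only two entries from the response are reproduced to keep it short.

```python
import json

# Two entries copied from the raw model output shown above
# (truncated for brevity; the real response has ten rows).
raw_response = '''
[
  {"id": "ytc_UgxPaxosVGndMJKYYjd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwirFD3jgsZQbZdGNJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw batch response and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgxPaxosVGndMJKYYjd4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # consequentialist indifference
```

Because each row carries its own `id`, a dict keyed on that field gives constant-time lookup regardless of batch size.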