Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_UgxyKs45y…: "Based on information released to the public, we are at the “Reasoning AI” stage.…"
- ytc_UgwNVqFCf…: "Possible hot take: Using AI to create art/images is completely fine. If it's jus…"
- ytc_Ugw0Q8OWf…: "I think before we see autonomous trucks we will have trucks with "autopilots" fo…"
- ytc_Ugx_uJK-G…: "Videos like this are just cope. the future will take us in directions far remove…"
- rdc_l4dnv4v: "So how long before OpenAI adds a link to their onlyfans page in every response? …"
- ytc_UgzQkayBu…: "This is bullshit I've seen this original fight and it was another man that knoc…"
- ytr_UgzF4oQ9E…: "@zettovii1367 It's easy to denounce because it is used mainly for what's popular…"
- ytc_Ugxs4yioF…: "What a lame excuse. If we don't do it China will beat us. China is a communist c…"
Comment
It’s too late to go back with AI, it’s already been released and utilized by the public. Even if laws are released, there is no ways of enforcing them— think of it as the same as pirating music and movies, it’s such a mass spread issue that punishing the unethical use would be nearly impossible.
But let’s say there are laws and regulations, how are we going to stop people discretely using it on the dark web or hidden websites? Sure, it’d be more of a challenge to make it accessible... but if someone is so mentally deranged to make deep fakes of a nonconsensual person(s), then what’s stopping them from using it anonymously.
It’s just a matter of adapting to the way things are now and doing our best to minimize the damage AI poses on ethical use. But I can’t imagine how we could possibly stop something of this magnitude.
Source: youtube, 2023-07-19T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxMmedBY7huL3YNL_N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOXhvbjXlZcLut3yJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzxxp54I_EVDv55TvN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxenvIyoqdQuMmXxE94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzvKngei2bC-pf51B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz52YPEEUU7xo0KZWh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDJ-mAAC8T6topkEN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxpSEyaZsqSK9CiwJN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx32PUv_7ycuPuqvSd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxcPVvGE-DdA_HKcWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
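A response like the one above can be checked programmatically before the codes are stored. Below is a minimal sketch of such a validator; the per-dimension vocabularies are inferred from the values visible in this sample (they are not an official schema), and the `validate_coding` helper and `VOCAB` name are illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption, not an official schema -- extend as the codebook requires).
VOCAB = {
    "responsibility": {"none", "company", "government", "user", "distributed", "ai_itself"},
    "reasoning": {"mixed", "deontological", "unclear", "consequentialist"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval", "resignation"},
}

def validate_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and return records keyed by comment ID.

    Raises ValueError on a missing ID, a missing dimension, or an
    out-of-vocabulary value; json.loads raises on malformed JSON.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in VOCAB}
    return coded

# Hypothetical one-record response in the same shape as the sample above.
sample = ('[{"id":"ytc_X","responsibility":"none","reasoning":"mixed",'
          '"policy":"none","emotion":"indifference"}]')
print(validate_coding(sample)["ytc_X"]["emotion"])  # -> indifference
```

Keying the result by comment ID also makes the "Look up by comment ID" style of inspection a plain dictionary access.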