Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below.
- `ytc_Ugzhw7HHc…` — "Hi first time viewer here, though if give my 2 cents on the matter Ai in the c…"
- `ytc_UgwAYMj0e…` — "I'm someone with a supportive and loving family.... But also a family that doesn…"
- `rdc_odi8ahe` — ">Yeah any public company or billionaire owned company… which NYT is. >If …"
- `ytc_UgyKo3SEh…` — "12:23 there are at least two ways that \"could\" lead to a possible verification p…"
- `ytc_UgxGeBbvY…` — "A.I and the rest of you are all identical....if its not fake, its an outright li…"
- `ytc_UgyjkxGp9…` — "11:50 A needle hurts irrespective of a human or an AI who is drawing blood/givin…"
- `ytr_Ugx0RnKau…` — "Additionally, if we decided that AI agents actually can be said to \"create\" the …"
- `ytc_UgxYa6-_6…` — "Here’s the thing, I think this is extremely important for our country moving for…"
Comment (shown verbatim as submitted)

> The real problem is the automation. But we don't really have any laws to protect against automation. People have been losing their jobs to automation for over a century. And it just gets worse to more sophisticated the technology becomes.
>
> Traditionally the argument has been we're making work safer because people no longer need to do dangerous or monotonous jobs. But now that art is being threatened by AI. People are realising nobody's job is safe. And we're really only scratching the surface of what large language models can do. We haven't even touched on what general AI will be able to do.
>
> People should be concerned because governments don't have the legal infrastructure to safeguard jobs.

Source: youtube · Video: AI Responsibility · Posted: 2024-06-03T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeGX5MaxSW3R_b9454AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZyxPRRGeAOIyMq3N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyF_wwFDgoaDzyDJgN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz_dwQPySUXeno5_I94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzNwWNLaAt3lGabXsd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyWrYsWuQ7KEcAYHAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwh2UMC7soTaYtrWI94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwX4ccdc_wu7ZG29h14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxP2cef6ITk6cGOpR54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyJWI5WYfUQcfMzamB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
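The lookup-by-ID flow above can be sketched in a few lines: parse the batch response as JSON, index the records by their `id` field, and fetch the record for one comment. This is a minimal illustration assuming the response is a well-formed JSON array of flat records; `lookup_coding` and `raw_response` are hypothetical names, not part of the tool, and only two records are reproduced here for brevity.

```python
import json

# Two records copied from the raw batch response shown above.
raw_response = '''
[
  {"id":"ytc_Ugwh2UMC7soTaYtrWI94AaABAg","responsibility":"government",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwX4ccdc_wu7ZG29h14AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

def lookup_coding(raw, comment_id):
    """Parse a batch coding response and return the record for one comment ID."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)  # None if the ID was not coded in this batch

coding = lookup_coding(raw_response, "ytc_Ugwh2UMC7soTaYtrWI94AaABAg")
print(coding["responsibility"], coding["emotion"])  # government fear
```

The record retrieved this way is exactly what the "Coding Result" table renders, dimension by dimension.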