Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
What the AI leaders say publically and what they say and do privately are two completely different things. Horowitz will publicly say that AI will make everyone more productive, etc, when privately the goal of AGI is to replace human cognitive labor with things that don't need unions, breaks, health insurance, or need to be treated fairly. That is the goal of OpenAI as well. And the argument that technology always creates new jobs in the realm of AI is bs, and they're smart enough to know that. AI will take the jobs that AI creates, because the commodity is intelligence.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Responsibility | 2026-04-22T23:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyVpGv9sBveMGotwnx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxnCJmeEPLlJmMfj-94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxFixHL-pXyEP4kZo94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwDwh9l2qXM18tKlG54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwyl10Buc2VMavn7Lt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxNa4VfTvnfo-tt3iJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy5TTAyKcDFxZKHvvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxQFMbw7mT2dtX0Nm94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwO0VrKOIcQ3L1wULh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyG4IuOQbBIjLapUB94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
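Parsing a raw response like the one above into per-comment records can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; the allowed label sets in `SCHEMA` are inferred only from the values visible in this sample output, and the real coding scheme may include other labels.

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above (an assumption -- the real codebook may define more labels).
SCHEMA = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) and
    index the records by comment ID, rejecting any record that uses a
    label outside the known schema."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
        by_id[rec["id"]] = rec
    return by_id
```

Indexing by ID mirrors the dashboard's lookup: given a comment ID such as `ytc_UgxQFMbw7mT2dtX0Nm94AaABAg`, the coded dimensions for that comment are a single dictionary access.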