Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There is a lot of holes in these arguments. Two points: on a relative scale, most humans are doing interpolation too and AI have a much bigger pool. If you work enough, you know humans make mistakes, they don’t learn from their problems, they push back feedbacks and etc. the effectiveness has to be measure comparatively. Second is more of scale, there r tasks that if u need to do reading comprehension, you need to hire lots of ppl, train them, walk their work and etc. much more efficient with AI and even with hallucination, it could be a more cost effective method. It is not like human are way better. Humans sometimes are worse
youtube · 2026-01-24T17:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzlEX1w3yvGnqlbSit4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzsYn-baN-vNCzxjnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBfZJT-UmJKWBx9LB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_v7KHAhkY6dTBN3Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwnZy7whAUA_tuatlF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_U2D2QMa3l3cN1Jt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzObe7Q9Tlpnzl9rm54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQ-p3ZCUWJ_4BcOoZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyMHRovwGoEmGmiYCJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxISdJQfke3IZvow2R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
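The raw response is a JSON array with one record per comment, keyed by comment ID. A minimal Python sketch of how such a batch can be parsed and a single comment's codes looked up (assuming the model output is strict JSON; `index_by_id` is a hypothetical helper, not part of the tool, and the two records below are copied from the response above):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
raw_response = '''[
{"id":"ytc_UgzQ-p3ZCUWJ_4BcOoZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyMHRovwGoEmGmiYCJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
rec = codes["ytc_UgzQ-p3ZCUWJ_4BcOoZ4AaABAg"]
print(rec["policy"])   # -> regulate
```

In practice the model sometimes wraps the array in extra text, so a production lookup would want error handling around `json.loads` rather than this bare call.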