# Raw LLM Responses

Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
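A minimal sketch of what the ID lookup amounts to, assuming the coded responses are stored as JSON arrays of records like the raw batch shown at the bottom of this page; the file path and function name are illustrative, not the tool's actual API.

```python
import json

# Minimal sketch: index one batch of coded records by comment ID so a
# single comment's coding can be fetched directly. The path is hypothetical.
def load_coded_comments(path: str) -> dict[str, dict]:
    with open(path) as f:
        records = json.load(f)          # a JSON array of coded records
    return {rec["id"]: rec for rec in records}

coded = load_coded_comments("raw_llm_responses.json")  # hypothetical file
print(coded.get("ytc_Ugx4C1CJcIAie6JyfkF4AaABAg"))     # one record, or None
```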
## Random samples
- `ytc_Ugwm23llJ…`: "If I were to play devils advocate a little, there is an argument that some peopl…"
- `rdc_deumern`: "bastards, ffs. I was thinking the other day. In a couple hundred years, will Iv…"
- `ytc_UgxBqqn-m…`: "I guess it begs the question… who gives a shit? You talk about stuff your entire…"
- `ytr_UgzHmGKuq…`: "would choose forgiveness and restoration over punishment wherever possible. Non-…"
- `ytc_UgzNhhpTs…`: "You would think that when creating AI, the three laws of robotics is literally t…"
- `ytc_UgzM5Tcui…`: "If AI ever does appear to be killing people, i would guess that its actually peo…"
- `ytc_UgwktzPMq…`: "I'm here to end illusions, curses, spells, black magic, enchantments. Everything…"
- `ytc_Ugz-ebZ8W…`: "It all makes me really sad because A.I can be really good when it is just a mere…"
## Comment
Mark this down: by 2030, robots still won’t be able to handle electrical or plumbing work as a full service. 99% of those jobs won’t be gone. Even AI cars aren’t great yet, and replicating the dexterity of human fingers is incredibly hard.
I’m not disputing everything this guy says, but if he’s as smart as people say, he should know this. I’m not saying it won’t eventually happen, but it’s a long way off. Maybe robots will assist in 5 years. We need to hold these predictions accountable: “you said this then, and now we look back and you were wrong, so why is this different?” Right now, he’s sounding more like an extremist than a realist.
youtube · AI Governance · 2025-09-16T20:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
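The table above is a relabeled view of a single record from the raw batch below. Here is a sketch of that mapping, assuming the record has already been looked up by its comment ID; note that `Coded at` does not appear in the model output, so it is assumed here to be pipeline metadata supplied separately.

```python
# Hypothetical rendering of the Coding Result table from one parsed record.
# "Coded at" is not part of the LLM output; it is assumed to come from the
# coding pipeline's own bookkeeping.
LABELS = {
    "responsibility": "Responsibility",
    "reasoning": "Reasoning",
    "policy": "Policy",
    "emotion": "Emotion",
}

def render_coding_result(record: dict, coded_at: str) -> str:
    rows = ["| Dimension | Value |", "|---|---|"]
    rows += [f"| {label} | {record[key]} |" for key, label in LABELS.items()]
    rows.append(f"| Coded at | {coded_at} |")
    return "\n".join(rows)
```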
## Raw LLM Response
[
{"id":"ytc_Ugx4C1CJcIAie6JyfkF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxFPkTNWxN2sni2xt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVqNUPTvA-_RcUnGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzpUmXLMT80PcMpyqB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxf04wjZ81zKubyhOB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyKZNOM_Ew8juTGZIZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx4LYpJnAzH_W9MbV14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjqT9iHCEy_WHGXP14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxNnBlggs2UvjcqjB54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx2LD3eZO4mS1P1hLh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
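For reference, a minimal sketch of how a batch like the one above can be sanity-checked before acceptance; the response is assumed to parse as a plain JSON array, and the allowed-value sets are inferred from this single batch, so the project's actual codebook may include more categories.

```python
import json

# Category values observed in this batch only; the real codebook may be larger.
OBSERVED = {
    "responsibility": {"none", "distributed", "company", "ai_itself"},
    "reasoning": {"consequentialist", "contractualist", "deontological",
                  "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def check_batch(raw: str) -> list[str]:
    """Return a description of every out-of-vocabulary value in one batch."""
    problems = []
    for rec in json.loads(raw):
        for dim, allowed in OBSERVED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec.get('id', '?')}: unexpected {dim}={value!r}")
    return problems
```

Run against the batch above, this returns an empty list; a misspelled category name, a common failure mode when an LLM is used as the coder, would surface immediately.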