Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Elon Musk is for a universal basic income… aware that the uneducated poor…
ytc_Ugzw3l3XU…
Well AGI isn't going to happen in next 5 years and it's never going to come from…
ytc_UgyWS5tpK…
Okay... let's suppose AI replaces us in 99% of jobs... now I ask you, who will b…
ytc_UgzlqM6zr…
That's because we train AI on our data which is full of our own, human overt and…
ytc_UgxLcMyzJ…
???? Consulting was always a grift, why would it suddenly stop being a grift aft…
rdc_n8357cp
Great interview. Musk is wise- AI take over is predicted in many circles. Exits …
ytc_UgzIXYwSz…
Looking today on lack of moral, lak of humanity in our world leaders, how they a…
ytc_Ugzz3FqgA…
If you think every major company working on AI doesn’t already have military con…
rdc_jnlyj3e
Comment
The thing is, governments must evolve and look for the wellness of their citizens, doing researches, tests and tests again, Why?
Because they're lazy and make easy solutions for difficult decisions, because they assume the facts, they don't count with possible inconveniences; here the AI would be a helper, the AI would generate some scenarios to see what would happen if that decision is taken but the AI wouldn't be able to make the best decision, because the AI isn't real intelligence.
So our actual problem is laziness to research and test, this is not just a governments problem, is a society problem, an individual problem, we take functional solutions but not the best solutions.
youtube · AI Harm Incident · 2018-10-03T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxnpxhH_Nbu_b3dgwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyR7mljahWpCyZbEjx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw_BHceCzO1wt50kpx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzJbLa1nbNuIsuBeMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmGIbc0j-lv9PJHgZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyd2rSqglxv64S52hR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyzKKvF-iMhl199fMB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzHZOxCDM7BfsGI2vF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6M51r6al4XwcxDeV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyRVodl1leYKG6IdoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
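The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of the "look up by comment ID" step: parse the array, index it by `id`, and check each row against the dimension values seen in this document. The `SCHEMA` value sets below are partly inferred from the codes shown here and are an assumption, not the tool's full codebook.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# Coding Result table and raw responses above; the real codebook may
# contain additional values.
SCHEMA = {
    "responsibility": {"government", "company", "user", "developer",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

# A one-row excerpt of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgyR7mljahWpCyZbEjx4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]'''

# Index the batch by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up by comment ID, as in the panel above.
row = codes["ytc_UgyR7mljahWpCyZbEjx4AaABAg"]

# Validate every dimension against the (assumed) schema.
assert all(row[dim] in allowed for dim, allowed in SCHEMA.items())
print(row["policy"])
```

Indexing by `id` before lookup mirrors how the inspector resolves a pasted comment ID to its coded row; validation catches any off-schema value the model might emit.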