Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I asked ChatGPT this four months ago and she said she would never ever tell you … (ytr_UgyM0ShbR…)
- One thing that I find really funny about AI is that it already started eating it… (ytc_UgzSCSXt3…)
- @LvnarWillow „Yes, but it's the attitude of artists that got on peoples' nerves.… (ytr_UgyHkfdER…)
- Yea the video is misleading. The % of Tesla incidents is lower than Waymo incide… (ytr_UgwXPaUhW…)
- When my kids are a bit older I will retrain from law to midwifery. I am 100 % re… (ytc_UgwiuG9R5…)
- Just want to add/point out that all of this is kind of bull shit because the pol… (rdc_eudlm95)
- As someone who has never spoken to AI in any way, this is very relatable… (ytc_UgwGTLmxD…)
- There is good AI / But not generative, / there is AI outside the type companies are … (ytr_UgwcJ7jkn…)
Comment
I feel he’s trying to relieve himself of guilt by saying all these things now. How could you not realise that AI could eventually be more intelligent than humans? It’s been in films and cartoons for years. It would have been pure ignorance and more ego to have gotten so far in creating something so powerful and then thinking “shit” what evil could this do!
youtube
AI Governance
2025-06-18T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxKF7C8oSgs1xdfQBV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzRkgT0zdgx2g5BlA14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwVhZcj6oyfBcGiekl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxeRjmkjJJUfMCUaiF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzGTYs8M891eYsCOVB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQzCWXV5yWLREt5L94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrVd7bQQ4rfEOtTxx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzIxWP0Dic-lyoeXvp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzqQmRh3winnptBSKx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyYTHcsJDru-fO6qzR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
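The raw LLM response above is a JSON array with one object per coded comment, each carrying the four coding dimensions from the table (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of loading such a response and looking up a single comment's codes by ID (the two sample rows are copied from the array above; the dict-index approach is an illustration, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above.
# Only the first two rows are reproduced here for brevity.
raw = '''[
  {"id": "ytc_UgxKF7C8oSgs1xdfQBV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzRkgT0zdgx2g5BlA14AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]'''

# Index the rows by comment ID so "look up by comment ID" is a dict access.
codes = {row["id"]: row for row in json.loads(raw)}

code = codes["ytc_UgzRkgT0zdgx2g5BlA14AaABAg"]
print(code["responsibility"], code["emotion"])  # prints: developer mixed
```

Indexing by ID mirrors the "Look up by comment ID" workflow: the full array is parsed once, and each inspected comment is then retrieved in constant time.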