Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I just hope the cryogenically frozen (morons) will be ok in the event of an ener… — `ytr_Ugwmrb9Qi…`
- A super intelligent AI will dominate humans if we fail to align it to our goals … — `ytr_UgxqiQiFz…`
- Hey there! That's awesome to hear you're starting your YouTube channel and focus… — `ytr_UgykxcFdU…`
- The thing is AI is like an atomic bomb, everyone wants to use it but no one want… — `ytc_UgwKs4kmG…`
- Did u also know that because AI data centers are responsible for higher usage of… — `ytc_UgzmkiOGg…`
- The danger of artificial intelligence is immediate without any regulation… — `ytc_UgwS4W5AM…`
- Can you look at the first one? She has a crack in your head so it’s probably a r… — `ytc_UgyJDwp__…`
- Perhaps AI would see the 1% for what they are and eliminate them instead of all … — `ytc_UgyUN9K7-…`
Comment

> He’s absolutely always acknowledged that AI could obviously cause problems in society. As always CNN is fear mongering as if he’s had a change of heart. Sam Altman now, and has always maintained that AI will probably be the most positive single force ever to impact humanity but that it could potentially be very dangerous. CNN is garbage.
youtube · AI Governance · 2023-05-27T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxIOUhzOvxeMwSmu8p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKcWN23-q6AdJrUvt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyqyreWwpWDi-JyiyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzDAoiANXXAow-R7VB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyZijRLRZacFDp3X5p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzb7-dPm09rBHTAA4R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyDjN6JedRBua0VqN94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw2H16t3w9-PraEnjB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzymCbIOpEtb8PRloV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwpUQGT4TuJnnuCEt54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
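Because the model returns one batch of codings as a JSON array keyed by comment ID, looking up the coding for a single comment reduces to parsing the array and indexing by `id`. A minimal sketch of that lookup, assuming the response shape shown above (`raw_response` and `lookup_coding` are illustrative names, not part of the tool):

```python
import json

# Hypothetical raw batch response: a JSON array with one coding object
# per comment ID, following the shape of the example above.
raw_response = """
[
  {"id": "ytc_UgxIOUhzOvxeMwSmu8p4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzDAoiANXXAow-R7VB4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one
    comment ID, or None if that ID is absent from the batch."""
    codings = json.loads(raw)
    by_id = {entry["id"]: entry for entry in codings}
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgzDAoiANXXAow-R7VB4AaABAg")
print(coding["emotion"])  # prints "approval"
```

In practice a real response may be malformed JSON (a common failure mode for LLM output), so a production version would wrap `json.loads` in error handling rather than assume a clean parse.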