Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click a sample to inspect it):

- "yeah but this robot didnt use a chess peice in a certain place to win!…" (ytr_UgxqfcSja…)
- "Thank you for your comment! Sofia, the AI robot in the video, doesn't need to us…" (ytr_UgzvsQI8e…)
- "Ai can't do that because humans made those robots anyway and my second answer is…" (ytc_UgyooOyT2…)
- "Yes I have mental health issues I’ve got manic bipolar depression I’m also antis…" (ytc_UgyS2tt-B…)
- "I think thats what alot of folks who understand art at a philosophical level kno…" (ytc_Ugzt2Tgv_…)
- "All that said, Tesla autopilot is much more advanced comparing to other companie…" (ytc_UgxRpjfrq…)
- "I don’t know if I were to give u a side by side comparison of some really cool a…" (ytr_UgyM1g7Pj…)
- "It probably doesn't, you're better off with aimbot, although I'm pretty sure the…" (ytr_UgwBBt_g7…)
Comment (youtube · AI Governance · 2023-04-18T04:4…):

> AI . I don't think that AI just has the potential to be dangerous or destructive, I think AI will definitely be dangerous and destructive particularly because it has been created by mankind who has danger and destruction in their makeup or spirit. You can try to edit that out but you can't. Just as our thoughts can't be regulated. If AI has it's own thoughts, it will have it's own bias. It might start out with it's creator's bias but will reject that for it's own bias which it may not consider a bias, or worse, may not care
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyRktXepxWcbhUYrQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQTz9fLXAPUpjYu7Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxfkOn5G8MfPau3gZ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyAsiGHE6asYD_YEoV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4MellDNC2qolkWSl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw9gWVsVaNO4f9BhjZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzEqTtZ6zZIivkef694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyCrbs1vJuUZj83LIh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwA5mdu2NzuU9eliUN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugw6w63LK8WneeuPA6Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
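Each raw response is a JSON array of records keyed by comment ID, one value per coding dimension. As a minimal sketch of how such a response could be parsed and sanity-checked, assuming the allowed category values are exactly those seen in the table and samples above (the real coding scheme may permit more):

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# Assumption: the actual coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "distributed", "none", "ai_itself"},
    "reasoning": {"virtue", "deontological", "consequentialist"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"outrage", "mixed", "indifference", "fear", "approval", "resignation"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    fall inside the expected coding scheme."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Usage with one record from the response above:
raw = ('[{"id":"ytc_Ugz4MellDNC2qolkWSl4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
print(parse_codings(raw)[0]["emotion"])  # fear
```

Filtering rather than raising keeps a batch usable when the model emits one out-of-scheme value; rejected records can be re-queued for re-coding.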