Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples listed below (a minimal lookup sketch follows the sample table).
| Comment preview | Comment ID |
|---|---|
| I like his transparency, but he's talking in terms of what might go wrong in the… | ytc_Ugy36ziz2… |
| Definitely agree, we are not remotely technologically close to knowing how to cr… | ytc_Ugy9qnCD4… |
| I wonder if asking a series of questions over time, can cause the algorithms to … | ytc_Ugzp7RT4V… |
| NOT ME GETTING ADS ABOUT MAKING AI YOUTUBE VIDEOS IN A VIDEO WARNING ABOUT FRAUD… | ytc_UgwZb4XL2… |
| I had my doubts about Glaze and Nightshade, but if AI imagery programmers compla… | ytc_Ugzn35bj2… |
| What happens when the robot calculates that the best next move is to NOT give yo… | ytc_UgxEiIsY-… |
| This is pretty obviously breaks put in by OpenAI because earlier versions where … | ytc_UgzG8bx_Q… |
| Hahaha merging AI and robotics some of these people so loud too rich and real wr… | ytc_UgzXSe3W2… |
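The sketch below shows what a programmatic lookup by comment ID could look like. It is a minimal example, not the tool's actual implementation: the file name `codings.json` and the assumption that coded records are stored as a single JSON array are hypothetical, and the example ID is one full ID taken from the raw LLM response shown further below.

```python
import json
from pathlib import Path

# Hypothetical storage layout: one JSON array of coded records keyed by
# comment ID (an assumption; the tool's actual backing store is not shown).
CODINGS_PATH = Path("codings.json")

def lookup_comment(comment_id: str) -> dict | None:
    """Return the coding record for a single comment ID, or None if absent."""
    records = json.loads(CODINGS_PATH.read_text())
    return next((r for r in records if r.get("id") == comment_id), None)

# Example: one full ID copied from the raw LLM response shown below.
record = lookup_comment("ytc_UgxHMLXRr5iWJK8TPC14AaABAg")
if record:
    print(record["responsibility"], record["reasoning"],
          record["policy"], record["emotion"])
```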
Comment
I think a lot of people don't understand his line of argument. The AI isnt going to try and kill humanity. The AI will obtain goals to further itself that will inadvertently kill humanity. Like what we currently do to any species that is tasty or happens to live where we want to. If we let AI become the dominate force on earth than humans are just another species in the way of progress. Those fields of grain could be fields of solar arrays or server farms.
youtube · AI Governance · 2025-10-16T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw87q9zbZpHndyBToh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwgqUoJfgOROg0z8v94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxHMLXRr5iWJK8TPC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxBIFZaYUxl9uHUwgR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy87F2xB863izA0VL54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxmWvMLtxhgNNtrt4V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxdNMxsQXIFl-zWOat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgxJfABiBpM6-L5_NN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBi8g74GabhQa4XVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwaSWTdB9heV1BMBnd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
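Since the model returns one JSON array per batch, a downstream step can parse it and screen out records whose labels fall outside the expected vocabulary. The sketch below is a minimal example under stated assumptions: the label sets are only those observed in the sample response above, so the real codebook may define additional values, and `validate_batch` is an illustrative helper rather than part of the tool.

```python
import json

# Label sets observed in the sample response above; the actual codebook
# likely defines the full vocabulary (assumption: these sets may be incomplete).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw batch response and keep only records with known labels."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        bad = [dim for dim, allowed in ALLOWED.items() if rec.get(dim) not in allowed]
        if bad:
            print(f"{rec.get('id', '<missing id>')}: unexpected value(s) for {bad}")
            continue
        valid.append(rec)
    return valid
```

Calling `validate_batch` on the raw response text above would return all ten records, since every label appears in the observed sets; any out-of-vocabulary label would be reported and the record dropped.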