Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
a person can draw way better than the AI water pour drawing, the AIs did not eve…
ytc_UgzUZjtJ9…
“It’s amazing, yet terrifying.” That’s where everyone should take pause a realiz…
ytc_UgxuxE6M_…
If I can buy a robot to fix cars, #1 I own that asset (the robot) which is going…
ytc_Ugz5R17XI…
Letting AI choose who lives or dies. I can't see anyway that this could possibly…
ytc_Ugww-m5iL…
Live event coverage might be the only job left in the photography videographer s…
ytc_Ugw7kPzGy…
And you'll be fighting their army of automated death drones to get to them. Good…
ytr_UgyeYH-x2…
Harari is right about leaving Israel as he did. as for AI, he is saying what rea…
ytc_Ugzvx7JWa…
Well, that's great in regards to just the coding part of a developers job. But w…
rdc_jpt9vwc
Comment
13:02 Can somebody please explain to me why a program designed with the parameter of "aligning with American interests" is considered flawed for choosing to remove what is assessed to be a critical threat to those interests? The AI doesn't know or understand anything, it is an artificial computer program designed by humans, with coded goals and parameters. If the goal wasn't to protect American interests, or the CEO wasn't a threat to those interests, would we get the same outcome?
I only ask because I worry we project far too much of our conscience and biased decision-making onto a literal computer program. In my eyes, if you tell a program to save lives, but take them if they threaten an objective, then I don't see why we would be surprised if it does exactly that.
I must add that I have very little knowledge about AI and am not asking for arguements' sake, but merely ask out of ignorance and hoping to have an explanation of where my thought process is flawed. I appreciate anyone who can help 🙏
youtube
AI Governance
2025-08-28T13:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgwmUaXBvHVLZXigRT54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfrWTiejZmlkI7aYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzovDY7oF-khB_V0fh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxtEnmbWr56eRVmB4F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwH1gDAg2cljXYxdeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
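The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response might be parsed and indexed to support the "look up by comment ID" view above (the four dimension names come from the response shown; the choice to skip malformed records rather than fail the whole batch is an assumption):

```python
import json

# One record from the raw LLM response shown above.
raw = '''
[
  {"id": "ytc_UgzovDY7oF-khB_V0fh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"}
]
'''

# Dimension names as they appear in the response objects.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded dimensions by comment ID."""
    records = json.loads(response_text)
    by_id = {}
    for rec in records:
        # Skip records missing the ID or any dimension (assumed policy).
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codes = index_codes(raw)
print(codes["ytc_UgzovDY7oF-khB_V0fh4AaABAg"]["reasoning"])  # deontological
```

With the index in hand, the "Coding Result" table for a given comment is just the dictionary for its ID rendered as dimension/value rows.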