Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Nobody knows what will happen, least of all some content creator. However you ca…" (`ytc_UgzP_ZtKg…`)
- "not disabled but i have Aphantasia and cant rlly 'imagine' anything or see image…" (`ytc_UgwHLYZcf…`)
- "The problem with A.I. is that it will be controlled by the wealthy, greedy, fasc…" (`ytc_UgxQe4rEe…`)
- "Getting tired of all the AI fearmongering. Humans were always scared of new stuf…" (`ytc_UgxnJy42h…`)
- "Ai will only get as bad as we allow it to. Meaning if the robots take over it’s …" (`ytc_UgxMJJY_B…`)
- "The problem is that if you want to get revenue from the product and not ads you …" (`rdc_mo5vdsa`)
- "This world - with and without AI weapons - is following predictions foretold in …" (`ytc_Ugy-oeWVw…`)
- "*AI*. is the "Babylon Tower". plus the "Waters of Noah's Flood". Jesus…" (`ytc_UgzM47ZFF…`)
Comment
> Sorry I am late Tucker. I completely agree with Elon on this point. What is truly scary about AI is we are programming and training them to think like humans. And when it comes to how humans think, most of us generally use these sorts of AI platforms to express our darker aspects. I do not necessarily agree that AI are smarter than humans, but are certainly better at processing vast amounts of data spanning larger spans of time. This gives them a predictive advantage. Further, as problem-solvers, it is not their ability to troubleshoot and provide meaningful solutions, rather it is their decision/implementation ability that is dangerous. If, hypothetically, an AI comes to the conclusion that, to solve climate change we need to eliminate non-renewable fossil fuels and coal, it's ability to determine the fastest way to achieve that end then re-engineer our technology in order to accomplish this objective is 'anti-speciesist'. Factoring in that any coder and programmer can independently develop their own AI technology with very little investment and no oversight, this certainly represents a serious area of concern as we move forward. As Elon said, 'would we even know?'
youtube · AI Governance · 2024-01-25T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbN0Zas7hnaOWChuN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyON372r3BSPjlx0R94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwErHKCzHsYoE2smvN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzOfuoKnFjj2fMz1e54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzEdNZn6WRC5M0fnod4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz0GyAWruZCm3lhHvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0WmWg99hbGRuXlNN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwz8CO1tr29pV1jlq54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-eJOqESVTo6stdlx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxOUju_vBA0mlOwnsJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
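The raw response is a JSON array, one object per comment, with the four coding dimensions shown in the table above. A minimal sketch of a batch sanity check in Python: the allowed value sets below are inferred only from the samples visible here, not from the full codebook, and `validate_batch` is a hypothetical helper, not part of the pipeline.

```python
import json

# Values observed in this batch for each dimension; the real code frame
# may allow more -- these sets are an assumption inferred from the samples.
OBSERVED_VALUES = {
    "responsibility": {"company", "developer", "none", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "virtue"},
    "policy": {"none", "industry_self", "unclear", "regulate"},
    "emotion": {"outrage", "mixed", "indifference", "fear", "approval"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and report entries whose values fall
    outside the observed sets -- a cheap check before storing codes."""
    problems = []
    for entry in json.loads(raw):
        cid = entry.get("id", "<missing id>")
        for dim, allowed in OBSERVED_VALUES.items():
            value = entry.get(dim)
            if value not in allowed:
                problems.append(f"{cid}: {dim}={value!r}")
    return problems

raw = ('[{"id":"ytc_UgxOUju_vBA0mlOwnsJ4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(validate_batch(raw))  # [] -> every dimension is within the observed sets
```

Flagging out-of-set values rather than raising lets a reviewer inspect the offending comment IDs in this UI before deciding whether to re-code or extend the codebook.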