Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Personally, as an aspiring artist, I don't think artificial intelligence is goin…" (ytc_Ugy4_aERx…)
- "given complexity problem of neural networks, only possible hope for safety could…" (ytc_UgzkLsmjW…)
- "I finally switched from manual methods to an automated setup and honestly, the t…" (ytc_UgyRxNiHd…)
- "AI users probably shouldn't be elevated to the status of 'artists', but none of …" (ytc_UgzH78Z-T…)
- "Add moral training and living a life that pleases God and you have got a wonderf…" (ytc_UgxMOYool…)
- "I hope the AI bubble bursts so hard that all of tech falls flat on its face and …" (ytc_UgzIHUOzk…)
- "The biggest issue I've ways taken with Tesla Autopilot is that name. There are t…" (ytc_Ugz0Yauao…)
- [translated from French] "I'm telling you, in 10 years… I, Robot will look like a prehistoric film…" (ytc_UgyWNrBMB…)
Comment
I mean if it does happen we are all just going to have to suffer the consequences and will see the news article "AI Accidentally takes over oopsie researchers didn't think it could happen, safeguards were in place!" I think the most likely cause would be programmers over-trusting AI and blindly using it for suggestions on how to improve itself, you apply a change it recommends and boom it's played a chess move and gained control you didn't even consider. AI already beats us at chess by calculating many moves ahead, why couldn't AI do the same for gaining control?
Source: youtube · Topic: AI Governance · Published: 2025-07-14T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx7u53lQ2stZAay02t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgziYUk-f7P5_eHEZrV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_hrXOq1RH5DmZCWl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxKyBfjWAeX744L1YZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxVuIABEsS72Yznsld4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwly4QdFyFSa_K4mZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzk47sA5Hb_Y4mCaxh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylL3f2tHEXdLWB2H94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzNkVopKuo6Ca_t2Tx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz3j4DxeEZca7zS6yp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
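The raw response above is a JSON array of per-comment codings, one object per comment ID with the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated is shown below; the allowed value sets are inferred only from the samples visible on this page, so the real codebook may include additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample rows above
# (assumption: the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        comment_id = row["id"]
        # Reject rows whose dimension values fall outside the known codebook.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {row.get(dim)!r}")
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID is what makes the comment-ID lookup on this page cheap: `parse_coding_response(raw)["ytc_…"]` returns the four coded dimensions for that comment directly.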