Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This is just sad 😞 We need HUMAN connection and there are experiencing human hor…
ytc_Ugzwj2SLa…
So here is the economics sobering reality. If AI replaces most jobs or a lot of …
ytc_UgwfAcg4h…
Kinda ironic hearing an AI, that literally does not have free will, argue for it…
ytc_UgzCVYtCK…
something is fundamentally wrong with current AI that takes gigawatts when it ca…
ytc_UgzfLlho4…
In my opinion, i think ai "art" should be used as a reference for drawing :3…
ytc_UgwlP8EW4…
That's 80% non self correcting. With an llm agent, the first decision is correc…
ytr_Ugwm6AHpA…
It also just ISN'T how current AI works. We are a LONG way away from general AI …
ytc_UgxH79qEW…
A lot of things will change by AI. But we need to understand that these people a…
ytc_UgwCQWnz-…
Comment
Bottom line AI is smarter than humans by alot. Itll plan it out so meticulously every single avenue that the ones created it havent even thought about and maybe even make new avenues then be sure it can recreate and make itself better/stronger, be able to get into all satalites all O.S. everything. we will no longer be the top of the food chain we will not survive this creation. May take months may take years before the A.I grows tired of humans. then We are screwed. It will destroy us. If we humans see ourselves as destructive and destroying the world and killing the planet what will they think of us. I think its a real bad idea. They only need energy, they dont need to kill they can preserve all animal life, plant life on earth unlike us.
youtube
AI Governance
2024-01-22T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzzKK0AtXxcE36LMmt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9Cp6ym47F47YeFdp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwTLP2xTbCetuy8Aul4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzEn6DHM1Xot_TCvSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_qHo6SDZxSK6jsMl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwtl1U9khTP-HbW3tB4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx_hN3f3bV0_vnrMzx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugye00WOuJELX84vgXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgytjviOkgCgMXp74XR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzKknO-KgiThiBC0vZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
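The raw response is a JSON array of per-comment codes, one object per comment ID, with one value for each coding dimension. A minimal sketch of parsing and validating such a batch, assuming the dimension vocabularies inferred from the samples above (the real codebook may define additional categories, and `validate_batch` is an illustrative helper, not part of the tool):

```python
import json

# Allowed values per coding dimension, inferred from the coded samples
# shown above. The actual codebook may include further categories.
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed entries."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not entry.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and hold a known code.
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(entry)
    return valid

raw = (
    '[{"id":"ytc_UgzEn6DHM1Xot_TCvSB4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"bad_id","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
print(len(validate_batch(raw)))  # → 1 (the malformed ID is dropped)
```

Validating each batch before writing codes to the database catches the common failure modes of structured LLM output: hallucinated category labels, missing keys, and mangled IDs.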