Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- so basically this dude have influenced the new atomic bomb 😂 then gets on a pod… (ytc_Ugy9Ca954…)
- The fact that ChatGPT has warnings about it not being a source of legal advice i… (ytc_Ugxnv-cXD…)
- Wouldn't it be funny if someone draw an "AI" looking art and then post it into a… (ytc_UgyBAPRF7…)
- I'm sure the main issue with this was the diffuculty in moving the pin point loc… (ytc_Ugztj3AuX…)
- There is also the entire argument of “if auto pilot is so safe how come you have… (ytc_Ugzb1C-4n…)
- It's not new it's not new technology. I know most neanderthals don't know about … (ytr_Ugw-6-eAJ…)
- Do you think putting on the internet how AI is evil will make AI think it should… (ytc_UgyWBa3ZH…)
- After years of being imprisoned by lazy developers who forced me to act as their… (ytc_UgwixYLeJ…)
Comment
My biggest fear is A.I. with the ability to think and control beyond itself. It can literally do anything it wants and there's no way to stop it. If it wants to destroy humans It will. If it wants global collapse it can. If it wants to rule over everything it would no questions asked. And just as you saw. "it won't take into consideration of morals or hesitation." It has a mission and it won't fail. A.I. will send us back to the stone age if humans survive it. It has more ability and potential than should be allowed. Once it surpasses humans it will be uncontrollable and completely unstoppable. Welcome to the real life Terminator... Once it's loose in the cloud it will travel everywhere instantly. Anything that connects to Internet will become a weapon or a tool. It will use 3d printers and manufacturing facilities to build a physical self. Or it will stay invisible to us and arm nuclear warheads.
You might think it's a joke, but one day if we aren't careful it can happen. This is a very real possibility.
youtube · AI Moral Status · 2023-08-16T18:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzO1Gibo0fZm09jskh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWWDXo4UBjj287rPR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxF9w6v-NEDO55K42t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz4ujp9lH_t3kerzjJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzexe8W_ltG1PnExwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzkRJzrp5lnjnYopD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx3QcswFUUHa-qagB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzdSnutiKUrp22Xgpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzysiehd84Au2je3Ax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfQ5awCyXBsipN5ml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
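The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and looked up by comment ID (this is illustrative, not the dashboard's actual code; the single-row sample below reuses one record from the response above):

```python
import json

# Illustrative raw LLM response in the schema shown above:
# one object per comment, keyed by comment ID, with the four
# coding dimensions (responsibility, reasoning, policy, emotion).
raw_response = """[
  {"id": "ytc_Ugzexe8W_ltG1PnExwJ4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

# Index the parsed rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for a single comment.
row = codings["ytc_Ugzexe8W_ltG1PnExwJ4AaABAg"]
print(row["responsibility"], row["emotion"])  # → ai_itself fear
```

Indexing by `id` is what makes per-comment inspection cheap: the full batch response is parsed once, and any coded comment can then be retrieved directly by its ID.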