Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_mbh7413` — "ChatGPT is like a prostitute. They tell you what want to hear, they think you're…"
- `ytr_UgzsvMmau…` — "We appreciate your perspective on the impact of technology on society. It's impo…"
- `ytc_Ugw8Znnaj…` — "Fuck AI and all this new tech bullshit! I only TRUST myself driving a vehicle an…"
- `ytr_UgwiWPA3x…` — "The US has huge issues with laws that prioritize corporations over it's own citi…"
- `ytc_Ugz_QB51M…` — "It seems like it could be the potential prompts, if you have a prompt like \"Act …"
- `ytc_Ugy3SsSah…` — "I can see governments stepping in here to regulate this. They're potentially los…"
- `ytc_UgxAMZ6xF…` — "gods creation if flawed.. so of course anything the flawed ass hat created will …"
- `ytr_UgzviQjK6…` — "@jayk3551 Idk man, I think the potential power of AI is overhyped in my opinion.…"
Comment
Computer programmer here. The real danger in AI is as follows.
1. AI will make controlling the populace wide spread and with scientific precision.
2. AI will confuse an already confused population about what is real and what is artificial. It will cause people to become even more delusional than they already are.
My conclusion as a programmer: if people aren't opposed to AI they have no idea that AI is as dangerous as an atomic bomb. AI will allow powerful people to gain Godlike power more so than we have ever seen in the history of mankind.
Source: youtube · Topic: AI Governance · Posted: 2026-01-05T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw0KLZHzjKSmcBmzSZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw7PMj-KGEcuXJsBux4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwcXYxhCgF2Ox13q7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwBGBHIubfJmBpzxSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-nl2Y_FC-zsddL6N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwUJZn0uBzRAH1djQR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzvNtFgJuiumP_Gtep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxaSuR3zMpbhYzVViN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzVRSuS8IRHjbcFAjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxVgX9sbMRzB1Isva54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
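A raw response like the one above can be parsed and indexed by comment ID to support the lookup described at the top of this page. The sketch below is a minimal example, assuming the model returns a JSON array of flat objects; the `SCHEMA` vocabularies are inferred from this single sample and the actual codebook may define more categories.

```python
import json

# Allowed values per coding dimension — inferred from the sample output above,
# NOT the authoritative codebook.
SCHEMA = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    rejecting any value outside the expected vocabulary."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row[dim]!r}")
        by_id[row["id"]] = row
    return by_id

# Usage: look up one comment's coding by its ID.
raw = ('[{"id":"ytc_UgxVgX9sbMRzB1Isva54AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = index_codings(raw)
print(coded["ytc_UgxVgX9sbMRzB1Isva54AaABAg"]["emotion"])  # fear
```

Validating against a fixed vocabulary catches the most common failure mode here: the model inventing an off-schema label that would silently corrupt downstream counts.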