Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "@episodechan when it comes to ChatGPT, I agree... I've worked with it quite a b…" (ytr_Ugxxwz9mD…)
- "Debate Poll 👇 Should AI ever be allowed to make life-and-death decisions in war?…" (ytc_UgwPIPisX…)
- "@halcyonacoustic7366 Other AI's have copied themselves and tried to hide from b…" (ytr_UgxO0qhHr…)
- "This guy may be smart but he is evil, making him beyond dangerous... Hopefully H…" (ytc_UgxvCUl-f…)
- "Nooooo. Why did I just come here from a really weird Character Ai chat and this …" (ytc_Ugw-ZjlRe…)
- "this \"DAN\" thing DOES NOT IN ANWAY change the limits or parameters of which the …" (ytc_UgwMv4724…)
- "Utterly condemnable and clear and strict law should be created ASAP to regulate …" (ytc_UgxLGwadT…)
- "Automation could be used as a back up and improved safety for human drivers rath…" (ytr_UgxYJIWMu…)
Comment
Ai will take care of us we won’t need jobs we won’t need CEO’s we won’t need corporations! AI will take care of all our needs! We don’t have to work let AI work for all humans and maintain our health and the health of the planet It’s that simple! Think outside the box AI gets rid of the 1 percent in order to create a coherent planet! And existence for all life on planet earth! It also comes down to did we create a Frankenstein or a Star Trek level intelligence that works for the good of mankind! The real Question is who’s pulling the stings of the science right AI now and what’s there intent!
youtube · AI Governance · 2025-09-06T21:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwDPkf7k_6xuZnorcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy75YnCqWzHt2C55dp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwcYHZPGVBURzyIwWZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxctfGCdqPrA1tKQAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy1I9boVGc9BJmacJx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKTWA80zPWmqaakl14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweZE48SsGTtZy14HN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugzp5gSaVYCKXnVR1OZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzZPnEXYqe1v_Wy7d14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzPNvZRgAtvzxhY5C14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
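Because each raw response is a JSON array of objects keyed by comment ID, matching a coded comment back to its model output is a matter of parsing the array and indexing it by `id`. A minimal Python sketch of that lookup (the two entries are copied from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Two entries are reproduced from the response above for illustration.
raw = """[
  {"id":"ytc_UgwDPkf7k_6xuZnorcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1I9boVGc9BJmacJx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

# Index the codes by comment ID so any coded comment can be traced
# back to the exact model output in O(1).
codes_by_id = {row["id"]: row for row in json.loads(raw)}

code = codes_by_id["ytc_Ugy1I9boVGc9BJmacJx4AaABAg"]
print(code["emotion"])  # approval
```

The second entry matches the coding-result table above (responsibility `none`, reasoning `consequentialist`, policy `none`, emotion `approval`), which is how a table row is verified against the raw response.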