Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- “This is all fear based BS. People will always be needed. Period. It’s common sen…” (ytc_UgzOWH0X5…)
- “ok, the whole "eradicate the white race thing" isn't the personality of the AI, …” (ytc_UgwfqqJMk…)
- “As someone trying to code in Unreal Engine I can tell you AI is terrible at codi…” (ytc_UgwX7qjzo…)
- “if you send a letter to a painter asking them to paint you a picture of somethin…” (ytc_UgyCCwHgz…)
- “(Ai) vs. [human house + jobs + drinking water + trees + electricity power...] Co…” (ytc_UgwtCES4_…)
- “What? I live there and this is the first I have ever heard of this. Most of us l…” (rdc_eczkppp)
- “If anyone think that thanks to AI you will be working less and getting paid the …” (ytc_Ugzh4GLV7…)
- “Yup just wait for the GPT Plus Plus Lawyer Edition. They do it with everything …” (rdc_jha73cd)
Comment
Love to see this conversation starting to play out on a larger scale. I think the important thing is to not trivialize facts or rush to conclusions. Personally, do I believe LLMs pose a mortal danger to humanity? No, I think they are too constrained by their training distribution. But it is important to distinguish that super-duper AI isn't necessarily an LLM. We are just scratching the surface with what these datacenters are capable of. And it is true that the companies rushing forward are barely paying lip service to the valid concern of harm. As Nate said, 5 years ago the machines weren't talking.
youtube · AI Moral Status · 2025-10-30T20:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyxUlZ1U3WDzWd6NA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjC4ynLPj748PRgNJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgydX9KsXkvPOd_CNVt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyJyFcYmflsqfeWYNh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugz1e5I1tkoZ41iRjf14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxOyOUos_8xSp2pq8d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgywYasgpXCF0OXUODR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyP4FR3gM33-qNFYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJgI4op1Lq_OxmJm14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzt7NEvame2ldE72X14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
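The lookup-by-comment-ID view above amounts to parsing this JSON array and selecting the matching row. A minimal sketch of that step (the `raw_response` string, the `lookup_by_id` helper, and the two-entry sample data are illustrative assumptions, not the tool's actual implementation; only the field names mirror the response shown above):

```python
import json

# Hypothetical raw LLM response: a JSON array of coded comments,
# with the same fields as in the batch shown above.
raw_response = """
[
 {"id": "ytc_UgydX9KsXkvPOd_CNVt4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
 {"id": "ytc_UgyJyFcYmflsqfeWYNh4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"}
]
"""

def lookup_by_id(raw, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    coded = json.loads(raw)
    return next((row for row in coded if row["id"] == comment_id), None)

row = lookup_by_id(raw_response, "ytc_UgydX9KsXkvPOd_CNVt4AaABAg")
print(row["policy"])  # → regulate
```

The `next(..., None)` default keeps missing IDs from raising, which matters when a batch response drops or mangles an entry.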