Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "OR - AI is incorporated into the human body. In that case WE become AI - Organic…" (ytc_Ugwq-Yef2…)
- "AI + mechanization/ automation will be the largest displacement of jobs in huma…" (ytc_UgwhXYEr3…)
- "They could barely do more damage than requiring an exponentially growing energy …" (ytc_UgyXCL3wH…)
- "Technology is an amplifier of human power. Whether AI creates a utopia or a dyst…" (ytc_Ugx0dFNpg…)
- "@OnigoroshiZeroAI models still only replicate shit humans have done. They're by …" (ytr_Ugwo9mkqJ…)
- "So these a holes ignored their son for months, obviously not monitoring anything…" (ytc_UgyuHGYnC…)
- "I tried out coding with ai, and it made me a fully fledged survival game in pyth…" (ytc_UgwUjm9hT…)
- "I don't think any art is bad since that person has put some effort in to it and …" (ytr_UgxplO5Dn…)
Comment
There’s one thing I don’t agree with, but I could definitely be wrong: I don’t think AI will feel emotions because they’re not of any use to it. The situation he used of a small robot and a big robot fighting each other, fear isn’t necessary there, only a risk assessment and a self preservation program is needed for the small robot. The physiological reactions that fear creates is the only purpose of fear, as it helps with self preservation. If the robot has access to that ability anyway, then fear is useless. It becomes more like crossing the road. I don’t need to be terrified to know not to walk out in front of a car.
That’s what I’m thinking so far about it though. I could definitely be wrong.
youtube · AI Governance · 2025-06-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
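
Each coded comment carries the four coding dimensions above plus a coding timestamp. Below is a minimal sketch of how one record could be modeled in Python; the class and field names are illustrative, and the category values noted in the comments are only those observed in the sample response further down, so the full codebook may define more.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str      # e.g. "ytc_UgyBFRctol1IK9DsL6t4AaABAg"
    responsibility: str  # observed values: none, developer, company, ai_itself
    reasoning: str       # observed values: unclear, consequentialist, deontological, virtue
    policy: str          # observed values: unclear, ban, regulate
    emotion: str         # observed values: indifference, outrage, mixed, fear, resignation
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```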
Raw LLM Response
[
{"id":"ytc_UgyBFRctol1IK9DsL6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzQjuYGjYgC3FX9bIJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwHtlTRkSFYW1CqN7p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzunyJrA1KiATZPerV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5Uvc1_OMhapazzY94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIirZbEIv7tvuOX554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxxDxE2ECA7AzTz6k14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzBoQc2DqzllkIma9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyCLBEJAZHeRl4oaYV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyHnBCF3GaT0MDB6nF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
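
The raw response is a JSON array with one object per comment in the batch, which is what the comment-ID lookup above works against. Below is a minimal sketch of parsing such a response and retrieving one record by ID, assuming only the array shape shown above; the function and variable names are illustrative, not part of the tool.

```python
import json

# Example raw response, abbreviated to two records from the array shown above.
raw_response_text = """[
  {"id": "ytc_UgyBFRctol1IK9DsL6t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQjuYGjYgC3FX9bIJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM coding response (a JSON array of objects) and
    index the records by comment ID for quick lookup."""
    return {rec["id"]: rec for rec in json.loads(raw)}

coded = index_by_comment_id(raw_response_text)
record = coded["ytc_UgyBFRctol1IK9DsL6t4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
# -> none unclear unclear indifference
```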