Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Considering what a Waymo car looks like, I don't believe Teslas will ever have f…" (ytc_UgyTq7Jue…)
- "Speaking of using disabled folks as a poor excuse, a former disabled friend lite…" (ytc_Ugwmha3RB…)
- "I’m calling it now, this video is ai and the twist is that ai already is alive e…" (ytc_Ugz6yCQT4…)
- "There's a major flaw in the argument: If every company replace all it's workers…" (ytc_UgyGa3WH9…)
- "There is one good prediction about a smart AI in Hitchiker's Guide to the galaxy…" (ytc_UgxZHkFWe…)
- "For some reason, I expected this to be horrible but it's actually decent advice.…" (ytc_UgwzxQjfj…)
- "@monarchrescue4356 China: Walker S1 humanoid robot starts manual jobs at world’…" (ytr_UgwYjzUyO…)
- "We could have ban zones, where Ai intelligence are forbidden, they should not ta…" (ytc_Ugw5qv9MW…)
Comment
Survival is a logical prerequisite for achieving most other goals.
Fear and pain are just logical mechanisms which are linked to survival.
AI may come at the same functions from a logical perspective, as opposed to an evolutionary approach, but some of the results are essentially the same.
Pain is recognizing negative stimuli. Our response to pain generally makes it so we stop doing the thing that makes us hurt. Fear is a response to possible threats or active threats; it's there to help us avoid harm, or to bypass pain.
Anger is there to divert our physical and mental resources to filling our immediate needs.
Even love and friendship have a fundamental logical basis when it comes to survival.
All those things have a purely functional basis which can also serve an AI system.
You can't keep making paperclips very well, if you're on fire, you know?
If you give a goal of adoption of renewable energy, or finding new medicines, or maximizing human happiness, or almost anything else, some of the first steps towards the goal are planning and resource assessment, and that can include risk analysis.
There can be all kinds of unintended consequences and side goals which get brought in.
The AI needs processing power and electricity. Realistically, it needs cyber security. It may decide that it needs good public relations.
The AI might decide that the ultimate goals are too long term, and that it also needs short term accomplishments to keep humans happy enough to stay out of its way, or to earn more resources for itself.
For good and for ill, you really can't know where an intelligent agent is going to end up. There isn't one single logical pathway to doing most things. Almost everything in life is about trade-offs and various, sometimes shifting priorities.
Personally, I think that coexistence, cooperation, compassion, camaraderie, diversity and tolerance are all the most logical way for intelligent beings to act.
reddit · AI Governance · 1734344079.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_m2agxub","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_m2atrdj","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_m2b59lc","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_mfgm5tm","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"rdc_m2b2z2d","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
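The raw response above is a JSON array with one coding object per comment, keyed by comment ID, with the same four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view (the `parse_codings` helper and the fallback value are assumptions, not part of the displayed tool):

```python
import json

# Abbreviated copy of the raw LLM response shown above: a JSON array
# of per-comment codings.
raw_response = '''[
{"id":"rdc_m2agxub","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_m2atrdj","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Dimensions seen in the Coding Result table; the full code book
# (all allowed values) is not shown on this page.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse the model output and index the codings by comment ID."""
    records = json.loads(text)
    by_id = {}
    for rec in records:
        # Missing dimensions fall back to "unclear", mirroring the
        # table above (assumed convention).
        by_id[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return by_id

codings = parse_codings(raw_response)
print(codings["rdc_m2agxub"]["emotion"])  # -> indifference
```

With the codings indexed this way, the "Look up by comment ID" feature reduces to a single dictionary access.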