Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID (a minimal lookup sketch follows the sample list below).
Random samples: click any card to inspect.
- "tbh id take ai writing for movies and tv shows rather than people at this point …" (ytc_UgyzX7rN4…)
- "@InjectIilo Your question assumes that future AI will be like today's. That's s…" (ytr_UgxJtI_i_…)
- "1. No, the „AI” cannot produce better art. What is does, is that it simply gener…" (ytr_UgysDRrVB…)
- "I didn't know AI was so deep in USA. Like, I've never had an email write itself …" (ytc_UgwaeVayw…)
- "With us in charge, the planet will soon be unlivable. If the rich people decide…" (rdc_je5fl74)
- "To me it also says they're only looking to hire me because they think I'm a chea…" (rdc_n6x33ql)
- "Fascinating . Maybe one day Meta AI and google AI will be able to understand wha…" (ytc_UgwZbuXrS…)
- "@枒 AI art generates new images based on patterns from existing datasets, not by…" (ytr_UgxwhejMn…)
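Under the hood, lookup by ID is plain key access once coded records are indexed. A minimal sketch in Python, assuming the codings are stored as JSON Lines; the file name `codings.jsonl` and the storage format are assumptions, since the page does not show its backend:

```python
import json

def load_codings(path: str = "codings.jsonl") -> dict[str, dict]:
    """Index coded comments by ID from a JSON Lines store.

    The path and storage format are hypothetical; one record
    per line, each with an "id" field as in the raw responses.
    """
    index = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

if __name__ == "__main__":
    codings = load_codings()
    # ID taken from the raw response shown further down this page.
    print(codings.get("ytc_UgzsZtPkhMQCcCOmHgB4AaABAg"))
```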
Comment
Eliezer's take on the 'paperclip maximizer' argument doesn't seem particularly applicable to current LLM architectures. When I ask ChatGPT for an answer, it neither gets stuck in an infinite loop nor produces endless responses in an attempt to 'maximize' its objective. Working with agents also involves setting constraints: we can specify a finite number of actions the model should run, and there's a system of permissions to accept or deny subroutine actions. It's unclear why Mr. Wolfram didn't tie this argument to known, practical AI procedures.
Also, if AGI truly achieves human-level general intelligence, it would presumably possess practical judgment capabilities. ChatGPT, for instance, provides finite responses rather than infinite outputs, and an AGI would theoretically have even more refined judgment. Just as adults have better risk assessment skills than children, an AGI should theoretically evaluate actions within realistic limits rather than pursuing infinite maximization of a single goal.
| Field | Value |
|---|---|
| Platform | youtube |
| Source | AI Governance |
| Posted | 2024-11-13T16:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
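For reference, each coding result carries the same four dimensions shown above plus a timestamp. A minimal sketch of the record shape, with field names mirroring the raw JSON keys below; the `Coding` class name is illustrative, not the tool's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Coding:
    """One coded comment; fields mirror the raw JSON keys."""
    id: str              # platform-prefixed comment ID, e.g. "ytc_..."
    responsibility: str  # e.g. "developer", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "mixed", "unclear"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "indifference", "outrage", "fear"
    coded_at: datetime   # when the coding was recorded

# The coding result shown in the table above, as a record.
example = Coding(
    id="ytc_UgzsZtPkhMQCcCOmHgB4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```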
Raw LLM Response
[
{"id":"ytc_UgwfYHnRIec_UjaORrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgycnzNreGpB3a7a5Hp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzd-ma0ujZAb5HhHFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzsZtPkhMQCcCOmHgB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYn9JXLlg20G_a09d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2_DwgYk7tALNnvm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwad4p8PY-nWvnjzPN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx0w3H6RV1sNvUp1ZV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyR6_fTp_kjrcdO_SV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxlrHOJKfspbgJ1TZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
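The raw response is a JSON array with one object per comment in the batch. A minimal sketch of parsing and sanity-checking such a response; the allowed value sets below are only those observed in this output and are assumed to approximate the full codebook, which is not shown on this page:

```python
import json

# Assumed codebook, reconstructed from values seen in this response.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response and index the codings by comment ID.

    Raises ValueError on values outside the (assumed) codebook, so
    malformed model output fails loudly instead of entering the data.
    """
    index = {}
    for record in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if record[dim] not in allowed:
                raise ValueError(f"{record['id']}: bad {dim} {record[dim]!r}")
        index[record["id"]] = record
    return index
```

Failing loudly on out-of-codebook values keeps a single malformed model output from silently polluting the coded dataset.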