Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think there is a language problem. We have a word in ML for the type of intelligence Roger is talking about: AGI. We all agree that AI today isn't that. Roger is talking about AGI, the interviewer about AI.
AI is, by definition, AI. It doesn't need to understand to be itself—that's inherent. It would need to understand to be AGI. The discussion is really: will AI ever become AGI?
There are some points in favour of that and some against. At some point, we as humans didn't have consciousness, but now we do.
If you look at most very difficult problems in physics or other sciences, we often solve problems before we understand them, so I think it's an important step. The wheel was invented before the rules of momentum. The photoelectric effect was discovered before wave-particle duality.
youtube · AI Moral Status · 2025-09-10T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyxGCD_X-dPwkmxcI54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxz-BAbIRfP8hD_4X54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzc5lkWoeUcwqw3oRZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZe2dt4Gv4NhyTZEV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXjddOdAhwUAcQ1-14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwByju0kxeUBGiHfch4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwP5mcGO_hOuzQg0tJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZpOzJbJSIkJng31F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzcHwIHl4ZC1-mMje14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxp1D3Z-lpq8cib4QV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
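The raw response is a JSON array of per-comment codings across the four dimensions shown in the table above (responsibility, reasoning, policy, emotion), keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for lookup by ID — the variable names are illustrative, not part of any pipeline described here:

```python
import json

# A raw LLM response: a JSON array of per-comment codings.
# Two rows copied from the response above, for illustration.
raw_response = """
[
  {"id": "ytc_UgyxGCD_X-dPwkmxcI54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzc5lkWoeUcwqw3oRZ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugzc5lkWoeUcwqw3oRZ4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # mixed indifference
```

Indexing by ID matches how the page surfaces results: given a comment ID, the corresponding row of the model output can be retrieved directly rather than scanned for.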