Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Sabine - I like your comments. But I think they do not go far enough. The real problem for AI is that there is no definition or measurement of Intelligence, period. Are humans intelligent? Is there any measurement or proof? Not really. Sure, people can talk, listen, communicate, and think about how to solve some problems. But there are many limitations. For example, I can talk and listen with the English language, but not with other languages like French or German. If Google AI can speak English, French, and German, does that make it more intelligent than me? I would say not. I can solve some math problems, like algebra problems or the Pythagorean theorem. But I cannot prove the 4-color map theorem. An AI program (LEAN) can prove the 4-color map theorem. Does that make it more intelligent than me? I do not think so. When we try to make AI software more intelligent, we really do not have a definition of the goal. We do not really know what we want.
Source: youtube · Posted: 2026-02-11T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
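The four dimensions in the table above can be sketched as a small schema. This is an illustrative sketch, not the pipeline's actual code; the value vocabularies below are only those observed on this page (in the table and the raw LLM response section), and the real codebook may define more categories.

```python
from dataclasses import dataclass

# Value sets observed in the coding results on this page;
# the actual codebook may include additional categories.
RESPONSIBILITY = {"none", "ai_itself", "distributed", "company", "unclear"}
REASONING = {"unclear", "consequentialist", "deontological", "virtue"}
POLICY = {"unclear", "regulate", "liability"}
EMOTION = {"indifference", "disapproval", "fear", "approval",
           "outrage", "resignation", "unclear"}

@dataclass
class CodedComment:
    """One coded comment: an ID plus a value for each dimension."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # True only if every dimension uses a known category.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```

A validator like `is_valid` is one way to catch a model emitting an out-of-vocabulary label before it reaches the results table.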
Raw LLM Response
[{"id":"ytc_Ugz1AId5aTrB0vU068x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwilofhGKhwDu0L7Ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"disapproval"},
 {"id":"ytc_UgyFYXqIPZgSqF1BRIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx-tXmr1Qbf99MFrdF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugy6ZM9msFzNFdQRhp94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzGNsSq1iKLhPOIyGV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxWCzblxI-AOE55SLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgxdU-TuCxJaaCrlPWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzRtXhqWxt8hXpcZyp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgzGjyzcKfKocvAvCr54AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"]}
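Note that the raw response above is not valid JSON: it ends in `"approval"]}` rather than `"approval"}]`, so a strict parser rejects the whole batch, which would be consistent with every dimension in the coding result reading `unclear`. A minimal sketch of how a pipeline might parse such a response defensively and fall back to `unclear` on failure (the function name and fallback scheme are illustrative assumptions, not this pipeline's actual code):

```python
import json

# Assumed fallback record used when a batch cannot be parsed.
FALLBACK = {"responsibility": "unclear", "reasoning": "unclear",
            "policy": "unclear", "emotion": "unclear"}

def parse_coding(raw: str) -> list[dict]:
    """Parse a batch-coding response; on malformed JSON return an
    empty list so every comment in the batch falls back to 'unclear'."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return []

# Mis-ordered closing brackets, as in the raw response above.
raw = '[{"id":"a","emotion":"approval"]}'
codes = {c["id"]: c for c in parse_coding(raw)}
result = codes.get("ytc_Ugz1AId5aTrB0vU068x4AaABAg", FALLBACK)
# result == FALLBACK: every dimension reads "unclear"
```

A per-object recovery pass (splitting on `},{` and retrying each fragment) could salvage the nine well-formed records instead of discarding the batch, at the cost of more fragile parsing logic.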