Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Aren't most of these "tests" really just reflections of what we think a computer/robot/program can't do? We believe computers aren't lazy (can't be) so that should be a test. Anything can be simulated, that's the point. So I believe it comes back to whether we are convinced. Once that happens then it has proven it has a moral impact on us and therefore merits some level of moral consideration. I could say that a test would include the robot's ability to reject parts of it's programming. Because we believe that a computer must do what it's told, that means it can't arbitrarily reject it's code. People can. However, can't this behavior also be faked? Now we have to add, ..reject parts of it's programming, but is not faking it. Now we've tossed the entire premise out, robots are possibly human because they are able to fake intelligence to the point that it no longer seems simulated. There seems to be no "test" that doesn't kill the whole idea in the first place. Hence, therefore (five six) it's all about how it affects you. If you believe, then it is. Ego Credo, Est
Source: youtube · 2016-09-03T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgijOXwzX5ll4HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi_L9Ps1Ao3wngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjE_qt3DXc4AXgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugg7AdD3sDYLcHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgicXkrK5at_b3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiQFXdWgMS6SXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UghbylCg24GyCXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjxB2hYHk0ringCoAEC","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UggLyqhud7inwngCoAEC","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgjsMxoDrjmOY3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
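The raw response above is a JSON array of per-comment codes, one object per comment, with the same four dimensions as the coding-result table. A minimal sketch of how such output might be parsed and validated before ingestion — note the allowed vocabularies below are inferred only from the values visible on this page, not from the project's actual codebook:

```python
import json

# Hypothetical allowed vocabularies, inferred from the values observed in the
# coding results on this page -- the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "resignation", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs an id plus one value per coded dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

# Example with a made-up comment id:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"contractualist","policy":"none","emotion":"approval"}]')
records = validate_response(raw)
print(len(records))  # 1
```

A check like this catches the common LLM failure modes for structured coding tasks — truncated JSON, missing keys, or values outside the codebook — before bad codes reach the dashboard.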