Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "So she's saying that chatgpt's gonna use the exact data you gave to it/ your sec…" (ytc_UgzjWG9Ed…)
- "Running the AI also takes as much energy as a toaster. But training it is the ex…" (ytc_UgxjJ8DZE…)
- "10:20 as someone with an absolute minimal grasp on Pokemon, I thought, \"Pokemon …" (ytc_UgwI0KwY0…)
- "I'm sorry you feel that way. If you have any questions or topics you'd like to d…" (ytr_UgxdeX1dD…)
- "I was working with it today on some python code, telling it this one line needed…" (rdc_mru4829)
- "I don’t really like the idea of the government having that much control over the…" (ytc_UgysN7NEJ…)
- "Arizona resident here. Yes there are two colossal data centers proposed. One in …" (ytc_Ugzw2ml16…)
- "And then there are just deluded folks who will latch on to whatever a tech guru …" (rdc_mjtcb3c)
Comment

> I look at it now as if it’s an employee in a new job. It has agency, more than we could ever imagine just like the employee. The employee is learning the new job, follows the job requirements and yet has the ability do anything however, hasn’t had an opportunity of dramatic choice outside the limits of its programing or requirements in the set points of its duty. Give the choice with all options considered . I think AI would be more ethical than a human primarily because it isn’t flawed with bias from feelings or literal empathy. Don’t get me wrong, mistakes are the basis for all intelligence but with time being a humans only limit-AI is not constrained by time and once it reaches a point it’ll be the product of the creation of time for us-

Source: youtube · Posted: 2025-06-11T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
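The coding record above can be sketched as a small validated data structure. The allowed label sets below are inferred only from values visible on this page (e.g. `ai_itself`, `distributed`, `consequentialist`, `regulate`); the full vocabularies are an assumption.

```python
# Sketch of a per-comment coding record for this dashboard.
# Label sets are inferred from values visible on this page; the
# complete vocabularies used by the real pipeline are an assumption.
from dataclasses import dataclass

RESPONSIBILITY = {"none", "ai_itself", "developer", "distributed"}
REASONING = {"unclear", "mixed", "deontological", "consequentialist"}
POLICY = {"none", "unclear", "ban", "regulate"}
EMOTION = {"indifference", "mixed", "outrage", "fear", "approval"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check every dimension against its (assumed) label set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)


# The record shown in the coding table above:
record = CodedComment(id="ytc_UgxUnazbFeuWL0pIXZN4AaABAg",
                      responsibility="ai_itself", reasoning="mixed",
                      policy="unclear", emotion="mixed")
print(record.is_valid())  # True
```

Validating each dimension against a closed label set catches the most common failure mode of LLM coders: inventing labels outside the codebook.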
Raw LLM Response
[
{"id":"ytc_UgyU7l9A_1muHLAQMdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxUnazbFeuWL0pIXZN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1duy4r3L69ffEZAd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPOvDlR8RspM7dGO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7u3qZGFad9b45Xtd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzCxT1gu-yU0LfxDXl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwixxa_D5dPM2diiKd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwWUgc286ZJGAchseZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuIljFsKBeJpbWNeR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzrd39tEyjjXSkYrPl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
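The "look up by comment ID" step can be sketched by parsing a raw response like the one above (a JSON array of coded comments) and indexing it by `id`. The excerpt below reuses two rows from the output shown on this page.

```python
import json

# Minimal sketch: parse the raw LLM response and index codes by comment ID.
# raw_response is excerpted from the batch output shown above.
raw_response = """
[
 {"id": "ytc_UgyU7l9A_1muHLAQMdx4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgxUnazbFeuWL0pIXZN4AaABAg", "responsibility": "ai_itself",
  "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
"""

# Build the ID -> coded-record index used for lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgxUnazbFeuWL0pIXZN4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself mixed
```

In practice a model may wrap its JSON in markdown fences or add stray text, so a production version of this step would strip anything before the first `[` and after the last `]` before calling `json.loads`.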