Raw LLM Responses
Inspect the exact model output for any coded comment: look one up directly by its comment ID (a minimal lookup sketch follows the sample list below), or browse the random samples.
Random samples (click to inspect):

- "LOL, I just tried this and found out that in the version of 3.5 for Plus users a…" (rdc_jg8zcli)
- "I have to respectfully disagree with the points raised in this video. For me, ar…" (ytc_Ugz_Gy85_…)
- "Deep fake orgy with every lawmaker who exists and toss in some Black Mirror pig …" (ytc_UgxadZl_J…)
- "This guy has never worked on site. A robot simply couldn’t be an electrician. It…" (ytc_UgwjNi0eo…)
- "If that’s the case, shouldn’t we slow down on AI investments? Guess we all are p…" (ytc_UgxuNKNYn…)
- "Yeah because government regulation definitely will make society "ready" for prop…" (rdc_je58g1i)
- "This is a philosophical question. There’s no evidence that how humans “think” i…" (rdc_mzyw68u)
- "Don't want an autonomous vehicle. I want the freedom to move freely without my …" (ytc_Ugxp5uwZv…)
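The ID lookup is straightforward to reproduce offline. As a rough sketch of what it does, the snippet below scans a JSONL export of coded comments for a matching ID; the file name `coded_comments.jsonl` and the record layout are assumptions for illustration, not the tool's actual storage format.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for comment_id, or None if it is absent.

    Assumes one JSON object per line, each carrying at least an "id" field.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# e.g. fetch the first random sample listed above (hypothetical data file)
print(lookup_comment("rdc_jg8zcli"))
```

Clicking a sample opens the same three panels shown next: the comment text, the coding result, and the raw LLM response it came from.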
Comment
> not even the autogenerated subtitles are fitting all the time, I really doubt machines will program themselves with some kind of direction anytime soon. And the worst is, without human training data this so called intelligence is nothing. nada. It would be really a great start if the makers of huge models would pay for their training data! curretly meta pirated a huge library and shreddered it in the machine learning machine and have their public voice say that books basically have no value, but they'll use them anyways and spit out something way worse. This is not even close to fair use!
Platform: youtube · Topic: AI Moral Status · Posted: 2025-04-28T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzNotkM78ASHjz4ND54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy6CuFtoB8pbuz24lF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyovjxje96Q6DwuxaN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxrEa9aU8EwT7wsvs14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxF2I6jSWZK6Ar6DpN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwKOcU9pIkAXIGZjhJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxlgVz8I3DEVkggwJV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxe2xCQjdJUWC3RLL94AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzgl5huLGa3gOgZnHx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyeMT9xhg0ahJMkX5R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
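The Coding Result table above is simply the batch entry whose ID matches this comment (ytc_UgwKOcU9pIkAXIGZjhJ4AaABAg). As a minimal sketch of how such a batch response might be parsed and sanity-checked, the snippet below builds an ID-to-codes map and rejects unknown labels; the `ALLOWED` sets are inferred from this one batch and `raw_response` is a placeholder for the JSON text above, so both are assumptions rather than the pipeline's actual schema.

```python
import json

# Label sets inferred from the batch above; the real coding scheme may allow more values.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "unclear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response into {comment_id: codes}, rejecting unknown labels."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        coded[row["id"]] = codes
    return coded

# The table above is the entry for this comment's ID:
# parse_batch(raw_response)["ytc_UgwKOcU9pIkAXIGZjhJ4AaABAg"]
# -> {"responsibility": "company", "reasoning": "consequentialist",
#     "policy": "liability", "emotion": "indifference"}
```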