Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
@euphoricelixir2 That's exactly why after 25 years of existence you won't find a…
ytr_UgySCZ-zB…
We should look at AI has humanity's child, it can help us and care about our wel…
ytc_UgwK6VjIL…
Here is the messed up thing. None of these so called leaders or thinkers or inno…
ytc_Ugw3KFmBd…
HK police agents provocateurs dressed up as protesters to commit violences. When…
rdc_f1x0xxt
And the big beautiful bill makes it so states can’t regulate AI for 10 years! Ev…
ytc_UgwPtusLy…
I dont want to sound like im bragging, but im a pretty skilled artist when it co…
ytc_UgyKvwE07…
@Martian_refugeehe's talking about a digital diary/book if you want to call it …
ytr_UgxEWZTK8…
So driverless trucks need better roads. You do realize it would take more money …
ytc_UgiUmhVZS…
Comment
I think this video tried to make two very different things similar and it just doesn’t make sense. Just as a quick example, college students use AI only in assignments, as you can’t use it in an exam of course. At the end of the day they are just using the best tools at their disposal to put out the best possible work and stay competitive amongst their peers. It’s not being lazy or doing something not admirable, it is just getting the assignment done in the most efficient way. To me it’s like praising someone for using an encyclopedia over the internet when the internet was first coming out. The research fraud is very different. It’s just work that is dishonest and can even have negative consequences in the world just to be greedy. I honestly don’t understand the sub context of all this being a new trend because of AI. People are greedy whether it be for money or credit and some people will lie to get it, it has always been the case. You would hope it’s not top researchers at top universities however.
youtube
2025-07-31T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzGpaJeXygJ1IfhBO14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzTDhIx-VK7qiW6lX54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwW5bkIFrW2kZl5yGJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyO94Yp8sgGAqA8ZrZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbUxvYTQ4ilpZ0XwJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7lArjK5CH244Y3w54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwdIGHGM6PeiEfkeRx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx5RSIG7hjqa8cRahN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgweCmJCT6OTLI0CFUt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"transparency","emotion":"mixed"},
{"id":"ytc_Ugzur1Av732uu1wLDpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
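A raw response like the one above can be checked before its codes are stored. The sketch below is a minimal validator, assuming each row carries the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the allowed values are only those visible in this sample and in the Coding Result table, so the project's full codebook may permit more.

```python
import json

# Allowed values inferred from the samples shown above; the actual
# codebook may define additional categories (this set is an assumption).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "industry_self", "transparency"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded dimension."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row}")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={value!r}")
    return rows

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"user","reasoning":"virtue",'
       '"policy":"none","emotion":"mixed"}]')
rows = validate_coding(raw)
```

Rejecting an out-of-vocabulary value at parse time keeps a malformed model response out of the coded dataset instead of surfacing later as an unclassifiable dimension.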