Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm thinking how does it know who to learn from without having levels of importance and priority programmed into it?
If you were learning what is correct, incorrect, right and wrong from the bulk of information on the internet, then you'll never be 'smart'. Theres so much false, misinformation on the internet, how do you recognise good information when you find it, even humans seem to struggle with this.
An what does it do when people disagree with each other, does the AI take sides, or just chose to not learn?
Whenever I type something into one of the big search engines or Wikipedia (something that is very familiar to me), I spot mistakes nearly every time, so who corrects them?
What is 'smarter' than us, knowing the most popular opinion of something?
youtube · Cross-Cultural · 2025-11-11T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugyud99hEP-lnqYzzcB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKkc7NEl38kx8R9nR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2qacGkRJjV0ra9o54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy10ShaLGjFCwC-h1p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxY3QLu6bHRqg2mzbJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyq8ukV552ZEYoCM9F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnoyftupX17IKX6wt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVYoGTflu5mywBEvF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyoJ55WG1a4f6ac6C94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2NjAQ7nvqil_6sc54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}]
```
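The raw response above is a JSON array with one coding record per comment, each carrying the four dimensions from the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. A minimal Python sketch of how such output could be parsed, validated, and looked up by comment ID; the `index_codings` helper and the two inlined sample records are illustrative, not part of the tool itself:

```python
import json

# Illustrative raw model output: a JSON array of per-comment codings,
# shaped like the records shown above (IDs copied from the raw response).
raw_response = '''[
  {"id": "ytc_Ugyud99hEP-lnqYzzcB4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyoJ55WG1a4f6ac6C94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]'''

# The four coding dimensions plus the comment ID, per the table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID.

    Raises ValueError if any record is missing an expected key, so a
    malformed model response fails loudly instead of silently.
    """
    by_id = {}
    for rec in json.loads(raw):
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgyoJ55WG1a4f6ac6C94AaABAg"]["reasoning"])  # deontological
```

Indexing by `id` makes the per-comment lookup a constant-time dictionary access, which is the natural backing for an "inspect the exact model output for any coded comment" view.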