Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
- What do you expect when you create AI/LLM's that have no morality or ethical gui… (ytc_Ugx0Mz-Uq…)
- Sorry, its all a bit of non-sense. I work with AI on daily basis - this stuff is… (ytc_UgxS3RQOo…)
- I watched the NBC alt angle and took screenshots of classic AI like the invisibl… (ytr_Ugz3N5VvY…)
- The problem isn't a genuine artificial intelligence. It's if instead of making a… (ytc_UgzR9dGui…)
- So, coding isn't gonna be coding anymore. Design a solution, glue pieces togethe… (ytc_UgxETvQD6…)
- U should not have helped the process of ai. This is going to lead to terminator … (ytc_UgyeuGM-b…)
- You know I dont agree some people just want to study and know a subject. On coll… (ytc_UgxHUaxq3…)
- "Artists should quit," do they know that artists - not AI - are the only ones wi… (ytc_UgxHQt1iC…)
Comment
Theoretically, AI could be used to achieve incredible things and be a massive help to all of humanity. In the real world, billion dollar companies are going to use AI to become trillion dollar companies while eliminating all of the jobs that aren't managerial. There's gonna have to be a universal basic income or anyone who isn't the child of a politician or a millionaire is going to be boned.
youtube · AI Jobs · 2025-10-09T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxnZFS_ZROH31AYPDJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3n1yFpbp0IxEFikB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgyUZmSlmm78aEM6J6N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwpjcICj2EpkXGJrJp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzfHJCabL18Kq8vlsR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQnBLHWmiO0CzBuu14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgypdW-KN3ZtzLDyWll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxUdIF7kmVJqK11Mvp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwf0RiXvPVgt2sGkYt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw4xPsXbW6kUs2BIgZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
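A raw batch response like the one above can be indexed by comment ID so that an individual comment's coding can be looked up and displayed. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred from what is visible in this sample batch, and the real codebook may include other categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample batch shown
# above. This is an assumption: the actual codebook may define more values.
DIMENSIONS = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "resignation", "fear", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index each coding by comment ID,
    rejecting any record whose dimension values fall outside the vocabulary."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# One record from the batch above, used as a self-contained example input.
raw = '''[
  {"id": "ytc_UgwpjcICj2EpkXGJrJp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''
coded = parse_batch(raw)
print(coded["ytc_UgwpjcICj2EpkXGJrJp4AaABAg"]["emotion"])  # fear
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents a label outside the codebook, before the record ever reaches the results table.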