Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any coded comment by its comment ID, or inspect one of the random samples below.

Random samples:

- "Tesla has single handedly set back true autonomous driving for decades to come w…" — ytc_UgyAs2ZL7…
- "There are hundreds of thousands of data centers deploying dozens of different ki…" — ytr_Ugx3u8gGQ…
- "there are very easy workarounds to these tools, and they are not going to be eff…" — ytc_UgzSrOQ2j…
- "Trustworthy AI depends upon accountability. Accountability presupposes transpare…" — ytc_UgyfRxlRy…
- "Are there any rules about making an AI ID and using an AI account on any of thes…" — rdc_n7gmfya
- "They could map my brain, my neural arrangement, and just put it on a little ha…" — ytc_UgyhVXd5S…
- "100 years? That's how long it takes for humans but ai learn at exponential rates…" — ytc_UgxGp7ue3…
- "Humans are here to introduce robot to this world. That is why humans need to exi…" — ytc_UgjAASra7…
Comment
People mistake AI, training, etc with some kind of actual intelligence being involved. It is just vast amount of what, if, then subroutines. As such it can't account for every possible scenarios.
Source: youtube · Posted: 2025-12-04T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugz7ueGOe0N9wrC2AYR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJlK46aSBfsdNfegx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy61xh6TTfsq4aH2H94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7XUozohH17aB20eh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6aoBpdEKVrzxJQq54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz_QJljt7mY7rY4VDt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMuN08WIDwhVOruSN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5sq4AzXq8CrRc9I54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugzq7toNy8dIHf4kEv54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy37m2IQWXUO11lfRd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"}]
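A response in this shape can be parsed and validated before the codes are stored, so that a malformed array or an out-of-vocabulary code is caught rather than silently recorded as "unclear". A minimal sketch, assuming the allowed category values are those visible in the codes above (the full codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the codes shown above;
# this is an assumption — the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "mixed", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into validated records.

    Raises ValueError if the JSON is not an array or if any record
    carries a code outside the allowed categories.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records

# Hypothetical single-record response for illustration:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # approval
```

Validating at ingest time is what makes the dimension table above trustworthy: a record that fails validation can be re-queued for coding instead of being displayed with a bogus value.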