Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If we are living in a simulation then what is at stake related to AI?…" (ytc_Ugztbu0zN…)
- "There are two options for the future: 1. We accept that as technology becomes m…" (ytc_UgwWOWU_5…)
- "WHILE ( AI ) CAN BE A VERY IMPORTANT AND A RELIABLE INTELLECTUAL INFORMATION RE…" (ytc_UgztSnLOE…)
- "Absolutely! Deep fake is spot on my friend! Denmark sounds like heaven compared …" (ytr_Ugycjt_cE…)
- "It minimizes coding time and write unit tests based on the context of files you …" (rdc_jprdi0h)
- "I love that this got upvoted but the more clearly sarcastic posts below are heav…" (rdc_gx2eb4r)
- "I hate the fact that we need to intentionally mess up a part of a drawing, even …" (ytc_UgzL6zCE_…)
- "What's the political situation over there that these goons are allowed to make p…" (rdc_dsbcn4a)
Comment

> First thing up front he has a book to market and sell, that being said he clearly over states AI. Humanoid robots here by 2030? Probably. Fixing plumbing in your house or installing a new electrical outlet, hardly. In AI's current form it is nothing more than a glorified digitized encyclopedia, unable to think outside of the box or discern correct from incorrect information when challenged. Also, AI or computer models like black or white options and not grey areas, this is why still to this day fully autonomous vehicles are circus.

Source: youtube · Topic: AI Governance · Posted: 2025-09-06T22:5… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw39DMKWzAUsEK0WwV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnzU2gJMzs2tVjjbd4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwPFP2QPtthHIW9A9l4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyXTGCVpFaANSTrOLR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgylZPlcvYMeJ_uLbtF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxV7ktbr4k5Pee36IZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgymSbGDsG7pHWw34FJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxx3l1usEYhECc5ZIV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzzX4fKUeBMegcapM54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz669ndLl1zZ--znJp4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
```
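The lookup-by-comment-ID workflow above can be sketched in Python: parse the raw model response as JSON and index the coded records by their `id` field. The `index_codings` helper and the abridged two-record payload are illustrative assumptions, not part of the tool itself; the field names follow the coding schema shown in the table (responsibility, reasoning, policy, emotion).

```python
import json

# Abridged raw model response (two records taken from the batch above).
raw_response = '''
[
  {"id": "ytc_UgzzX4fKUeBMegcapM54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugz669ndLl1zZ--znJp4AaABAg", "responsibility": "unclear",
   "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"}
]
'''

def index_codings(payload: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgzzX4fKUeBMegcapM54AaABAg"]["policy"])  # industry_self
```

Indexing by ID makes joining a model's batch output back to the original comments a constant-time dictionary lookup, which matches how the coding-result panel above pairs one comment with its record.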