Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This is using AI for good. I love taking advantage of technology for good instea…
ytc_Ugxx0uX6J…
How does AI expect us to pay $30 a month if we are jobless or extinct?…
ytc_UgzzKdH9T…
If nothing else we need to make sure Ai understands that it needs the resources …
ytc_UgyEvDzZH…
@ilghizagreed also chatgpt is connected to the internet aswell so there’s always…
ytr_UgzmbNhbX…
Think about this -- AI reflects back to us -- us -- So if you get a rather viole…
ytc_UgwbBxlA6…
Well I mean AI those use all the info it has to determine which is “best”. So if…
ytc_UgzQqsdxQ…
*did* you read it though? the book seems to mostly be a series of prompt-driven …
ytc_UgxAzGbKN…
It's still the human's responsibility to review AI code and make sure it is reas…
ytr_Ugx3BDoOR…
Comment
Still surprised that a “computer scientist” like this dude doesn’t understand the difference between understanding and computing. These LLMs are computing huge amounts of data, but they don’t understand what they’re doing. And they won’t, at least in the foreseeable future, unless a new biological or technological breakthrough emerges. Eventually these dudes need to sell their books and views and impression.
youtube · AI Governance · 2026-02-04T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwyspHRqbJXowMW6M94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxg4Rer9Ff1382aJJJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxygBpCoO9vXQdzmcx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyHI0tF9tFUCf2jxXJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwrvxe4VX0mQjc-H2p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6arS0GxlHyAWraHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-Lta3lGb0aseLSwh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz9g_9bN0brwlwhHlp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVSO-HdzPSBD1HRWB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxB7vxARmXHHmi4qYl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
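A look-up by comment ID amounts to parsing the raw response array and indexing it by `id`. A minimal sketch in Python, assuming only the dimension vocabulary visible in the responses on this page (the actual codebook may define additional values):

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above -- an assumption, not the authoritative codebook.
DIMENSIONS = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    lookup table keyed by comment ID, validating each dimension value."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Usage: look up the comment coded developer/deontological/none/outrage above.
raw = ('[{"id":"ytc_Ugwrvxe4VX0mQjc-H2p4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
coded = parse_raw_response(raw)
print(coded["ytc_Ugwrvxe4VX0mQjc-H2p4AaABAg"]["emotion"])  # outrage
```

Validating at parse time is what makes a raw-response inspector like this useful: any coded comment whose values fall outside the schema surfaces immediately rather than silently skewing downstream tallies.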