Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.

Random samples:

- "If u want to see if it’s real or not AI always make the eyes expressionless that…" (ytc_UgxONwC1L…)
- "chatgpt gave me 3 completely different answers when i asked it to explain what g…" (rdc_jskk6er)
- "I wonder how long we going to listen to these stories before we actually do some…" (ytc_UgzMo9PMg…)
- "After watching 20s..I call fake.... GPT would never ask something like "What ins…" (ytc_Ugz4If08M…)
- "Love how the thumbnail of this has nothing to do with the content in the video. …" (ytc_UgweoTnhE…)
- "Schwartz explained he used ChatGPT because he thought it was a search engine and…" (ytr_UgxyKoB_u…)
- "ChatGPT: create me an article about a company that replaces its human workers wi…" (rdc_l9wduuj)
- "I thought for sure the blurry background was a tell tale sign it was AI generate…" (ytc_Ugwh9f4Kl…)
Comment
i'm not sure i agree, AI needs that last 20% of improvement to be REALLY useful. Anyone who uses AI knows how CONFIDENTLY incorrect it can be, and not some small % of the time. That 20% is going to get harder to get, when conventional LLM training becomes less effective, compounded by the fact that so much of the internet an AI scrapes is just other AI content.. LLM training becomes almost incestuous.
youtube · AI Governance · 2025-06-23T11:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw5OnOq1apyhxfSInd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx6GMwGphlc4zcAETB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyplHmgyL3envBc9i54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy3QhWnjyuD8NIAG1t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwWfjVYCJpa52r7Cb94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyR74bbngVvomG_UQh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwo5CEQJ8pbMaoUsq14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwi7uMkQj4_bJB2BeZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxtX2prwNhbxONBT9J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzbgJShS7Z7OsqVSm14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
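Because each model response is a JSON array of per-comment codes, looking up the coding for a given comment ID is a matter of parsing the array and indexing it by `id`. A minimal Python sketch, assuming the response parses cleanly as JSON (the variable names are illustrative, and the array here is truncated to two entries from the response above):

```python
import json

# Raw LLM response, truncated to two entries from the full array above.
raw_response = '''
[
  {"id": "ytc_UgxtX2prwNhbxONBT9J4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzbgJShS7Z7OsqVSm14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
'''

# Index the coded dimensions by comment ID for constant-time lookup.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

# Retrieve the coding for one comment, matching the Coding Result table above.
code = codes_by_id["ytc_UgxtX2prwNhbxONBT9J4AaABAg"]
print(code["responsibility"], code["emotion"])  # developer resignation
```

In practice a real pipeline would also validate each entry against the allowed codebook values, since model output can drift from the requested schema.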