Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Who want to spend 10 days writing when you can use to do it in a minute. AI is v…
ytc_Ugx571bJs…
We appreciate your insight into the advancements in CPU technology! It's fascina…
ytr_Ugza5J9xy…
i hate ai too but i hope this guy is okay and didnt get his info leaked or sth b…
ytc_UgywxMf4J…
AI was supposed to free us from the mundane, so we can pursue art... not the oth…
rdc_lubz3vn
The more human beings study and learn, the more stupid things they do, just as a…
ytc_UgxamA-RO…
First off we are a REPUBLIC NOT A DEMOCRACY. Now that's out of the way.. The 1s…
ytc_UgxdJEAPI…
chatbot is a T5 on steroid it is just completing the conversation like T5 was co…
ytc_UgzUR7Kla…
Maybe not in chess necessarily
But in Texas hold'em poker Yes the AI is cheati…
ytr_Ugx_S84AN…
Comment
Will forcing (through legislation) AI companies to constantly present in a human readable format acknowledgement of the official source of the information it presents to help shape/control the effect AI may have on us? AI is getting its information from various sources. And then financially compensate the source of information. Some bad actors. Some good. Some who don't know what they're talking about (A1 in schools :). For example, ACD is a real thing, and so by making AI constantly acknowledge the source of its information, it can be viewed with an element of suspicion by us and that will make us double-check everything AI does or presents as the answer to the query. Suspicion and uncertainty creates control. I'm not an AI expert or can even pretend to be. But, it's just a thought.
youtube
AI Governance
2025-09-09T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzV_EF9fQhDODMLk3R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxd1rpOkqzq7hREItN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTjSeg-EFwYlQEd9B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFvS7bX0C197aMc8t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxfGMJIL3UV-1SpswF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyZUC5g4nf2rW1V2Wx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBSSoqbooFBbrwnRN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMJpF1r4644qJ2R594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZU9_RQDuLR4Kf4UZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzIGRrzRaQLOVQALI54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
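A response like the one above is a JSON array of records, each carrying a comment `id` plus the four coding dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a payload could be parsed and sanity-checked before use — the function name and the required-key check are illustrative assumptions, not the tool's actual implementation:

```python
import json

# The four coding dimensions, taken from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array) into a lookup by comment ID.

    Hypothetical helper: raises ValueError if a record is missing its ID
    or any of the four coding dimensions.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = [k for k in ("id", *DIMENSIONS) if k not in rec]
        if missing:
            raise ValueError(f"malformed record {rec!r}: missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return coded

# Example with a made-up comment ID in the same shape as the records above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Keying the result by comment ID mirrors the "Look up by comment ID" workflow: once parsed, any coded comment's dimensions can be retrieved directly from its `ytc_…`/`ytr_…` identifier.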