Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Actually being polite the the AI costs millions of dollars per year and wastes a…" (ytc_Ugy2U2v6Z…)
- "Eyes open folks, OpenAI now has people going after anyone that wants to regulate…" (ytc_UgwkzobPd…)
- "That was the best information I've encountered so far; thank you. I suggest that…" (ytc_UgzV4bNvb…)
- "I wonder if all Tesla drivers know the AI cuts out one second before impact, mak…" (ytc_UgyWHtt-b…)
- "@MaarvaAndor I don't know if opting out is the solution. I think instead we need…" (ytr_UgwHKXiWc…)
- "I just saw a video where someone got second and a robot got first in an art cont…" (ytc_Ugw3bN8uf…)
- "I totally agree with Ameca's dark vision of the future. Human annihilation woul…" (ytc_UgxsI1_7u…)
- "The truth is: AI can sometimes reflect back the style or tone a user pushes it t…" (ytc_UgyvaqJ1a…)
Comment
"Trouble is, these are things that AI will no doubt cure in the future. They are just mistakes. How long will it take AI to solve its mistakes?"
Source: youtube
Posted: 2026-03-08T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzaA68kRWI60Y_4jN54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugz0G1Wh1lPwfoBLeXV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyL_FIvmM5_pUPAR-N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzmDpVPriV1E2PM60t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxt5pv3JhCwxJ56upt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzSXiZ3lBC3JyCSp_x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgymwKOjqUTHF4q1zKF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzbPdM2MPmzSFcuNNJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxyMJScwuYH5AntXGx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4n4y-rAYTTkFwOkZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
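A response in this shape can be checked before it is stored as a coding result. The sketch below is a minimal validator, assuming the four dimensions shown in the table above (responsibility, reasoning, policy, emotion); the allowed value sets are inferred from the values visible in this sample and may be incomplete.

```python
import json

# Value sets inferred from the sample response above; the real codebook
# may allow additional values (this is an assumption, not the tool's schema).
ALLOWED = {
    "responsibility": {"company", "user", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coding against the schema."""
    rows = json.loads(raw)
    for row in rows:
        # IDs in the sample start with ytc_ (comments) or ytr_ (replies).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_UgzSXiZ3lBC3JyCSp_x4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
rows = validate_codings(raw)
print(rows[0]["emotion"])  # approval
```

Rejecting malformed rows at parse time keeps out-of-schema values (a common LLM failure mode) from silently entering the coded dataset.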