Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
| ID | Comment preview |
|---|---|
| ytc_Ugx4rIrl0… | Or, how about we stop blaming AI and just get the kids off the Internet. Don't p… |
| ytc_UgwnaOYYj… | **Title: The Beauty and Practicality of Mathematics** Mathematics, often regard… |
| ytc_UgzoEEyEw… | This has broken so many laws ...too many laws broken to count......Who the F$%K … |
| rdc_fjd5rz3 | I really do wonder if The Party knows that 1984 was originally intended as a war… |
| ytc_Ugy8RhW1-… | They're nearly ready boys, nearly ready. We're almost there! Just hope they come… |
| rdc_clv71ib | >Is there such a thing as a Korea town? Do we have one in Toronto? Have you … |
| ytc_UgyexZyJS… | Gradually, the corporations will come back for humans to recruit. Again, in a ci… |
| ytc_UgzMyrBXX… | AI slop has actually spurred me into getting better with art! I see AI and it fi… |
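The comment-ID lookup described at the top of this page can also be reproduced offline. Below is a minimal sketch: it assumes the raw batch responses are saved as JSON files on disk (the `raw_responses/` directory layout and the `index_raw_responses` helper are hypothetical; only the `id` field and the record shape come from the responses shown on this page).

```python
import json
from pathlib import Path


def index_raw_responses(response_dir: str) -> dict[str, dict]:
    """Index every coding record by comment ID.

    Assumes each file under response_dir holds one raw LLM batch
    response: a JSON array of objects like
    {"id": "...", "responsibility": "...", "reasoning": "...",
     "policy": "...", "emotion": "..."}.
    The on-disk layout is an assumption, not part of this tool.
    """
    index: dict[str, dict] = {}
    for path in Path(response_dir).glob("*.json"):
        for record in json.loads(path.read_text()):
            index[record["id"]] = record
    return index


# Usage: look up one of the sample IDs listed above.
index = index_raw_responses("raw_responses/")
print(index.get("rdc_fjd5rz3"))  # None if that batch isn't on disk
```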
Comment
At my former company they were all about AI and making sure the employees where all really well trained on AI and LLMs and could talk about all of this to show clients how we could help integrate AI into other companies so they wouldn't have to do it alone and /or miss the boat. I asked in a meeting once how we could improve AIs to not hallucinate and it was very obvious that not only did the higher ups did NOT want talk about that part of how AI's work (*or don't) but they where also just assuming it would be worked out in future versions somehow (but no one had any idea how to do this) also when they rolled out MS Copilot they came out with mandatory training that basically said don't use it unless you get everyone to agree it was OK for a task and then someone has to double/triple check the AIs work or that person was subject to being fired if mistakes when out .... so why would anyone even use it?!? a person can take meeting notes ONCE and not have anyone have to check for mistakes.
Source: youtube · Topic: AI Jobs · Posted: 2026-02-04T22:3… · ♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeyE6o39TqZJpByIh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwVO9O7SHblCSARunt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxqr13Blx9kQw29gIN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzCdTDVQBUXUixDwtp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwDlKAIMcgmi9dLK1B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwRIR0WMueAK3MBKl14AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxFWbYy-e77R7y2guB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz1OsP9B5FlkvO42m54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHt-GmEjUWORtx-yt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYN9LUF3_BZADBvP54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
```
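A minimal sketch of how one record in this array can be validated and rendered in the same two-column layout as the Coding Result table above. The value sets below are only the ones observed in this particular response, not necessarily the full codebook, and the `validate` / `as_table` helper names are hypothetical.

```python
import json

# Value sets observed in this one response; the real codebook may be larger.
OBSERVED = {
    "responsibility": {"company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"outrage", "indifference", "amusement", "mixed"},
}


def validate(record: dict) -> list[str]:
    """Return a list of problems with one coding record (empty if clean)."""
    problems = []
    for dim, allowed in OBSERVED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in {sorted(allowed)}")
    return problems


def as_table(record: dict) -> str:
    """Render a record in the same layout as the Coding Result table."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dim in OBSERVED:
        rows.append(f"| {dim.capitalize()} | {record[dim]} |")
    return "\n".join(rows)


# Usage with the first record from the response above.
raw = ('[{"id":"ytc_UgyeyE6o39TqZJpByIh4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
record = json.loads(raw)[0]
assert not validate(record)
print(as_table(record))
```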