Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Something all the drawings have in common is they're actually good unlike the ai…" (`ytc_UgzEdjqIf…`)
- "@nirorit papers have been published where ai was able to create its own training…" (`ytr_Ugz7kkmiy…`)
- "Assumption of "nieces" job at ~4:10 is wrong. Nieces company will be able to cl…" (`ytc_UgxjsMfAD…`)
- "Not that forehead popping 😂 man I would probably make a great robot. I got a hug…" (`ytc_UgyCFHgJS…`)
- "I think LLMs will continue to have a place, but when used for things like orders…" (`ytc_UgzlTtlIJ…`)
- "Friendly reminder that all major information sources are controlled by the C-I-A…" (`ytr_UgxT4R5Rh…`)
- "Everyone in here talking about robots taking over and forget we are going to evo…" (`ytc_Ugxe66xah…`)
- "When you find ai oc art on pinterest it's very common to find user comments say …" (`ytc_UgyhgzfVI…`)
Comment
People keep saying that everything with A.I. will be fine ultimately because everyone will just adapt to the new technology just like they did in past technological revolutions. But honestly, I think there is a huge fundamental difference between A.I. and previous technological innovations, and that difference is the fact that if humans do find ways of adapting to A.I. advancements all companies will have to do is feed those adaptations into their A.I. models which would make people's jobs obsolete all over again. The A.I. revolution could usher in a constant wave of repetitive human obsolescence. People would be completely at the mercy of the intentions of their employers, if they want to provide people with employment they can or if they just want to make higher profits you could wind up having mostly empty offices with just a tech guy to manage the hardware.
youtube
AI Jobs
2025-05-31T10:0…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgyHE9GXhj73FNDP-QF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0JfaidfVu995fsZd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6ng470YVKGQsanx14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxwS4-AE2HS_LDw6zt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQxhodNRslBrAcCkp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
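The coding result table above is derived from this raw JSON array: each element codes one comment, keyed by its comment ID, across the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, using an abbreviated two-entry array (the full IDs are taken from the raw response above; the helper name `lookup_coding` is hypothetical, not part of the tool):

```python
import json

# Abbreviated raw LLM response: a JSON array, one coding object per comment.
raw_response = '''[
  {"id": "ytc_Ugw0JfaidfVu995fsZd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy6ng470YVKGQsanx14AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return None

coding = lookup_coding(raw_response, "ytc_Ugw0JfaidfVu995fsZd4AaABAg")
# → {'responsibility': 'company', 'reasoning': 'consequentialist',
#    'policy': 'unclear', 'emotion': 'fear'}  — matches the table above
```

Defaulting a missing dimension to `"unclear"` mirrors how the table handles fields the model left ambiguous; an unknown ID simply returns `None` rather than raising.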