Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
> I imagine alot of the AI stuff would be closed systems which would make it …
rdc_ohvoggo
Any shared profession when asked basic stuff will answer similary.
But "AI" Art…
ytc_UgwvI764H…
It's not ceiling fans. There are none here. It's the stand up fans that oscilla…
rdc_clv6v89
0:45 As much as I hate generative ai videos and images. This at least has its ow…
ytc_UgwHZe_ut…
After listening, probably the mos informative info I’ve heard, and ..if you are…
ytc_UgzI5z4up…
I will consider AI ethical when it either:
1: requires an entire conversation in…
ytc_UgxrVTlRD…
AI. is the future. Sorry to the fellow who got crawled. AI learns from.many diff…
ytc_UgzQr3mla…
There is a tab called web where you just get the 10 blue links and no ai. You ha…
rdc_n8keiho
Comment
Not to hate Max on these, but honestly it's tiring to always see contents putting AI in a bad light. Associating it with fear and job insecurity. While these can be a worst case scenario in a dystopian era, it seems we fail to see that any form of work would still in essence need humanity. No matter how calculated and data fed AI is, it will never be 100% humanize. It will never undergo life and gain experience from life like humans. At most AI when refined will only be a helpful tool for us humans to do their jobs.
Like can you imagine trusting your health to an AI alone instead of a human doctor? Why is it that customer chat support would still have an option to talk with a "live agent?"
Humans would still be needed to facilitate the use of these AI inventions.
The question should not be if AI will replace us, but rather how AI will reshape and refine our jobs.
youtube
AI Jobs
2025-09-09T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgySSJNVlHJHT30My8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTIekQLv0Hy2wFzjd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2DyKGcfxeSrnGeS54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwFdjjTas6FvulgmV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3nCOrvgPPOGaj0vJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWBszzc6apaVYcCa54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2AXzQTwiqTDVn_u94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxaSUulL-3XErjeIRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKAe5lvbu0bmJxK114AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
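A minimal sketch of how a raw response batch like the one above could be parsed and validated before use. The allowed values per dimension are inferred from the sample output shown here, not from an exhaustive codebook, and the two-record `raw` string is an illustrative excerpt:

```python
import json

# Dimension vocabularies inferred from the sample batch above
# (assumption: the real codebook may allow additional values).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "virtue", "deontological", "consequentialist"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "indifference", "outrage", "mixed", "fear"},
}

# Illustrative two-record excerpt of a raw LLM response
raw = '''[
  {"id":"ytc_UgySSJNVlHJHT30My8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]'''

def parse_codes(text):
    """Parse a raw response batch and reject any out-of-vocabulary code."""
    records = json.loads(text)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

codes = parse_codes(raw)
print(len(codes))  # 2
```

Validating against a fixed vocabulary catches the most common failure mode of model-generated codes: a value outside the schema (a typo or an invented label) fails loudly instead of silently entering the dataset.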