Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Dear Prime Minister,
The United Kingdom is confronting a convergence of challen…
ytc_UgxxHx6pF…
Darker features are inherently harder to differentiate even by a human eye. No w…
rdc_jv6d7de
As a block paver let’s see a machine put down as many blocks as my team can as ,…
ytc_UgyRfjRMB…
Art is like signatures, each one is different. When I see a style I like and mim…
ytc_UgxfCuGer…
Customizing your resume per role is the biggest unlock. ATS ranks by keyword mat…
rdc_oadias8
My friend’s son was killed in January while driving a Tesla. I’m not sure if aut…
ytc_UgwuRx9Up…
Sophia looks like that one good robot out of thousands of evil ones that tries t…
ytc_UgydUZ9O1…
@aev6075 also, more specifically. A person who uses these models, is not an arti…
ytr_UgyQ72k0N…
Comment
Owning an AI model that is either cutting-edge or kinda sorta open-source, you'll get pushed out of the market if you charge a price and can't keep up with innovations.
Entertainment. Human interaction will always be king, it's more authentic. They are just tools, and they will remain as such forever, I'm 99% sure of this, people need people after all.
Abstract jobs that can't be properly taught to an AI, but this will fizzle out over time, it just won't happen in the next 5-10 years so you're pretty safe until then.
Security. AI have systems that are innately vulnerable to other AI. People introduce human error that AI seeks to replace, but we can't be screwed with in the same way. They'll always be vulnerable.
Innovators. Scientists and the like. We're not exactly developing AGI yet. We'll be needing innovation for a very long time. They'll be replaced eventually, but I can't give a timeframe, it's too far off. Likely not within your lifetime.
youtube
AI Governance
2026-03-02T19:3…
♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgxjL2hbNVlFRppXxSZ4AaABAg.AV0hqT-lAGnAV0k-ITFz7U","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxjL2hbNVlFRppXxSZ4AaABAg.AV0hqT-lAGnAV0ktY7EaY1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzr_dGza4U624ENI3t4AaABAg.AUOkMo84zuCAUOvTNu3P00","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxH_SBMuCxyMuigWuh4AaABAg.AUOeovR6QmnAUP21DLg9I4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgyOwyV98Nfe3smjXtF4AaABAg.AUNie2gvyycAUOsuy_79RK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugz0pPGuvZfQN61xP694AaABAg.AUNavLZQgdEAUOzCxq30cr","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugzn53Cj5QRmGewX4bp4AaABAg.AU8m26SnstaAUBezDUmN5A","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytr_Ugyd2ax3Q0c21Fthnsx4AaABAg.ATfDj4zlqFFATriCZ6E6lY","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzYg0IvYNNNsbQFuJ54AaABAg.ATcy0TUIQi8ATwcG6ej36D","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzOLZDQI3Lgsu5uAed4AaABAg.AT_eT1oevZBAT_mWM01xqt","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
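For reference, a minimal sketch of how a raw batch response like the one above can be indexed by comment ID for lookup. The variable names are illustrative, not part of the tool; the two entries are copied verbatim from the response above.

```python
import json

# Abbreviated raw batch response: two entries taken from the dump above.
raw = '''
[
  {"id": "ytr_Ugyd2ax3Q0c21Fthnsx4AaABAg.ATfDj4zlqFFATriCZ6E6lY",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgzYg0IvYNNNsbQFuJ54AaABAg.ATcy0TUIQi8ATwcG6ej36D",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]
'''

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw)}

entry = codes_by_id["ytr_Ugyd2ax3Q0c21Fthnsx4AaABAg.ATfDj4zlqFFATriCZ6E6lY"]
print(entry["policy"])  # industry_self
```

The first entry carries the same four dimension values shown in the Coding Result table (responsibility: none, reasoning: virtue, policy: industry_self, emotion: approval).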