Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "More automation means no jobs. This is the kind of thing that makes people talk …" — ytc_UgzZ8Qn0b…
- "AI model to do all the illegal shit you want to do. But it's legal now since it'…" — ytc_UgwrsKU_I…
- "Id like to have something like that but for music. too many crappy AI bands out …" — ytc_Ugxqqyr9T…
- "*AI tools are getting so expensive yet consumers feel poorer raising doubts whet…" — ytc_UgwMTM9IM…
- "Ngl Chatgpt is relatable sometimes I just need to talk about myself in that way …" — ytr_UgydGQ7P8…
- "Microsoft Fired NO LOST, TO MICROFT 1. PRIVACY ! 2. NO ADS 3. NO TO CLO…" — ytc_Ugx7YdoNN…
- "The beginning of the program promised very much but then it focused too much to …" — ytc_Ugx40mE0R…
- "I want robots with great AI to do my chores and keep my house tight and clean so…" — ytc_Ugz4QcVzU…
Comment
The problem is, people who are implementing AI in businesses don't understand the full scope of their workflows, nor do they test a wide enough range of edge cases. Everyone is in a hurry to cash in on the AI hype or "reduce costs" they forget that they are potentially sacrificing long-term value for short term gain. You really gotta test everything, and roll them out in phases. And never stop stress testing!
youtube · AI Responsibility · 2025-10-23T20:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzBkyU9T5LGi9bjIfx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyudu-jqBQr_ExeCPN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxg2En8dy3i47chFSp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwLGIN4BUeL7zecXwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLlPtUZduKZBfKAI14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFN6YuTaZnKhocDG54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWISOIKaIdJ5a58F94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzxz-CDIUujnKURggh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzXFZj9VGXsfczMb-J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAIPJjJGk9JcxeZuZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
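A minimal sketch of how a raw response in this shape can be parsed and then looked up by comment ID. The `index_codes` helper is hypothetical (not part of any tool shown here), and the `raw_response` string below is a two-record excerpt of the array above; it assumes the model output is a valid JSON array where each record carries an `id` plus the four coded dimensions.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment coding records.
raw_response = """
[
  {"id": "ytc_UgxWISOIKaIdJ5a58F94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzXFZj9VGXsfczMb-J4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse the model output and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
# Looking up the coded comment shown above by its ID:
print(codes["ytc_UgxWISOIKaIdJ5a58F94AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes the per-comment lookup shown on this page an O(1) dictionary access rather than a scan of the array.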