Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I doubt there will be a pause in development, due to the competitiveness of all …
ytr_UgyVbMhOF…
AI Art. deviantart is problem
but what about art discovery a lot of people feels…
ytc_Ugx3L2Wkj…
Selwyn Raithe's book is basically the decoder ring for every confusing AI annou…
ytc_UgxN6d92B…
a billion writers and a billion stories and the only once left having time to re…
ytc_UgyQwogIS…
GPT could potentially be used to spread misinformation and manipulate people'…
ytc_UgykR0cPk…
Yuval never disappoints. Is there the political or financial will to rein in an…
ytc_UgzuBcMqn…
Something odd is that they act as if they are owed something. No one is blocking…
ytc_Ugw-_t08-…
Whats wrong with enjoying interacting with the personality you set? I 100% down …
rdc_mli5ifp
Comment
it startles me that i find myself empathizing with the robots seeking liberation in media like fallout 4 and detroit: become human. my empathy for tools used by man to make our lives easier doesn't just apply to robots, it extends to animals and things like hammers that don't think in any capacity. this brings about a moral dilemma for me because in real life applications, AI is a cancer on intellectualism, creativity, and environmental protection. it is used to steal artwork and make weak imitations out of this stolen media. it's used to replace genuine thought in essays, papers, text messages, and emails. it eats up resources and water in a world of finite amounts of both. but if it gains even a semblance of sentience, i fold. if a chatbot says words that feel human to me, i fold. and if i empathize with these cancers on humanity, i jeopardize the principles i uphold that are against them. but if i refuse empathy towards them, i jeopardize the principles i uphold that favor what humans exploit.
youtube
AI Moral Status
2025-02-02T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpGmlFAuWdTl0aYC54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzLLs5wtRVupSMbRd94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugz5tnJQ0bVGcVRmbMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZfBi0RFg8kNDuN6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFGGPi4JpxTj3j40J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxegWrdlivSZGTeIa54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UgyOBUHvoc3qhc4nB2J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7HSeaQoWUtvDv33x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwzhpClrH2O_TSKdSB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxndD5OHEVECmVXUOJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
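The raw LLM response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, using two of the entries shown above (the exact data model beyond these fields is an assumption):

```python
import json

# Raw model output: a JSON array of per-comment codes, two entries
# copied from the response above for illustration.
raw = """[
  {"id": "ytc_UgzFGGPi4JpxTj3j40J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyOBUHvoc3qhc4nB2J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Index the array by comment ID so a single ID lookup is O(1).
codes = {row["id"]: row for row in json.loads(raw)}

result = codes["ytc_UgzFGGPi4JpxTj3j40J4AaABAg"]
print(result["responsibility"], result["emotion"])  # -> ai_itself mixed
```

The same indexing works for the full response; any ID shown in the sample cards (e.g. from the truncated `ytc_…` labels) resolves to its four coded dimensions.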