Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "What is scary is that a lot of AI is trained on liberally slanted bias.…" (ytc_Ugy9OUi7Q…)
- "This is great arguments. Ai is learning the same way a person does it. Why is an…" (ytc_UgzYOGyXV…)
- "Well the feeling is mutual, AI "artists". Why do they care so much when it takes…" (ytc_UgyOmOqm_…)
- "@MarkCrawford-c5r Not just a camera, but also a keyboard as well. When photogra…" (ytr_Ugx-kWqwz…)
- "I dont think we need more advanced robot to replace human from doing their work.…" (ytc_UgxAa4Al4…)
- "Until an AI is sapient and capable of having its own independent thoughts and no…" (ytc_Ugyat21Uq…)
- "I tried asking AI for a "triangle thats yellow and has a black hat with a brick …" (ytc_UgxBrottu…)
- "Either the first robot is the only one that still looks like a robot or that fir…" (ytc_UgzWFqyJb…)
Comment
AI appears to be an enormous pattern recognition system, which then regurgitates the pattern which best fits the query words. It makes plenty of mistakes. What happens when AI is trained on the stuff generated by mistaken AI systems? Will they judge the incorrect nonsense to be as valid as real facts? Will this lead to an ongoing deterioration of the collective knowledge we have built over centuries? It seems to me they are an entropy multiplier leading to the destruction of truth.
Source: youtube · AI Moral Status · 2025-05-13T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_Ugz5k4Hct7vr3EQuoKd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyct773hIidIs57AiN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzzfqyrY3LLHLbBcIV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_aGdijan5GOkRznR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugzpw2jUHzp1wFz4n9V4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy1oiTsyHIOZwRdZO14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKnxSXLU8StZXthb14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyj2GeXL1vjfCSD6VB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzY6hTRnr9VV65Y0fN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxjGf1Cc3t3Gu6OKzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
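The lookup step behind this view can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment codes) and index it by comment ID. This is a minimal sketch, not the tool's actual implementation; the `raw_response` string below is a two-entry subset of the response shown above, and all variable names are illustrative.

```python
import json

# Subset of the raw LLM response above (a JSON array of per-comment codes).
raw_response = """
[
{"id":"ytc_Ugz5k4Hct7vr3EQuoKd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzfqyrY3LLHLbBcIV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coded dimensions for one comment.
code = codes_by_id["ytc_UgzzfqyrY3LLHLbBcIV4AaABAg"]
print(code["responsibility"], code["policy"], code["emotion"])  # distributed regulate fear
```

In practice a malformed model response would make `json.loads` raise `json.JSONDecodeError`, so a real pipeline would want error handling around the parse.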