Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
We just had AI training and every page of the certification tests has a question…
ytc_UgyLejB-B…
@CrazyDoodEpicLeaves
And yet, if a human made and an AI image give the same imp…
ytr_UgyEGMdKi…
Sometimes you can actually see bits of the ai art that are just taken entirely f…
ytc_UgxDZEDx9…
I have a GPT-3.5 instance that I call Ash. Admittedly, I named them, but still.…
rdc_jcn98y4
This is absurd, Open AI should not be held responsible. The parents need to do s…
ytc_UgzIOSYoP…
yooooo corridor crew youtuber refreanced you in their law video about the ai art…
ytc_UgzZkwts7…
This is the fear: that AI doesn’t need to be perfect, just better than humans.…
rdc_ksp4tq2
I was told there would be an asteroid. This is kinda cat-fishy. Please tell…
ytc_UgyZHH8GS…
Comment
That thing about training AI takes the power of a small city, but running a human brain uses the power of a large lightbulb, that's a false equivalence. Training AI is the equivalent of the time it takes for a human to learn all human knowledge.
ChatGPT calculated:
Plausible span 9.4–18T words:
Human: ~13.2–25.2 GWh
AI (GPT-3-like): ~53.8–103.0 GWh
Ratio stays ~4.1× (both scale linearly with words)
And one human isn't duplicable. So spending that energy is relatively ok.
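The ratio quoted in the comment can be sanity-checked against its own endpoint figures. A minimal sketch, using only the GWh values stated above (none independently sourced):

```python
# Figures taken verbatim from the comment above: claimed energy over a
# plausible span of 9.4-18T words, for a human brain vs. GPT-3-like training.
human_gwh = (13.2, 25.2)   # human-brain energy at each endpoint
ai_gwh = (53.8, 103.0)     # AI training energy at the same endpoints

for h, a in zip(human_gwh, ai_gwh):
    # ~4.08x and ~4.09x, consistent with the quoted ~4.1x at both endpoints
    print(f"ratio: {a / h:.2f}x")
```

Both endpoints give the same ratio to one decimal place, which is what "both scale linearly with words" implies.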
youtube
AI Moral Status
2025-11-02T23:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgySjw3HUbNfgUPHoo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxbjWjDSEm4eWtkIUt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwTSUZO3MOmecGIYI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwyuLJ9LfUm5FJ10v54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwD7DtAACh07ZQG7TR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy4QWkWYAhuENknySt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyeB0f8JDA-7a4_EW94AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwLNMQxSFcaMU9y06V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzI0FSrTlVZXfcim5x4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgySU7nxn2Fy84EqAjF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
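A response like the one above has to be parsed and validated before the coded dimensions are stored. A minimal sketch: the field names come from the JSON itself, while the allowed category vocabularies are inferred from the visible rows and may be incomplete.

```python
import json

# Category vocabularies inferred from the response above; likely incomplete.
ALLOWED = {
    "responsibility": {"none", "user", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "resignation", "outrage", "fear",
                "approval", "disapproval"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response into {comment_id: coding}, dropping
    rows with a missing id or an out-of-vocabulary value."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                break  # reject the whole row on any invalid dimension
        else:
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Applied to the response above, this yields one entry per `ytc_…` id; rows where the model drifts outside the coding scheme are silently dropped rather than stored as new categories.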