Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coded comment by its ID.
Random samples:
- "When a human copies art styles they on some level understand what they are doing…" (ytc_Ugxhs6wzV…)
- "Remember: 100% match does not mean the faces are identical, it means the AI is 1…" (ytc_UgxV3_UsQ…)
- "Dave not only doesn’t understand shape or size of earth. He completely doesn’t g…" (ytc_UgyTwl9vL…)
- "AI definitely has its spots, but it shouldn’t be used as a replacement to creati…" (ytr_UgwVtLcf1…)
- "AI models are literally just really insanely complicated matrix math, it is not …" (rdc_nnpmqk2)
- "The missile warning system is completely automated, and triggered by sensors in …" (ytc_UgxAz5AOX…)
- "I’m scared for artists I’m the near future / AI is gonna take over their job if so…" (ytc_Ugzjb5kYP…)
- "Your choice People. keep retraining yourself and getting laid off my automated…" (ytc_UgjdZaWZ1…)
Comment
The real problem isn't that large language models suck at maths, and they still do if you make it hard enough. The problem is that mistakes can pop up at any time and really screw up the student's mind. Perhaps if you mix it up with other tools, such as wolframalpha, that also include AI but aren't language models and actually solve maths problems in a reliable way.
Source: youtube · Posted: 2023-05-03T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzMcVWcuj37XNNOm5N4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNigIDVXtNvxRMh3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwSpNiUSFVD6QnWuLt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_99ZbqxjFEKC4v994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzck4DZLQq0NQKzCTJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxU18bFy542mRqZH814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzTtwJDgT_HKA8MBWB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyiGy-bhzfoeRZupqt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZOMowWJKqNxiR8el4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxXDPvbAm27QMzK3Hh4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
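The raw response above is a JSON array of per-comment codings, each carrying the same dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how a coding could be looked up by comment ID from such a response — the `lookup_coding` helper is hypothetical, not part of the tool; only the field names and sample IDs come from the JSON above:

```python
import json
from typing import Optional


def lookup_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response (a JSON array of coding objects) and
    return the coding dict whose "id" matches, or None if absent."""
    codings = json.loads(raw_response)
    return next((c for c in codings if c.get("id") == comment_id), None)


# Two entries copied from the raw response shown above.
raw = """[
  {"id": "ytc_Ugzck4DZLQq0NQKzCTJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzTtwJDgT_HKA8MBWB4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]"""

coding = lookup_coding(raw, "ytc_Ugzck4DZLQq0NQKzCTJ4AaABAg")
# coding matches the Coding Result table: responsibility "developer",
# reasoning "consequentialist", policy "none", emotion "mixed".
```

An ID not present in the array simply yields `None`, which is why the helper uses `next(..., None)` rather than indexing.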