Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
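For batch work outside the UI, the same lookup can be scripted. A minimal sketch, assuming the codings are exported as a JSON Lines file with one record per comment; the path `coded_comments.jsonl` and the record layout are assumptions, not this tool's actual storage:

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the coded record for a comment ID, or None if absent.

    IDs shown on this page are truncated, so match by prefix after
    stripping a trailing ellipsis. (The export format is assumed.)
    """
    prefix = comment_id.rstrip("…")
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if record["id"].startswith(prefix):
                return record
    return None
```

Given such an export, `lookup_comment("ytc_Ugw7qlOKQ…")` would match the first random sample below.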
Random samples (click to inspect):
- "Sam Altman ruined AI. AI could have been developed the right way just like the i…" (ytc_Ugw7qlOKQ…)
- "What AI does isnt illegal cause if you read the fine prints on all contracts YOU…" (ytc_Ugz08O_z3…)
- "Yes, but its robots need humans, even without great intelligence…" (ytr_Ugx0LQxyM…)
- "@joeyhernandez3865 Thank you for your comment, Joey! You've caught us red-handed…" (ytr_UgwC5Xs1w…)
- "This is probably an AI generated psyops to make feel there is less threat while …" (ytc_UgxzoT13q…)
- "@cleve741 AI image models are here, out of the box, and no regulation will make …" (ytr_UgynWNMSn…)
- "It's words - it's language - it's all we have to communicate with them. You are…" (ytc_UgzpA5KA-…)
- "I’m a trainee solicitor and I’m feeling this video in my soul. I’m also old and …" (ytc_Ugy8NTVOr…)
Comment
It's amazing how this group can flirt with the utmost edges of philosophy and existentialism without any reference to an objective standard by which we derive human ontology. Without such a standard, philosophically speaking, there is absolutely nothing different between us and the rocks that we trample on, let alone AI that apes humanity. Without us as Image-Bearers of the Divine, we don't get to view ourselves as the sole bearer of rights, since such a viewpoint is as arbitrary as proclaiming your favorite ice cream flavor as the objectively best flavor. A standard is needed, and a conversation void of such a standard is philosophically baseless.
I appreciate the news on AI and the technical discussions - these facets are superb! The philosophy is abysmal, however.
youtube · 2026-02-07T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwoXsJ8CyjpeEBxVzx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw2BeeWtYDTXDgD6jl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJy525o4uk1w82cuN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgznHUSheQH6F3n7Ax14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxFFabcKC_5Z6HbKD94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz8hfacgTX1MD5xG-J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwsEEpJ8IufH9nmqW94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzlzf_pNsr_xM91t7d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzsxJyXfmmOllUhnDB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw2D9w2kKvEuv8D39p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
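Each raw response is a plain JSON array with one object per coded comment, so it can be re-parsed and sanity-checked downstream. A minimal sketch (Python 3.9+); the `ALLOWED` value sets are inferred only from the examples on this page and are likely incomplete:

```python
import json

# Dimension labels seen in the examples above; the real coding
# scheme may define more (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def parse_raw_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM response (a JSON array of coded comments)
    into {comment_id: {dimension: value}}, flagging unexpected labels."""
    coded = {}
    for rec in json.loads(raw):
        rec = dict(rec)          # work on a copy
        cid = rec.pop("id")
        for dim, value in rec.items():
            if dim in ALLOWED and value not in ALLOWED[dim]:
                print(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = rec
    return coded
```

Out-of-set values are only flagged, not rejected, since the actual scheme may legitimately contain labels that do not appear in this sample.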