Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
okay, good bye humanity lol...unless we figure out a way to decode ai images and…
rdc_lgbgzoj
This is all based on the premise that there will be one AI that goes sentient. I…
ytc_Ugypybs6o…
Thanks for your comment, @user-qo9sh2yv7c! I'm sorry if the Russian robot video …
ytr_UgwtHhoaT…
AI isn’t yet ready for many corporate and company uses. There are many cases whe…
ytc_UgwGvMj1O…
You helped push the AI, humanoid race. Now it’s to late, we are heading to AGI …
ytc_Ugxds5n4f…
Chat gbt & all public facing so called Ai is a crock! Nothing but signal weight …
ytc_UgxZJYR4Q…
Too bad this AI guy leaks political bias in his rationality. It makes me questio…
ytc_UgwoDVBef…
So what is their plan for when money is no longer an incentive for government co…
ytc_Ugyn3UNtq…
Comment
I'm a software engineer and believe the problem with AI is the method of training. If AI is ever going to be useful and truthful it needs to be explicitly trained by people and not mass data sets; Even then, it will have errors because we obviously are not perfect.
youtube
AI Responsibility
2025-09-30T14:2…
♥ 27
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyzT_NQe0bhTQkvlvJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz0tn6IyhPkq8H0wPt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7Fp87szwgyRAkAT14AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugx8ne7XJR5ienUB_Zd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyv9NSVyokCGONd4Ll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwWq1juerZS8PnNulp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugz4fM64RHADv8u-Rd94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzy3WPcZp2oWgx95pp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgytCwYRlrKREc0DGgF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxwE2WEEY16fcdlCJt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
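The raw response is a JSON array with one object per comment, keyed by `id` and carrying the four coded dimensions. A minimal sketch of the look-up-by-comment-ID step, assuming only the field names visible in the response above (`lookup_coding` and the truncated sample data are illustrative, not part of the actual pipeline):

```python
import json

# Two rows reproduced from the raw LLM response shown above.
raw_response = '''[
  {"id":"ytc_UgwWq1juerZS8PnNulp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxwE2WEEY16fcdlCJt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coded dimensions for one comment ID."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            # Keep only the expected dimensions; default missing ones to "unclear".
            return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return None  # ID not present in this batch

coding = lookup_coding(raw_response, "ytc_UgwWq1juerZS8PnNulp4AaABAg")
print(coding)
# {'responsibility': 'developer', 'reasoning': 'deontological', 'policy': 'liability', 'emotion': 'approval'}
```

Matching the returned dimensions against the Coding Result table for a sampled comment is a quick way to verify the parsed output agrees with what the dashboard displays.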