Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
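If the coded responses are exported, the same lookup can be reproduced offline. Below is a minimal sketch, assuming the export is a JSON array of records that each carry an `id` field (the file name `raw_llm_responses.json` and the helper are hypothetical; the full comment ID used in the example is taken from the raw response shown at the bottom of this page):

```python
import json

def load_responses(path: str) -> dict[str, dict]:
    """Index exported raw-response records by comment ID (assumed export format)."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # expected: a JSON array like the one under "Raw LLM Response"
    return {rec["id"]: rec for rec in records}

# Hypothetical usage: look up one coded comment by its full ID.
responses = load_responses("raw_llm_responses.json")
print(responses.get("ytc_UgwGrjEBSDff3mJS1Xp4AaABAg"))
```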
Random samples — click to inspect
- Personal bias and private company policy will be hard coded into AI to set param… (ytc_UgzHkvxsQ…)
- “Ai art is good bc you don’t need to be born with artistic talent or blue blood … (ytc_UgwazMpeF…)
- That's cool but why does this need to be a full on conversation when this intera… (ytc_Ugzq6MLb_…)
- Doctors are already consulting AI on things like X-rays. Granted they never out … (ytc_UgyRJPucn…)
- Should be a 0 tolerance policy towards stuff like this, both cops should lose th… (ytc_UgxJ_Oq9P…)
- But the robot can’t get killed / Safe the people don’t give no robot a gun please… (ytc_Ugyy2_Fr9…)
- Well before the world ends can I get get a AI humanoid version of Wonder woman l… (ytc_UgwscfR77…)
- You can *kinda* tell by the dead looks of the faces. AI often stares at ya in a … (ytc_UgzJsTr1s…)
Comment
> Don't beet against "AI"?
> Actually, what in videos says already exists in many domain.
> Correct current Chess AI algorithm method has been surpass superhuman domain, can run efficiently in low-end machine like phone or raspibery. But incorrect naive approach like "Deep Blue" only stuck in level "Kasparov" level intelligence with masive insfrastructure computation and masive data, not superinteligence level.
> Current "AI" like LLM today, is like Deep Blue, naive aproach "Fine Tune" with masive supercomputer computation, not even close to superintelligence level.
> And in the end, IBM not inventing NEW superintelligence, stuck in the past "Deep Blue" pride.
youtube · AI Responsibility · 2025-10-28T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
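For downstream processing it can help to treat each coding result as a typed record. Below is a minimal sketch of such a record, assuming the four dimensions shown in the table; the value sets in the comments are only those observed in the sample response further down, not an exhaustive vocabulary, and the class name is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str  # observed values: none, developer, government, ai_itself
    reasoning: str       # observed values: consequentialist, deontological, contractualist, mixed, unclear
    policy: str          # observed values: none, regulate, ban
    emotion: str         # observed values: indifference, outrage, fear, mixed, approval
    coded_at: Optional[str] = None  # ISO timestamp, e.g. "2026-04-27T06:26:44.938723"

# The row shown above, as a record.
example = CodingResult(
    id="ytc_UgwGrjEBSDff3mJS1Xp4AaABAg",
    responsibility="none",
    reasoning="consequentialist",
    policy="none",
    emotion="indifference",
    coded_at="2026-04-27T06:26:44.938723",
)
```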
Raw LLM Response
```json
[
  {"id":"ytc_UgwGrjEBSDff3mJS1Xp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzt-0Ny5geqbG1VNmN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz0PxcoFMikfAWUJEF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyXWh_3ZqyOZ3OA7jl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZdHFj3tdT5ZzwWd94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxm-0t2jsES6YOq91h4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyxKl4_xMMyy9svw3h4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyhfIkhYlQELOec2914AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHtMNaLErqJXzNwmt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyXyGV9CjQ3MLuorqp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
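A batch response like this can be checked before it is accepted: every record should carry the same five keys, and the comment shown on this page should be retrievable by its ID. Below is a minimal validation sketch, assuming the array above is available as a string (the function name and the `raw_response` variable are hypothetical):

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate_batch(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index it by comment ID, rejecting malformed records."""
    records = json.loads(raw)
    indexed: dict[str, dict] = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing keys: {missing}")
        indexed[rec["id"]] = rec
    return indexed

# Hypothetical usage, with the array above pasted into raw_response:
# coded = validate_batch(raw_response)
# coded["ytc_UgwGrjEBSDff3mJS1Xp4AaABAg"]["emotion"]  # -> "indifference"
```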