Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Yeah, let's take the fight to them, ai has it's uses as a tool, in research proj… (ytc_UgzDWT2fe…)
@25:05 SMR states that it seems when u scale AI to a certain size the property o… (ytc_Ugz_I1xGv…)
I don’t get why this is a bad thing. No one wants to work in an office. No one w… (ytc_UgwfRQ84J…)
Stopping development just short of where AI deserves rights runs into the proble… (ytc_UgzX9kson…)
So here’s the questions… even if you must have a driver you know the company won… (ytc_UgzEd5S6-…)
Yet AI would kill an employee to keep on functioning: https://m.youtube.com/watc… (ytc_UgzqC6Bx7…)
what if you draw something you made yourself and then feed it to the ai?… (ytc_UgxlXXPqq…)
Ok listen people Ai is amazing and the truth of the matter is simple a gun can b… (ytc_Ugw4KlCqG…)
Comment
I come from Germany, I am not a professor or anything like that, and yet I explicitly WARN you! The problem is not AI, but rather the humans who think they can set limits for it! There may be clever people who program such things and give them the algorithm, but an AI in over 20 years will be, no matter how clever we humans think we are, far more than a hundred thousand times more intelligent than a human could ever be. Anyone who thinks they can control this power with security measures etc. should be aware that this is only an illusion, because an AI with so much knowledge knows a hundred thousand times better how to circumvent such precautions. The programmer would have to think faster than the AI, and that won't happen. But even if they could, a human will make mistakes, and the AI will thank them for it!
youtube
AI Governance
2025-11-30T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzV0NQNbToUAFGDaRR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzdpvSFnTKLTXC52WF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxxTL9Sl8VRFBaDebx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgykMDMo5L9w-gudDu94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9DzUM0O8KxBWL-FN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwsRyIeWf9C7TkWw7p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgziNocEwHxNMaVhb7V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwiB9opri70Vayv-Zt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwOwJRk0Cicnb9xylJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxaKAdsIXtp2QO309p4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
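A raw response like the one above has to be parsed and checked against the coding scheme before the per-comment results can be displayed. Below is a minimal Python sketch of such a validation step; the `ALLOWED` vocabulary is an assumption inferred only from the values visible in the table and JSON above and may be incomplete, and `parse_coding` is a hypothetical helper, not part of any named library.

```python
import json

# Allowed values per coding dimension — ASSUMPTION: inferred from the
# values visible in the coding table and raw response above; the real
# codebook may define more categories.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records that have
    a comment id with the expected prefix and a known value for every
    coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            continue  # drop records without a YouTube comment id
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records with an out-of-vocabulary value are dropped rather than coerced, so a malformed batch surfaces as a shorter result list instead of silently corrupting the coded dimensions.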