Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Wrong about books and futuristic movies...HAL 9000 is a fictional artificial int…" (ytc_UgyIzDmLH…)
- "Maam I m new to your channel and I need to tell you that you are such a good art…" (ytc_UgxyKci__…)
- "If people want to use AI in their work, that's fine. It's just needs to be comun…" (ytc_Ugz0qR4-4…)
- "The Google and Cisco people are dead wrong. The Internet didn't make video renta…" (ytc_UgxYxtSfD…)
- "A.I. is like a system. You can't get more energy out of a system than what you p…" (ytc_UgzEbZeE-…)
- "I disagree. AI is a tool. Spend time working with it to get it to produce the pi…" (ytc_Ugzcn1l4t…)
- "There is no argument there / All parties are right / Prompting is thinking and expl…" (ytc_UgyhIbazr…)
- "Then if that's the case they need funding to get an actual accurate facial recog…" (ytc_Ugzrr9cZT…)
Comment
AI is the invention of man. It's programmable. Whatever we program it to do it will do. It like the law. If there was not any laws what humans would have done and they can do. Now AI are simulated human beings. If laws and restrictions are not put in all aspects of AI what will they do. As simple as that, there should ve modules in AI which restricts them from doing this or that as if they do those things they shall be auto destroyed. These are programmable things.
Platform: youtube
Topic: AI Governance
Posted: 2025-05-28T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxgKfOaHDdN_rcNqd94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwU6jUHAwTtkwX2Af54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzsMY8cOXXACkmDZ0p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyUBASy2QNqZQdPjTx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw1NCsO0rfG5cF3MUl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyeNcbn6-8d7eBt-YR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugx3Ni37noR36ZwlllZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugxi0LEFrF4YxnhSde14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxMPZJ7fEqviJQs5Sp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYSmgK1SIqJBjVM_l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
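The raw response is a JSON array of one record per comment, each carrying the four coded dimensions. A minimal sketch of parsing, validating, and looking up a record by comment ID follows; `parse_codings` and `SCHEMA` are hypothetical names (not the dashboard's actual code), and the allowed label sets are inferred only from the values visible in this sample, so a real validator would use the full codebook.

```python
import json

# Allowed labels per dimension, inferred from the sample responses above.
# Hypothetical name; the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index validated records by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# One record copied from the response shown on this page.
raw = """[{"id":"ytc_UgzsMY8cOXXACkmDZ0p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}]"""
coded = parse_codings(raw)
print(coded["ytc_UgzsMY8cOXXACkmDZ0p4AaABAg"]["policy"])  # regulate
```

Rejecting any record with an out-of-schema label, rather than silently keeping it, is what makes a "Coded at" result like the table above trustworthy for downstream analysis.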