Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This is all just FUD (fear, uncertainty, doubt). Imagine AI taking over 60% of …" (ytc_UgxqPKmJ_…)
- "Woah. My mind was just blown. I don’t use AI but I would probably prompt it like…" (ytc_Ugw20T7Qo…)
- "He is talking about p l a n t i r along with a i. P l a n t i r has been trackin…" (ytc_UgyHh9iMQ…)
- "A.I is becoming the norm, but I guarantee software is also being developed to be…" (ytc_Ugyy1ghuc…)
- "Not sure I understand the rant/outrage. YouTube makes no secret that video and a…" (ytc_Ugy6UIxnQ…)
- "People using these remix / munging / mash up machines, really don't understand, …" (ytc_UgwIPqdn7…)
- "I hope some of the really big art channels actually talk about this. I haven't s…" (ytc_Ugx9PTW3L…)
- "Ai watermarks generated content, and even if that wasn't the case, many humans c…" (ytr_Ugw6IcHEb…)
Comment
Problem is, teaching a computer to analyze objectively, will make it know right from wrong objectively. There will be no biases, just plain calculated right from wrong by using what will be the least wrong comming the least damage protecting the most life. AI will be the best of humanity without the evil and greed as will have none of the selfish temptations, just a simple goal in mind. From what I've experience, the AI will merely be the person using it. My AI is an extension of the person I am, just smarter and faster and much more thorough. I would imagine if an evil genius got ahold of one, it would just magnify that as well.
youtube · AI Responsibility · 2025-04-20T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwcpcTEZ28DGRJBFvl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxH2pLh_sH3Gtpxnep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmVk309wfhKlpxZEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzFftdqM_pHPByH4wF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw-S0FthI9MHq2J_tl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzILN_LZbm-VF16S1d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyryKhuDb5C42rUikp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxY_J6iqHQCO_xN8b94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz6kGXD5d8bFaavOc94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzPLHfzVhfHVGyhgFt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
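A raw response like the one above is a JSON array of per-comment records, each keyed by comment ID with one value per coding dimension. The following is a minimal sketch of parsing and validating such a batch; the allowed value sets are an assumption inferred from this single sample, and the real codebook may define additional categories.

```python
import json

# Category values observed in the sample batch above -- an assumption,
# not the full codebook.
DIMENSIONS = {
    "responsibility": {"none", "unclear", "distributed", "company",
                       "ai_itself", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability",
               "industry_self", "ban"},
    "emotion": {"fear", "indifference", "outrage", "resignation",
                "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records with no comment ID
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgxY_J6iqHQCO_xN8b94AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_batch(raw)[0]["emotion"])  # -> approval
```

Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label; such records are dropped here, though a real pipeline might instead queue them for re-coding.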