Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "From the 'evidence' presented by Blake here and in other interviews, I'm not con…" (ytc_UgzYraMYI…)
- "This is so far away from being doomed. All such clips can easily be recognised a…" (ytc_UgxiHbDZS…)
- "AI would basically be the one to discover every possible loop holes that doctors…" (ytc_UgzY79F7i…)
- "The only real reason to use AI for reference is for inspiration of some kind, or…" (ytc_UgxFELT6i…)
- "What would Humans get out of making a robot arm sentient? What would Humans get…" (ytc_UgjrHFVNf…)
- "They make robot to increase their business productivity, it's not for people ben…" (ytc_Ugyh0xDKw…)
- "Personalized AI in the pocket for every one? One for Trump and another for North…" (ytc_UgxbNaEdb…)
- "no but like you don't get they get no game so they have to talk to chatgpt to ge…" (ytc_Ugwd8U__3…)
Comment
Today I have received a link to this video (of 2 months ago). What is often missing, the governance. How much cost for me and for the planet to run all those models and technologies associated, to do the same that without it. For example, automated cars: I am able to drive, what is the extra cost for me and for the planet, to have automated driver for my car. Is not in the analysis. Then how u could decide ? With only direct costs ? I do not think so. My 1cent is that the reason some people is betting against AI still. Because the risk of not be able to afford AI in the big scale.
youtube · AI Responsibility · 2025-12-15T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgzJD4677wXn6ZZa2BJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_813MxAtv1gyK4u94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNAd02qBx7Noc0mrF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwXUSXxGlVLzkXcoG54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz3CmBvEbqmY9qZ6D54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwtnqL2wcNYfTPSUgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyGTmI9WYL0ou-ANXp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzLdppqLlP8mQaAQyN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwYZrYtNmu4CTLbu6F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFo5pY00-f8IVodaJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
```
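The raw response is a JSON array with one coding object per comment, so looking a coding up by comment ID reduces to parsing and indexing the array. A minimal sketch, assuming the response string is valid JSON as shown above (the `raw` excerpt below contains one real entry from the output; the variable names are illustrative, not the tool's actual API):

```python
import json

# Illustrative excerpt of a raw model response: a JSON array with one
# coding object per comment (one entry taken from the output above).
raw = """[
  {"id": "ytc_UgwXUSXxGlVLzkXcoG54AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "mixed"}
]"""

# Index the codings by comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgwXUSXxGlVLzkXcoG54AaABAg"]
print(coding["responsibility"], coding["policy"])  # distributed liability
```

The same index can back the "Look up by comment ID" view: a missing ID (or a response that is not valid JSON) simply raises, which is a useful signal that the model's output drifted from the expected schema.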