Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- So, remind people to use cross-walks (good advice), but don't remind drivers to … (ytc_UgzdWGg9O…)
- Finally someone who said they can't control a.i. . They can't it. Bunch of smart… (ytc_Ugz68y-_s…)
- 11:57 I think the simplest explanation of what a neural network does is a simila… (ytc_Ugz6yBunr…)
- If I get injured and my doctor says “hold on lemme ask chatgpt how to do this su… (ytc_UgwUAUIHy…)
- @yourparalysisdemon1936 I don't think people are going to stop using AI art gene… (ytr_UgxjP25VX…)
- Yes very funny when someone is killed by a "robot car". Let's make more jokes.… (ytr_UgwB8JKM6…)
- @rookideetrainer1635 Except AI will tell you "Do not do this" What was glazed o… (ytr_UgwEwlLhE…)
- You can also ask ChatGPT about being an LLM and it will very clearly tell you it… (ytc_UgzLQrFDr…)
Comment
YES but.........The people running this AI regulatory agency cannot be morons like we have running many of our agencies today. They cannot be appointed by affirmative action or some "program". They have to be A-political. They have to be extremely experienced with this technology. In other words they cannot be Democrats or Republicans. The problem is that the federal government would regulate our use if AI, but use it to their benefit to control the people. It might then turn them once they get control, because if AI is so smart, it will know that our federal government is easily corrupted.
youtube
AI Governance
2023-04-18T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzA7a1lYbXq7Z4VpSB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzA018fJuq3baUoyzJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxISmjRnuwOkwtTRrR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyjUs_OEihLbBB32P54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwbjHhc6LEUzpDA-yN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyrNtpt9Kmpzmes6t94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw6FiCC_ZEFDcQcsgx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgyffRqAY2vVFyRfSQN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0ifdMyXPrP7epTQh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvldEwWb41-KmNnu54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
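The raw response is a JSON array with one record per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for lookup by ID (the two records are copied from the batch above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response for one batch: a JSON array of coded records,
# one per comment. Two records copied verbatim from the batch above.
raw_response = '''
[
  {"id": "ytc_UgyrNtpt9Kmpzmes6t94AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw6FiCC_ZEFDcQcsgx4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
'''

# Index records by comment ID so any single coded comment can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up the comment shown in the detail panel above; its coding matches
# the "Coding Result" table (government / deontological / regulate / outrage).
coding = records["ytc_UgyrNtpt9Kmpzmes6t94AaABAg"]
print(coding["responsibility"], coding["policy"])  # government regulate
```

Indexing by ID this way also makes it easy to detect records the model dropped or duplicated: compare `records.keys()` against the batch of comment IDs that was sent.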