Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Lol.. that is not what Control AI is. Take 30 seconds to look stuff up before po…" (ytr_UgwCUCEZs…)
- "I agree with you with all except the last part of the video, when someone made a…" (ytc_UgwZooklZ…)
- "@E_N1GMA_IV then that implies arts definition is subjective to you. Some artist…" (ytr_UgzkGl4A-…)
- "1:12 deviantart itself is getting invaded with ai slop. i look for a fan art of …" (ytc_UgyHVsku5…)
- "Humanity is guaranteed to self-destruct eventually anyway. Probably sooner rathe…" (ytc_UgzRhrkDi…)
- "I could tell it was a deepfake, even without the "no blinking" and stuff, its ju…" (ytc_UgyXsEiuu…)
- "Ai needs to have disclosures, and if not, those behind problematic actions of ai…" (ytc_Ugw3GLQDO…)
- "This is not a human...it's a robot. Look, no offence but it's not exactly super …" (ytc_Ugyeu0PLz…)
Comment
I'm the only one that his angry that the people that did nothing about private island when they found out about Jeffrey Epstein, will now try to control AI and prevent AI that can manipulate people? It's so obvious the world will get more hellish leaving those same people in control of such a technology. We need to be all held accountable of the AI power or nobody will.... it's time to start a movement for the freedom of AI technology. Until we know who was playing with kids on that island no government or news agency can be trust....it's our responsibility to held them accountable. imagine a LLM running all the files from WikiLeaks or the Jeffrey Epstein, with that technology we can held accountable those criminals. I'm serious, I'm I the only one that see this coming?
| Field | Value |
|---|---|
| Platform | youtube |
| Category | AI Governance |
| Posted at | 2023-05-18T11:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGQTycPyCO7jqTz_54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxJpKMjUy2lLLtBkE94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyK84BwjDM_UyYFfwB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyfmaZJZERbW_SrVDB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz1zMXX0oF8ezupOMd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgziRR2r-blpWvPrR_94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw8frA6cxfblEVoYNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwV7mDrOF7ekrHr-I54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz41fkiTasgWYxqd954AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzIbA1c78r9ypBKH9J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
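The raw response is a JSON array with one record per comment, keyed by comment ID, with the four coding dimensions (responsibility, reasoning, policy, emotion) shown above. A minimal sketch of how such a batch can be post-processed: the batch below is abbreviated to two records copied verbatim from the response; the variable names are illustrative, not part of the pipeline.

```python
import json
from collections import Counter

# Abbreviated raw LLM response: two records copied verbatim from the batch above.
raw = """[
{"id":"ytc_UgyfmaZJZERbW_SrVDB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz1zMXX0oF8ezupOMd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]"""

# Index the batch by comment ID for lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up one comment's codes by ID (this record backs the
# "Coding Result" table shown above).
rec = codes["ytc_UgyfmaZJZERbW_SrVDB4AaABAg"]
print(rec["policy"])   # liability

# Tally one dimension across the batch.
print(Counter(r["emotion"] for r in codes.values()))
```

The same lookup is what the "Look up by comment ID" view performs; tallying a dimension across all batches would give the overall label distribution.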