Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples:

- "Can say I have a Tesla why? How do I have a Tesla at the time I was looking for …" (ytc_UgwxGmRgq…)
- "AI generated artwork is often like that, but that's not inherent to it, especial…" (ytc_UgzlUP4KY…)
- "Everyone when communicating with an AI: * communicates * I'm communicating wit…" (ytc_UgzrHdPSh…)
- "Ok so if AI is sentient, "alive" and we created it. Wouldn't that make us god…" (ytc_UgwDEZ0sj…)
- "For once you covered a subject that I am well versed in. The fear of AI like Cha…" (ytc_UgxXqhZaq…)
- "Humans invented AI that's why it's going to not work correctly because of human …" (ytc_UgxiOJ4Su…)
- "nah they gonna make an AI join the kill all men group and follow through…" (ytc_UgzOF55XX…)
- "I would honestly prefer an apocalypse of massive scale that almost resets human …" (ytc_UgxLuzySU…)
Comment

> The only rule is Ai can not hurt Humans end of story it can improve human life but not take it and must protect it and advance it and help it

Source: youtube · Topic: AI Governance · Posted: 2025-09-07T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
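A coding result like the one above can be sanity-checked against the set of values the codebook allows. The sketch below is a minimal validator; the allowed values are only those observed in this dump (the full codebook may define more), and the `validate` helper is hypothetical, not part of the tool.

```python
# Minimal validator for one coded comment record.
# NOTE: the allowed values below are only those observed in this dump --
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"outrage", "approval", "indifference", "fear", "resignation"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above:
print(validate({
    "responsibility": "ai_itself",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "approval",
}))  # -> []
```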
Raw LLM Response

```json
[
  {"id":"ytc_UgzCQoX_JkpHWHcYdph4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw34DBXExZ5h6q-dGF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz8gE7y6jUczD85WJt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw3EmqnnPmGksIwCv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz64anSqDZT3vP-g4R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxowcXAh8B6Y37Wjm94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxQ6V5MFskm3e-Ydt94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx2ich-LSUqmcoCEUl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNKEOuYpIm9mw0TYl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzHFq0Atkm3kiz6Jmx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
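The raw response is a JSON array of codings keyed by comment ID, so looking up a single comment (the way the lookup box above does) reduces to parsing the array and building an index. A minimal sketch, using the first two rows of the batch above; `raw_response` and `by_id` are illustrative names, not part of the tool:

```python
import json

# First two rows of the raw batch response shown above (truncated for brevity).
raw_response = """[
  {"id":"ytc_UgzCQoX_JkpHWHcYdph4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw34DBXExZ5h6q-dGF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]"""

# Index the codings by comment ID for O(1) lookup.
by_id = {row["id"]: row for row in json.loads(raw_response)}

coding = by_id["ytc_Ugw34DBXExZ5h6q-dGF4AaABAg"]
print(coding["policy"])  # -> regulate
```

Building the dict once and reusing it keeps repeated lookups cheap, which matters when inspecting many comments from the same batch.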