Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I’m not surprised as I chat with it like it’s an old friend and WAY prefer it’s …
rdc_jkq2mtc
the MIT report stated that 95% of *custom workflow AI projects* get zero return,…
ytc_UgzpF5AFy…
AI is making our students dumber, they don't have to actually study. There is a …
ytc_Ugx_nk1Os…
So the guy isn't worried about how AI can take over the world, instead he's worr…
ytc_UgzSlE-yV…
yeah no that's not going to work considering artist are already being falsely ac…
ytr_UgyCAxcQq…
I remember Neil Degrass said something in youtube like this, if Computer can do …
ytc_Ugxq2YXsx…
In the UK there is no way or need to register copyright. It is conferred automat…
ytc_Ugz5pJ3Pq…
All so they can make AI and robots capable of taking human jobs. Consolidating m…
rdc_lpau5n6
Comment
I think AI can be very beneficial but there are so many dangers as well. AI not be one all knowing AI but task oriented AI. We should design it like a human brain where different areas hold different functions and to access it, we use an interface that we call the five senses. We already have AI that is smarter than humans in certain tasks. We should build an interface that we control to access those AI’s as a go between to deliver to us the information or the task. There should also be several interfaces such as one for government use only that is divided into clearance levels, and Professional AI interface for business solutions, and personal AI interfaces for personal and public use. That way if I decide to design a nuclear weapon or super virus, I won’t have access to that part of the brain. I’ll only have access to what my public access allows me to access.
youtube
AI Governance
2026-01-29T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
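The dimension table above is a per-comment view of one record from the raw model output. A minimal sketch of how such a record could be rendered into that markdown table (the `to_markdown` helper is hypothetical; the field names match the raw response shown below):

```python
def to_markdown(record, coded_at):
    """Render one coding record as a markdown dimension table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),  # coding timestamp, stored separately from the model output
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {name} | {value} |" for name, value in rows]
    return "\n".join(lines)

record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "approval"}
print(to_markdown(record, "2026-04-26T23:09:12.988011"))
```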
Raw LLM Response
```json
[
{"id":"ytc_UgyOv4gNYyCcUZO44oV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLwBL0p9E6GWiFrGZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwsFUNDig9pr1Zz1f14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwT7ygaAVUhY0Dobx14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxjFYtqJ35I0Jx1NRR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwHt0DphhEdgUW-P754AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz5xAtaPuG7IM7gHT94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKmuMLYtz6EPO3Oup4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_paIFGFRSmBjsET14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmY33U69jXfnpgwF14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
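The lookup-by-comment-ID flow described at the top of this page can be sketched in Python. The `index_by_id` helper is hypothetical, but the record shape matches the raw response shown above (the sample here is truncated to two entries):

```python
import json

# Raw LLM response in the shape shown above (two entries for brevity)
raw_response = '''
[
  {"id": "ytc_UgwHt0DphhEdgUW-P754AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyOv4gNYyCcUZO44oV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

def index_by_id(response_text):
    """Parse the model's JSON array and index coding records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
result = codes["ytc_UgwHt0DphhEdgUW-P754AaABAg"]
print(result["responsibility"], result["emotion"])  # prints: developer approval
```

In practice the model output would also need validation (duplicate IDs, missing dimensions, IDs not in the batch) before the records are stored against comments.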