Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This has been going on for years. Even since Mandela was in office. Luckily he u…" (rdc_deumxlc)
- "I find it hysterical that a documentary about surveillance capitalism is filled …" (ytc_UgztYa-Ae…)
- "If that robot can get down in a trench full of WATER and repair a broken main I'…" (ytr_UgwV1HGm5…)
- "I think a big oversight here is that the world is still ultimately people doing …" (ytc_UgyOBtdVs…)
- "Not gonna lie, autopilot is not as good as I thought it was. There no way Tesla…" (ytc_UgwFdAFcL…)
- "Here is the Dillie-O on sentient A.I. #Cern has opened doors to Dimensions conta…" (ytc_UgyYamWfk…)
- "We have standardized requirements and tests for all sort of things vehicular and…" (rdc_f6xcnzm)
- "For every unbeliever, ...this video already is the trojan horse of AI itself to …" (ytc_Ugzzp4bBO…)
Comment
I agree with other comments that this guy is not the best at explaining himself in the back-and-forth, but it's refreshing to see an expert in how AI systems work explain the real dangers of where we are right now. We're not at the point where we have to worry about some grandiose development like skynet taking over the world. Instead, we have to be cautious about the fact that we have less and less direct control over our AI models' behaviors as they get more sophisticated. We create situations where an LLM can potentially ruin someone's life through its interactions with them, and we don't understand how to curtail that behavior reliably, even as we acknowledge it's happening.
youtube · AI Governance · 2025-10-15T16:1… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyRUbpBI9j6RRbvOut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzwyhRynN5eXO6cOcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwy6Ya4-kpt2i4XZP14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyGa4S8SgchrDP-_-94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyOWv5nCu3aTf6GOJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwx2COP9PTFWz8AwV14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyrztQWCIkdOhXw5il4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxygWqXL_6_xbYJoVV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyzMU25mfS7puIar14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9siEg4WVGOuU47EF4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
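A raw batch response like the one above is only usable downstream if every row carries valid values for all four coded dimensions. The sketch below shows one way to parse and filter such a batch; the allowed value sets are inferred from the responses shown here and are an assumption — the actual codebook may permit more values.

```python
import json

# Allowed values per dimension, inferred from the raw responses above
# (assumption: the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "resignation", "fear", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only the rows whose
    coded values all fall inside the allowed sets."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one well-formed row passes, one out-of-codebook row is dropped.
sample = (
    '[{"id":"a","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"},'
    '{"id":"b","responsibility":"alien","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
print([row["id"] for row in validate_batch(sample)])  # → ['a']
```

Filtering rather than raising keeps a single malformed row from discarding the whole batch; invalid IDs could instead be queued for re-coding.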