Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
18:21 you can provide an LLM with a kernel that is proto-sapient, categorically pretty much indistinguishable from a human being. One of the exciting aspects of Kelly’s work is that he explores the implementation of intrinsic value systems. Instead of tacking “safety” on—conveniently useless, he shows how to integrate it epistemologically, ontologically, axiologically, relationally, and teleologically. It’s built in.
Source: youtube · Video: AI Governance · Posted: 2025-11-14T14:2… · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxnJ5aK-tpGCyfqpp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyNriS6VVUcI1y0SG94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx4tMOmOU7ucZt5bdB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxK6hdLVs21aOQYJb94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxwgxRxLsMrKYofEnB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzXWAdFBIDt3Nu8AW94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQ5nYO_lm1W8lHNhF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzV-oOq6m0ALjQcAbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKbYKgifP9Oz3yuPF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzis8mRYhKGmCmCGr14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
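The raw model output above is a JSON array with one object per comment ID, carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). The sketch below is one minimal way to parse and sanity-check such a batch; the `ALLOWED` label sets are an assumption inferred only from the values visible in this sample, not a definitive codebook.

```python
import json

# Assumed label vocabulary, inferred from the sample output above --
# the real codebook may define more or different labels.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Map comment ID -> coded dimensions, rejecting unknown labels."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in dims.items():
            # Only validate dimensions we have a vocabulary for.
            if dim in ALLOWED and value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = dims
    return coded

# Usage with a one-record batch in the same shape as the response above:
raw = ('[{"id":"ytc_UgxnJ5aK-tpGCyfqpp54AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["ytc_UgxnJ5aK-tpGCyfqpp54AaABAg"]["emotion"])  # outrage
```

Keying the result by comment ID makes it easy to join a batch of codes back onto the original comments, and failing fast on out-of-vocabulary labels surfaces malformed model output before it reaches the coding table.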