Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "you do realize Ai can be trained to do maintenance to other AI, right? thus not …" (ytr_UgxJiqSZl…)
- "I've been doing the designer thing for the last 15 years, and it's wild how ever…" (ytc_Ugwphd9Ye…)
- "@ryanbentley1965 I agree. There are problems with AI computing and data centers, …" (ytr_Ugx-Gy2g5…)
- "Realistically, AI isn't going to replace everything.. certain jobs are definitel…" (ytc_UgwnLCX-T…)
- "@SeersantLoom I think his point is that if this car was being driven by a person…" (ytr_UgxL3E4Ta…)
- "the eu also offers access to supercomputers for ai companies you should check th…" (ytr_Ugw87k4Xy…)
- "The biggest danger of AI is automating human work out of existence. Imagine a wo…" (ytc_UgzcdJDGM…)
- "AGI (the goal of literally every multi billion dollar tech/AI company) doesnt re…" (rdc_ohzinga)
Comment
| Field | Value |
|---|---|
| Text | have a look at Palantir , have a look at the israelis testing AI drones on innocent people. the warning is overdue |
| Source | youtube |
| Topic | AI Governance |
| Posted at | 2025-06-16T07:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6-L91TPc9c_796vh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyR2yFkfYw6Gh1EOZd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwmk7gaKsQZWAZUBZB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw_-NRaIb7sVBIn-Rt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwFXiwr8knVDU2h6Wt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx41PMtV226G55gSaN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwrvzRWLgYBcgqTmit4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyuNMrlYPaMbcqPF0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWxmPLSd71S-Yss5x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzgrpIlqp_JJV49D0d4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
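A batch response like the one above can be checked before the codes are stored. The sketch below is a minimal validator; the allowed value sets are assumptions inferred only from the values visible in this dump, and the full codebook may define more categories:

```python
import json

# Allowed values per coding dimension — inferred from the responses shown
# above (assumption; the actual codebook may include additional labels).
SCHEMA = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed entries."""
    entries = json.loads(raw)
    for entry in entries:
        if "id" not in entry:
            raise ValueError(f"entry missing id: {entry}")
        for dim, allowed in SCHEMA.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim}={entry.get(dim)!r}")
    return entries

# Usage with one well-formed entry (hypothetical ID):
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
coded = validate_batch(raw)
```

Rejecting an entry whose value falls outside the schema, rather than silently storing it, keeps later aggregation from mixing in labels the model hallucinated.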