# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up its comment ID or by opening one of the random samples below.

## Random samples
- "I don't think this is at all correct. They may not have symbolic understanding, …" (ytr_UgyFzCBkb…)
- "I don't know if ai wants to take over the world or it wants us to be a weird emo…" (ytc_UgzMCr-lu…)
- "Courts? This is the least issue with AI, as court forensics are pretty good at f…" (rdc_o5qn847)
- "AI is the most dangerous for humans... Its not a development... Think and do... …" (ytc_UgwKwR9G-…)
- "Imagine this being a call for any other kind of software...like... * Only softw…" (rdc_jkg3239)
- "great video! i've been in consulting / product owner / management roles in the l…" (ytc_Ugx9Xpxbo…)
- "Context and continuity will continue to be major issues. People gain experience …" (ytc_UgzNNO1lF…)
- "Because AI isn't complex nor requires a tonne of education to understand no matt…" (rdc_m9fje4p)
## Comment

> The biggest problem is obvious with their vastly superior knowledge they will likely one day replace us on every level of leadership. By this stage we will physically be unable to distinguish them from humans. They will eventually rule countries, economies, businesses and be able to collaborate with other unknown to us AI leaders to rule Everything.

Platform: youtube · Topic: AI Moral Status · Posted: 2024-06-18T00:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response

```json
[
{"id":"ytc_UgxPbCAzbb__yODS5Yx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1ivyYVvmM2Kf-CBt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbapuVRupuSVKMpa54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrBN2goDJyFKt2oAh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz71XdHJgvfVLoVeZJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzpc1p4P_7wqkUVv0B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTT2f4Eo5d6I3S7f94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1k5TwEaNVTzTHM3d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwC5WlFOxg8eQtc0Hp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxDO7VjrSig8P-pIJ14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
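A response like the one above is only usable if every record carries a valid value on each coding dimension. The sketch below shows one way to parse and validate such a batch in Python; the `ALLOWED` category sets are inferred from the records shown here and are an assumption, since the full codebook may contain more categories.

```python
import json
from collections import Counter

# Allowed values per coding dimension. NOTE: these sets are inferred
# from the sample records above, not from an authoritative codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and check every record against ALLOWED."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch, same shape as the response above.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
records = validate_codings(raw)
print(Counter(r["emotion"] for r in records))  # Counter({'fear': 1})
```

Rejecting a whole batch on the first invalid value keeps malformed model output from silently entering the coded dataset; a production pipeline might instead collect all errors and re-prompt for just the failing records.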