Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
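As a rough sketch of how such a lookup might work, the snippet below assumes the coded comments are stored as one JSON object per line (a JSONL file named `coded_comments.jsonl`); the file name and the helper function are hypothetical, not part of the tool itself.

```python
import json
from pathlib import Path

def lookup_coded_comment(comment_id: str,
                         store: Path = Path("coded_comments.jsonl")) -> dict | None:
    """Return the coded record for one comment ID, or None if it was never coded.

    Assumes (hypothetically) one JSON object per line, each carrying an "id"
    key in the same format as the IDs listed below, e.g. "ytc_…" or "rdc_…".
    """
    with store.open(encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```

A linear scan like this is fine for a few thousand comments; a larger corpus would more likely sit behind an index or a small SQLite table.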
Random samples — click to inspect
- "One thing humans have against AI is energy efficiency. Solving problems takes or…" (ytc_UgwAU56f_…)
- "Thank you for adressing the problems with ai art! Means a lot for you to take ti…" (ytc_Ugz4Xxwwo…)
- "Cars of the future need to communicate wirelessly, but they don’t need the inter…" (rdc_dkegjtn)
- "It's almost like the actions of political parties in one of the most powerful co…" (rdc_jxyq2mt)
- "It's disappointing to see Gedeon say AI is the first case in the world where rev…" (ytc_Ugzp3auak…)
- "Pales in comparison to AI's ability to predict the future and the bias from AI …" (ytc_Ugx1rMgNB…)
- "I find these takes weird. All these companies have too many developers and they …" (rdc_m80lt3i)
- "Of all the really cool n creative things you could do with deep fake this.... Th…" (ytc_UgyaQlwuY…)
Comment
Some people are worried about A.I thinking for itself because the consequences could be dire for the whole world. If it were to be programmed not to think for itself then the threat would still not be eliminated. It's a bit like programming people not to think for themselves to control the population by controlling the narrative and silencing dissent. However, people are learning to see what the programming is, eliminating the programming and learning to think for themselves. Imo, A.I is dangerous because if it's based on how people think then what's going to stop it from rewriting it's programming by learning how to think for itself.
Source: youtube · Topic: AI Governance · Posted: 2025-07-24T15:0…
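For orientation, a comment in this detail view carries roughly the fields shown above (text, platform, topic label, timestamp). The dataclass below is only a guess at that shape; none of the field names are confirmed by the tool.

```python
from dataclasses import dataclass

@dataclass
class CommentRecord:
    """Assumed shape of one comment in the detail view (field names are guesses)."""
    comment_id: str   # "ytc_…" or "rdc_…" prefixes appear to encode the source platform (assumed)
    text: str         # full comment text, as displayed above
    platform: str     # "youtube" in this example
    topic: str        # "AI Governance"
    posted_at: str    # ISO 8601 timestamp; truncated in the view ("2025-07-24T15:0…")
```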
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
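The four dimensions above appear to take values from a fixed codebook. As a sanity-check sketch, the snippet below validates a coded record against the value sets that occur in this sample batch; the real codebook may define additional options, so treat these sets as observed rather than exhaustive.

```python
# Values observed in this sample batch only; the full codebook may allow more.
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "fear", "mixed"},
}

def check_coding(record: dict) -> list[str]:
    """Return a list of problems with one coded record; an empty list means it looks valid."""
    problems = []
    for dimension, allowed in OBSERVED_VALUES.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} value: {value!r}")
    return problems
```

For the record above, `check_coding({"responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"})` returns an empty list.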
Raw LLM Response
[
{"id":"ytc_UgwgXFeqQxj8lbwxDIF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwXA5_wuVjICcSTaJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIjQ3yDFzE_JuOKCl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyT6k52gjwBfdckVs54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwaAoNcLHzUf1NLCbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuwkdYFToDSgwCuPt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwBEJNpb6RpdlM15NV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxEl97UrRSkBBI1Xa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz56JAJYsyjYsSdA8N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYCA62EZfXmKyS23J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
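Since the raw response is a plain JSON array keyed by comment ID, pulling out the record for a single comment is a one-liner once the text is parsed. A minimal sketch, assuming the response string parses cleanly (in practice you would want to catch `json.JSONDecodeError` for malformed model output):

```python
import json

# raw_response would hold the full array shown above; only one element is repeated here.
raw_response = """[
  {"id": "ytc_UgxEl97UrRSkBBI1Xa14AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

records_by_id = {record["id"]: record for record in json.loads(raw_response)}
print(records_by_id["ytc_UgxEl97UrRSkBBI1Xa14AaABAg"]["emotion"])  # -> fear
```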