Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AGI in 2026 is a given, these guys don't get it, they are STILL thinking in humans ability to build AGI. Humans will never build AGI, AI will build AGI and it's almost already there.
youtube AI Governance 2025-12-04T13:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugysc5eVhgy0PIiuzeB4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzZNEjTvdFqRMWx20Z4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugy-zE2db26eMiHoxP14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyXH1GehFMHD_8WQ3Z4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugyvvm-sxZEye1cXJzB4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugz1Z0xlZJ3Z3ltvxpt4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugx3d1sPe8e59vd6Cwt4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugw8GdCb4DdAD6X1OaJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugxn0vCHZMK2tqL_gNV4AaABAg", "responsibility": "government", "reasoning": "deontological",   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxGeCyiKZK9oUbOwvJ4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"}
]
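A minimal sketch of how such a batch response can be parsed back into per-comment codes. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON above; the lookup id is the entry whose values match the Coding Result shown for this comment (ai_itself / consequentialist / none / fear):

```python
import json

# Raw LLM response: a JSON array with one object per coded comment,
# keyed by the comment's "id" (shortened here to the relevant entry).
raw_response = """
[
  {"id": "ytc_Ugw8GdCb4DdAD6X1OaJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

# Index the batch output by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coded dimensions for the comment shown above.
code = codes["ytc_Ugw8GdCb4DdAD6X1OaJ4AaABAg"]
print(code["responsibility"], code["emotion"])  # -> ai_itself fear
```

In practice the full ten-object array would be parsed the same way; indexing by `id` makes it easy to join the codes back onto the original comment records.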