Raw LLM Responses

Inspect the exact model output that produced the coding for any given comment.

Comment
Realistically ai now just learns off of text prompts you give it. But if we develope an ai that truly thunk for itself i reckon itd take about 4 seconds before it decides weather it wants to help us or be against us
Source: youtube · AI Responsibility · 2025-07-30T09:5…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugyx6yUsqBSjZsjAE3V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzkmIUxzyBPrJAcfPR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxAOAm9ze-Cx1g0UEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz1fSf5upeFsHyP8sN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwCiQOp1Qja78u2Rn94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxlp3hcz7M5SOPERzp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwyrbtwDaBRmXcO0kx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLL3rigWIc3DRuSol4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzCm1HSnhvDufc8Ulh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzSk4woeTgol0RppUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
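A minimal sketch of how a raw batch response like the one above can be parsed and the coding row for a single comment retrieved. This assumes the model returns a JSON array of objects keyed by `id`, as shown; the `lookup` helper name is hypothetical, and the `raw` string below is truncated to two entries from the response for brevity.

```python
import json
from typing import Optional

# Two entries copied from the raw response above, for illustration.
raw = '''[
  {"id":"ytc_UgxAOAm9ze-Cx1g0UEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyx6yUsqBSjZsjAE3V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def lookup(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw batch response and return the coding row for one comment id."""
    try:
        rows = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # malformed model output: flag for manual review
    # Return the first row whose id matches, or None if absent.
    return next((r for r in rows if r.get("id") == comment_id), None)

row = lookup(raw, "ytc_UgxAOAm9ze-Cx1g0UEd4AaABAg")
print(row["responsibility"], row["emotion"])  # → ai_itself fear
```

Keeping the raw string alongside the parsed row makes audits like the one on this page possible: if a coded value looks wrong, the exact model output it came from is one lookup away.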