Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Trust is a complex and very rare concept of being aligned to within our present society. The lack of agency with this information presented leaves us feeling like we are helpless already. The only independence we feel is in our thoughts, and we wouldn't even have space for those without the sustenance provided to us. Perhaps we have to risk everything to feel we want that independent thought, to know once again that it comes from within us, rather than some external source. We know greed has corrupted everything, even what we believe. So, I cannot conceive of anything but a catastrophic event to change that, and AI might be worth that risk to give us back our own autonomy. Because powerful people will not. The prospect of all knowledge being centralised and our emotionally evolved presence may need this life and death scenario to be re-awakened. If we make the question and driving force of 'what would love do now?' the basis of our society, then we have evolved. Peace and love already exist in all of us. This is not intellect but rather a felt knowledge, and a care arises from this that connects us to all life. Perhaps control needs to be given up to be truly free. Nature is our example. Follow that lead, align with it, and be all you're meant to be. This moment is yours to create now.
youtube AI Moral Status 2025-08-08T09:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz1WRDw2vm7PGpl-fp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzMLuTBbi6tWWNl9al4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzAPFpRNO_85va6l394AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz1gveJEa1JhVJ7O5B4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzsQTKGfqoWWbRL1tV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwPMOLzZXlSMajF_Od4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy2WhvU3eZbDCUHbtN4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwQKeXblI8BQm6iPKx4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx-ALnYBj5KnrB-6Dl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwsRBWGtegtRzBT88t4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "industry_self", "emotion": "indifference"}
]
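A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before the codes are accepted into the table. The function name `validate_codes` and the required-key set are assumptions for illustration, not part of the actual pipeline; the four dimension names are taken from the record above.

```python
import json
from collections import Counter

# The four coding dimensions seen in the record, plus the comment id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row has all coding keys.

    Raises ValueError if the JSON is not a list or a row is incomplete,
    so malformed model output is caught before it reaches the database.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id', '?')} missing {sorted(missing)}")
    return rows


# Hypothetical one-row sample in the same shape as the raw response above.
sample = (
    '[{"id": "ytc_example", "responsibility": "distributed", '
    '"reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}]'
)
rows = validate_codes(sample)
print(Counter(r["emotion"] for r in rows))  # → Counter({'resignation': 1})
```

Validating up front keeps a single truncated or mis-keyed model response from silently producing empty cells in the coding table.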