Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, this took an interesting turn halfway… during the first half of it I thought Roman was doing a good job explaining why AI is dangerous if handled carelessly and perhaps succeeded in convincing some skeptics (people that still believe AI is not a threat) that the danger is real. And then he said that we are likely in a simulation, and I thought oh boy, for all the people that almost got convinced it just discredited everything he said before as they deemed him a lunatic. I personally agree with most things said, including the probability of a simulation, I just know that while AI replacing us is already a hard enough concept to grasp for many, if not most people, the idea of this world being not as “real” is incomprehensible and laughable to them. Humans as a whole struggle to accept that we are not the center of all the meaning. Which means we took one step forward and two backwards in a mission to spread the AI awareness. Very thought provoking, interesting conversation nonetheless.
youtube AI Governance 2025-09-19T18:2… ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwmi4XKCFQ7zUuHKt54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUt8F0sc8wkBog5_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzY7K9aBsAXMhrb0Vt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMwJ1Mw_s_TgQLriF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmSpN1PxdJosw9qIp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwmLg0_F3HIyoXfXE14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzryg3nA2UklPysaTZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz9WPSudM0e3tXpRBR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzTjR6vfs5l9w8TiJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwZauHZbk9Cu05p7JB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
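A raw response like the one above can be parsed into per-comment codings with a small validation step. The sketch below is a minimal example, not the tool's actual pipeline; the sets of allowed labels are inferred only from the values visible in this batch (the real schema may permit others), and the one-element sample payload reuses the last coding shown above.

```python
import json

# Allowed label sets, inferred from the visible codings (assumption:
# the real schema may include labels not seen in this batch).
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "ai_itself",
                       "government", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

# Sample payload: the final entry from the raw response above.
raw = ('[{"id":"ytc_UgwZauHZbk9Cu05p7JB4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"mixed"}]')

def parse_codings(raw_response: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting
    any label outside the allowed sets."""
    codings = {}
    for item in json.loads(raw_response):
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(
                    f"{item.get('id')}: bad {dim!r} value {item.get(dim)!r}")
        codings[item["id"]] = {dim: item[dim] for dim in ALLOWED}
    return codings

result = parse_codings(raw)
print(result["ytc_UgwZauHZbk9Cu05p7JB4AaABAg"]["emotion"])  # mixed
```

Validating against a closed label set catches the common failure mode where the model invents a category outside the codebook, so bad codings fail loudly instead of silently entering the results table.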