Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
01:15 control of super intelligence is impossible (I agree) you can't control something that is smarter than you UNLESS it lets you.

1:00:00 we are in a simulation right now (The simulation "theory" is based on a simple distortion of reality - originating with shadows on walls & reflections in mirrors and in modern times changing into a comparison to computer simulations that we as humans can now create- Originally the theory indicated what we see is not really reality, as opposed to what the theory has been distorted into - a foolish conspiracy- We can now create worlds and simulations on a computer so we "might" be in a computer simulation ourselves. It has literally taken the "intuition that there is more out there than we can perceive to your life doesn't matter because everything is fake... complete 180.

1:02:45 Creator is bad with morality (disagree. Choice/freewill/actions is ours to and so are the consequences of those actions. Point and case AI Safety being needed to guide the poor/detrimental/immoral choices the AI creators are taking with the lives of everyone on the planet)

1:04:00 simulation theory makes world lesser (I believe God's simulation is about who will or won't chose him. Without choice no simulation makes any sense and would run exactly the same 100% of the time.)

1:09:00 life set to 120 years (This is actually Biblical - Genesis 6:3, where God declares, "My Spirit will not contend with man forever, for he is mortal; his days shall be 120 years.") We see man's foolish per suite of immortality all throughout history and mythology. -Question for Dr. Yampolskiy: How many of the psychopaths/criminals/tyrants do you want to have access to this immortality breakthrough? Follow-up question: How would you regulate their behavior without resorting to capital punishment, which is yet another human driven system that can be manipulated by the powers that be?

1:21:00 always people join the safety profession and not leave it (can't unsee/unknow problems at hand) This is why Adam/Eve were kicked out of the garden "Can't unsee/unknow" wickedness. As for the translation of Biblical texts, the Chrisitan Bible has never been more accurate. (We do have the original manuscripts, and in modern times it is far easier to translate from one language to another... you talk as if the text has been retranslated 100 times from incremental sources... It is translated from the original language from the original source material. TLDR It is a direct quote from the source material. It is not paraphrased) I enjoy watching/reading about your work with AI safety. I've been following your work for about 5 years now. Ultimately people are stupid and those with too much money/time will bring about not only their own destruction but the destruction of many/all others as well. "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." Albert Einstein
youtube AI Governance 2025-09-06T21:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxnYcp3LPq0j4tdj4l4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgyCK2jJMzpyBD0V26x4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgzybWNM7qDfr73p92V4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_Ugy6x0mdJwhuF0eMN5J4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgyQLcKdD6IVkuP8ydZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugx3FLGRJtcdPHnJXil4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzAQbJXT-uOOsq0Crx4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugx05cbZEEb44P85lZR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxOrDxSX47YPdW-nb94AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgwHxEJVy1trzz6wyGl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"}
]
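A raw response like the one above is only usable if every row carries valid codes for the four dimensions. The sketch below is a minimal Python check, assuming the allowed value sets inferred from the codes actually observed in this record (the real codebooks may define more categories), and a hypothetical helper name `validate_codings`:

```python
import json

# Allowed codes per dimension, inferred from the values seen in this
# record -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def validate_codings(raw_response: str) -> list[dict]:
    """Parse the raw LLM response JSON and reject any out-of-codebook value."""
    codings = json.loads(raw_response)
    for row in codings:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} value {row.get(dim)!r}")
    return codings

raw = ('[{"id":"ytc_UgxnYcp3LPq0j4tdj4l4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"outrage"}]')
rows = validate_codings(raw)
print(len(rows))  # 1
```

Because the LLM emits free-form JSON, failing loudly on an unknown code catches drift (e.g. a new emotion label) before it silently enters the coded dataset.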