Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I strongly support a pause on frontier AGI research. I do want to note that Dave may want to dig deeper. One can reasonably argue that experiments on LLM model organisms risks then noting the history of AI extinction concerns and roleplaying the way those stories tend to go, rather than simply manifesting instrumental convergence. (But what kills us, kills us.)
youtube AI Governance 2025-08-28T00:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwGcqiUSu8cYDEti-54AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgwHYRPfGwNlHXWXURR4AaABAg", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxV20I5bW-QpU2dx954AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_Ugy2SmVzK217NOCMDNl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxNTV-vFGbfLC-Zoa14AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"}
]
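A minimal sketch of how such a raw response can be turned into per-comment coding records, assuming the model always returns one JSON array whose objects carry an `id` plus the four dimensions shown above. The function name `parse_coding` and the fallback value `"unclear"` are illustrative assumptions, not part of the original pipeline.

```python
import json

# The four coding dimensions expected in each object of the model's array.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: {dimension: value}}.

    Assumes each array element has an "id" field; any missing dimension
    falls back to "unclear", mirroring the label used in the data above.
    """
    coded = {}
    for row in json.loads(raw):
        coded[row["id"]] = {d: row.get(d, "unclear") for d in DIMENSIONS}
    return coded

# Hypothetical single-record response in the same shape as the raw output above.
raw = '[{"id":"ytc_example","responsibility":"developer",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'

coded = parse_coding(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Because `json.loads` raises `json.JSONDecodeError` on malformed output, wrapping the call in a try/except is a natural place to catch model responses that drift from the expected array format.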