Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Imo the problem isn't algorithms interacting with input data in ways the coders don't even understand - that seems to me like a great scientific opportunity to research intelligence (In a closed environment). The problem is giving anything like that access to sensitive systems with reckless abandon. You don't want your nuclear power plant to be connected to an AI that grew on data from 4chan. Especially since these things are apparently being made with an advisory role in mind to begin with, i mean otherwise ethical concerns would have played a much bigger role in creating the "scaffolding" being mentioned here.
youtube AI Governance 2025-08-29T20:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgwfV_WbxdQNpgHFDzh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy3n4mmo9K4_1RRnrt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1BRIGKZG_IpvBJ014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxU2iUBO6bXmuYKkBF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzUlUfWtbNN9qPVO2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
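A minimal sketch of how the coded result for one comment can be recovered from the raw LLM response: the model returns a JSON array of per-comment codings, keyed by comment id. The `raw` string below copies the response verbatim from the log; the lookup id is the one whose coding matches the result table above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (copied verbatim from the log above).
raw = """[
  {"id":"ytc_UgwfV_WbxdQNpgHFDzh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy3n4mmo9K4_1RRnrt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1BRIGKZG_IpvBJ014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxU2iUBO6bXmuYKkBF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzUlUfWtbNN9qPVO2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the codings by comment id for fast lookup.
codings = {c["id"]: c for c in json.loads(raw)}

# The entry matching the coding-result table above.
coded = codings["ytc_Ugy1BRIGKZG_IpvBJ014AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer consequentialist regulate fear
```

Indexing by id rather than list position matters because the model is not guaranteed to return codings in the same order the comments were submitted.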