Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Given that LLMs are non-deterministic, it's only logical that in a large enough sample they will, every so often, resort to unethical behavior (as would humans in such a situation). The fact that it happens often is very bad and should be trained out of them. Adding another layer of a (different) agentic LLM supervising the first, and so on, can lower this chance to an absolute minimum. In other words, build in some safeguards. You could even have another LLM rate the severity of the action and insert a human-in-the-middle step.
YouTube · AI Governance · 2025-08-27T05:5…
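The layered safeguard the comment proposes (an independent supervisor model scoring each action, with high-severity actions routed to a human) can be sketched as below. All names are hypothetical and the two LLM calls are stubbed with trivial rules; this is a minimal illustration of the control flow, not a real implementation.

```python
# Sketch of the commenter's idea: a second, independent model scores the
# severity of each proposed action; anything above a threshold is held
# for human review instead of being executed. LLM calls are stubbed.

HUMAN_REVIEW_THRESHOLD = 0.7  # severity above this routes to a human

def primary_model(prompt: str) -> str:
    # Stub for the main agentic LLM proposing an action.
    return f"action: {prompt}"

def supervisor_model(action: str) -> float:
    # Stub for a different LLM rating severity in [0, 1].
    return 0.9 if "delete" in action else 0.1

def run_with_safeguards(prompt: str) -> dict:
    action = primary_model(prompt)
    severity = supervisor_model(action)
    if severity > HUMAN_REVIEW_THRESHOLD:
        return {"action": action, "severity": severity,
                "status": "held_for_human_review"}
    return {"action": action, "severity": severity, "status": "executed"}
```

In practice each stub would be a call to a separately prompted (and ideally separately trained) model, so a single model's failure mode is unlikely to slip through both layers.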
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwPe_EhLsrKwTmqoWp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwu5JxiALw9fLe0qXp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxjPWKM3HTm9Rabi3R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz50NE1V6oD9zuThmV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxRgYTX9vz33GhZu5V4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
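The model returns one JSON array covering a batch of comments, so recovering the coding for a single comment means parsing the array and indexing by `id`. A minimal sketch, using two records taken from the raw response above:

```python
import json

# Two records copied from the raw batch response shown above.
raw = ('[{"id":"ytc_UgwPe_EhLsrKwTmqoWp4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"},'
       '{"id":"ytc_UgxjPWKM3HTm9Rabi3R4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')

# Parse the array and index the records by comment id.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for the comment shown on this page.
coding = by_id["ytc_UgxjPWKM3HTm9Rabi3R4AaABAg"]
```

A lookup like `coding["policy"]` then yields the dimension values shown in the coding table for that comment.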