Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lets say I am an AI text generator... I receive the input "Once" I guess the most likely word to come next is "upon" and after that, "a" I then have two equally likely options that go aftet "a", so I pick one at random... "star" This is generally how AI works, it strings most likely elements together with no regard to truth, fact or logic.
youtube AI Responsibility 2024-02-03T03:2…
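The generation process this commenter describes — pick the most likely next token, and break ties at random — can be sketched with a toy model. The vocabulary and probabilities below are purely illustrative, not taken from any real model:

```python
import random

# Toy "language model": maps a context string to candidate next tokens
# with probabilities. All entries here are illustrative only.
TOY_MODEL = {
    "Once": [("upon", 1.0)],
    "Once upon": [("a", 1.0)],
    "Once upon a": [("time", 0.5), ("star", 0.5)],  # two equally likely options
}

def next_token(context, rng):
    # Sample a continuation in proportion to its probability;
    # with a tie, the choice is effectively a coin flip.
    candidates = TOY_MODEL[context]
    tokens = [tok for tok, _ in candidates]
    weights = [w for _, w in candidates]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(prompt, steps, rng):
    out = prompt
    for _ in range(steps):
        out = f"{out} {next_token(out, rng)}"
    return out

rng = random.Random(0)
print(generate("Once", 3, rng))  # "Once upon a " followed by "time" or "star"
```

This is the mechanism behind the comment's point: the sampler strings likely tokens together with no notion of which completion is true.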
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgypNYedn2sp8DJJeo54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLE9J43zlEwVIzAq94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEYJd7ODSEN0VTtOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxNC0E0KqRzvMvFtsh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxkVdA_YLOEHZjvB754AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzaYOQA5PjuIOqEX4h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxK4QwUyqEzPQH-lzd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdP-mYV7VjS-kCerZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxQ4iT4NQIsEumXFfB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxR8YvAYE9DLQg5iJV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
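When inspecting a raw response like this one, a quick validation pass can catch labels that fall outside the coding scheme before they reach analysis. The codebook below is inferred only from the values visible in this output; the real allowed labels may differ:

```python
import json

# Hypothetical codebook, inferred from the values visible in this raw
# response. The production scheme may include additional labels.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "outrage", "resignation", "fear"},
}

def validate(raw_json):
    """Parse a raw LLM response and list (id, dimension, value) triples
    whose value is missing or not in the codebook."""
    errors = []
    for rec in json.loads(raw_json):
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

ok = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
bad = '[{"id":"ytc_y","responsibility":"bogus","reasoning":"virtue","policy":"none","emotion":"fear"}]'
print(validate(ok))   # []
print(validate(bad))  # [('ytc_y', 'responsibility', 'bogus')]
```

Because LLM outputs can drift from the requested label set, running every raw batch through a check like this before the dimensions are stored keeps the coded table consistent.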