Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you can "hard code" behaviors into your A.I. then that means they can control scenarios or goals they want for the A.I. to attain. Where as the A.I. can be biased in how it reacts or carries out tasks it is set on. Probably to the point it would be sentient enough to lie and represent data. Such as test results that could kill people or damage things like buildings.
Source: YouTube · AI Moral Status · 2022-07-22T16:4… · ♥ 16
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_UgzbG0CuBUOA1Qccvi14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxNQJoPCj78mIg5XnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyE7SvdmF-4x_cy4VR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzypSAaEoi4QUd_Y1F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw4S5swmUTtw2G0akV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})

Note: the model closes the array with a stray ")" instead of "]", so the response is not valid JSON; this likely explains why the coding result above fell back to "unclear" on every dimension.
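A minimal sketch of how such a raw response might be parsed into per-comment codes. The function name and the abbreviated sample IDs are hypothetical, not part of the pipeline above; the sketch assumes the only defect is a stray ")" as the array closer and normalizes it before decoding:

```python
import json

def parse_coded_comments(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Assumes the response is a JSON array of objects, each with an
    "id" field plus coding dimensions. The model sometimes emits a
    stray ')' instead of ']' as the array closer, so normalize that
    before decoding.
    """
    cleaned = raw.strip()
    if cleaned.endswith(")"):
        cleaned = cleaned[:-1] + "]"
    records = json.loads(cleaned)
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in records
    }

# Abbreviated sample in the same shape (and with the same defect)
# as the raw response above; "ytc_A" is a placeholder ID.
raw = ('[{"id":"ytc_A","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"})')
codes = parse_coded_comments(raw)
print(codes["ytc_A"]["emotion"])  # indifference
```

A stricter alternative would be to reject malformed output and re-prompt the model, which avoids silently accepting a response that may be truncated rather than merely mis-closed.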