Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think people are getting way too hung up on AI consciousness. An AI doesn't need consciousness to be a threat. All it needs is agency. If it is given an objective, the ability to improve it's own code to achieve this objective, the command to also protect itself and runs independently from it's programmers then you could end up with something worse than a conscious AI. An artificially intelligent narcissist.
youtube AI Moral Status 2023-08-21T12:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzgRLOqB_25P6kqtCp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwzxahfH5EZtXboK9x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx7ENd8ZTmp1fMGHEp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyCg-d9SHaFhrnIve14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwjxxt3e1WSrqcB5-N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwQprBs1EV6JQDA6gx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwVc0k6vg-Uu4F-h5B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzsRXCAlo8J0W7-d5d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzpehhYvGlcKcjtVYB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxhtxSjqFTHLdB6Jot4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
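The raw response is a JSON array of per-comment records, each keyed by a comment id and carrying the four coded dimensions. A minimal sketch of how such a response could be parsed and a single comment's codes looked up by id (using two records excerpted from the array above; the assumption that the displayed Coding Result corresponds to id `ytc_Ugx7ENd8ZTmp1fMGHEp4AaABAg` is inferred from the matching dimension values):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of
# per-comment coding records (two of the ten records reproduced here).
raw = '''[
  {"id":"ytc_Ugx7ENd8ZTmp1fMGHEp4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyCg-d9SHaFhrnIve14AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

# Index the records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for the comment shown on this page
# (id assumed from the matching Coding Result values).
rec = codes["ytc_Ugx7ENd8ZTmp1fMGHEp4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # developer fear
```

Indexing by id rather than array position guards against the model returning records in a different order than the comments were submitted.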