Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hinton’s argument is flawed. Yes, the artificial neurone may exhibit the same behaviour. But that’s the crucial point - consciousness isn’t *merely* behaviour. An automaton which behaves *like* a conscious being is indeed a very sophisticated automaton. To assume consciousness, based purely on similar behaviour, is hardly rigorous thinking.
youtube AI Moral Status 2025-06-08T10:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwre70aIj-hPtlnZb14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyFhp-QT4N0MKJ5v794AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyxuU46ss1o-J_E-wh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxaojaku5ZPY48kLYZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxLnke2DGgj1-YVbMJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwTkdtM34zkonOMWmt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxuJF2qHtgCyfTioWh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGztS6jptsUxelZJB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy8UmI_nckWhkbqgTB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwqluLzWpc-jLY55et4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
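A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example: the four dimension names come from the JSON itself, but the sets of allowed values are inferred only from the rows shown here and are an assumption; the actual codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# ASSUMPTION: the real codebook may permit more categories than these.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response; keep only rows whose values fit the schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Usage with a one-row response (hypothetical id for illustration):
raw = '[{"id": "ytc_example", "responsibility": "none", ' \
      '"reasoning": "unclear", "policy": "none", "emotion": "approval"}]'
print(parse_codings(raw))
```

Dropping out-of-schema rows (rather than raising) keeps one malformed coding from discarding an entire batch; a stricter pipeline might log the rejects instead.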