Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Rough Timestamps:
Part I = go forth and Mystify | How will we know when AI is awake
0:03 - 0:56 Program Eliza
0:57 - 1:45 Will machines ever seem awake? (Chat-gpt)
1:46 - 2:35 Why do I have to Bing search? (Hypercharged Autocomplete, Future, Recources used)
2:36 - 3:02 Recources used by Leagueage model (AI Winter, PoC, Homegron versions? -> what then?)
3:03 - 5:25 An Emulect (CatGPt, FlatGpt, etc. Missinformation Campaigns)
Part II = if it thinks like a duck
5:26 - 6:46 AI acting like humans
6:47 - 8:18 But they won’t be humans, what makes human humans
8:19 - 10:43 sentience, sapience, consciousness
10:46 - 11:07 Machines conscious, or pretending (not) to be
11:07 - 13:04 isn’t conscious, is pretending to be | isn’t conscious, isn’t pretending to be | is conscious, is pretending not to be | is conscious … is conscious
Part III = lucky all the time
13:05 - 14:16 Self-awareness Robots, wrong conclusions/contradicting orders
14:17 - 17:41 Problem of Alignment -> Don’t let it out into the internet / It will trick you
Part IV = will the last one out please turn on the lights?
17:42 - 19:10 The Triassic (Dinosaurs Story)
19:11 - 19:50 The Anthropocene (Humans story + AI, but don’t worry nothing bad will happen ...)
19:51 - 20:29 Brute Force (Imitations of Consciousness | Risks, Threats | Future)
20:30 - 22:21 Can Machines ever experience? Can Silicon wake up? And how do we know/prove its awake?
22:22 - 22:37 Sequence (Oh look an asteroid, I hope it wants to be friends ...)
youtube AI Moral Status 2023-08-30T13:0… ♥ 6
Coding Result
Dimension: Value
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwDjeOHFJLhUxm02xp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzGeet3vFYQMPX4NzN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzpXXa3rtpEbhptQ2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyRUh0vWeUDvQTQzL54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxV7h_cDJhGEX38I0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8MAjyWO03nLEuXNZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz7x9sPPpWlfVC-Isp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw1rNHicqe8ydLz2hJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyF7g9XT-IWSpY7Afx4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxxKtBISzKCL3EkodN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
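The raw response is a JSON array with one record per comment id and four coding dimensions. A minimal sketch of how such a response could be parsed and checked before use — the `validate` helper and the dimension list are illustrative assumptions, not part of the coding tool itself:

```python
import json

# Dimensions visible in the dump above; the tool's actual schema may differ.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Two abbreviated records copied from the raw response for illustration.
RAW = '''[
  {"id":"ytc_UgwDjeOHFJLhUxm02xp4AaABAg","responsibility":"ai_itself",
   "reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzGeet3vFYQMPX4NzN4AaABAg","responsibility":"unclear",
   "reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

def validate(raw: str) -> list:
    """Parse a raw coding response and require every record to carry
    an id plus all four coding dimensions."""
    records = json.loads(raw)
    for rec in records:
        missing = [d for d in ("id",) + DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {missing}")
    return records

records = validate(RAW)
print(len(records))  # -> 2
```

A check like this makes malformed or truncated model output fail loudly at parse time instead of surfacing later as an "unclear" row in the coding table.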