Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The entire convo was led towards being conspiratorial and like some dystopian sc…" (ytc_Ugwu4d4kA…)
- "Give me a human interface any day. AI has to become much more efficient before i…" (ytr_UgzhyuoCs…)
- "Why don't these assinine people stop twisting around all the technology and get …" (ytc_UgyjCgNV0…)
- "People made the same argument when the excavator was invented. What are all thos…" (ytc_UgzX3Aa5t…)
- "35% of entry level jobs has been taken over by AI algorithms created by tech bro…" (ytc_UgyCbQM39…)
- "I am wondering how AI shuts down the grids of the world without destroying itsel…" (ytc_Ugw6oybp0…)
- "Am I the only one not scared of AI and robots? I hear a lot of fear.…" (ytc_UgwhM23vQ…)
- "Ahhh 😀 there is nothing better than a morning with a doom and gloom end of the w…" (ytc_UgyLaQanF…)
Comment
Another expert commenting on DW said we should limit its proliferation in the same way we limited nuclear weapon proliferation. I like the analogy. Clear seeing of the fact that this, like most technological advancements, can and will be weaponised by sick people.
But about the agentic/Terminator scenario: I didn't quite get the motivation of these bots even in the movie. What incentive would they have to act toward any goal if they're not sentient or dependent on biological resources and feelings like we are? And then: how would AI create something without opposable thumbs? Instruct a 3D printer to print it hands with opposable thumbs so it can make whips and chains to enslave humans? How did this scientist explain that an agentic AI may be motivated, and why?
I find it most scary to think of a dictator who invests massively in these things, and then we get a 1984 scenario.
youtube
AI Governance
2023-05-03T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
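For downstream analysis it can help to treat each coding result as a typed record. The sketch below is an assumption, not the tool's actual schema: the class name, field names, and value sets are inferred only from the dimensions in this table and the labels visible in the raw response further down, so the real label vocabularies may be larger.

```python
from dataclasses import dataclass

# Value sets observed in this page's coding results; hypothetical and
# possibly incomplete relative to the real codebook.
RESPONSIBILITY = {"none", "company", "government", "developer", "user", "distributed"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO-8601 timestamp, e.g. "2026-04-27T06:24:53.388235"

    def validate(self) -> None:
        """Raise if any dimension falls outside the observed vocabularies."""
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion!r}")
```

Plain string checks keep the sketch close to the JSON the model actually returns; a stricter version could promote each vocabulary to an Enum.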
Raw LLM Response
```json
[
  {"id":"ytc_UgxOJMPK2xWs7ZtveBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx73ZYMkpiP3unFZ-B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxd2H-47YVL7nRn5Vl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy1gZTucLGijAbeZEZ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZ2UtNpEgRG4iDNZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKD3xL9CcoGeUGPox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzySmA_FV4w3rY-VXV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzE64Cmq93JxlXmx0x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXITTu92mBuJRJCIN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzqHkUZ8udvK-GF8MJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
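Because comments are coded in batches, each raw response is a JSON array rather than a single object, so looking up the displayed comment means parsing the array and filtering on `id`. The sketch below is a minimal illustration under assumptions: the file path is hypothetical, and the ID passed in is the batch entry above whose values match the Coding Result table.

```python
import json
from pathlib import Path

# Hypothetical path: wherever the dashboard stores the raw batch response text.
raw_response = Path("raw_llm_response.json").read_text(encoding="utf-8")

def find_coding(raw: str, comment_id: str) -> dict | None:
    """Return the coding entry for comment_id from a raw batch response, or None."""
    return next((e for e in json.loads(raw) if e.get("id") == comment_id), None)

# Entry whose values (distributed / consequentialist / regulate / fear)
# match the Coding Result table shown above.
coding = find_coding(raw_response, "ytc_UgxXITTu92mBuJRJCIN4AaABAg")
print(coding)
```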