Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The entire tech build on top of knowledge and data AI comapny do not have which …
ytc_UgxjDhlGg…
I don't fear autonomous weapons. I fear the people creating and programming them…
ytc_Ugj-QbhQa…
To be fair, it's easy to think "that's just a movie". However, Turing spoke abo…
ytr_UgwCQDuPX…
I mean if a job can be automated is it really worth a human doing anyway? It wou…
rdc_mxyi88x
I think you have to consider the use of the training data for AI. Unless you own…
rdc_jwvqofk
Yeah I admire the ambition but that’s not going to work. The self driving cars w…
ytc_UgysaU8TD…
Artist here! I only ever use AI if I need inspiration, but I never claim anythin…
ytc_UgzkcWubL…
Pythia Brixham Yes but if a robot didnt have emotions what would stop a very int…
ytr_UggLATWm7…
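The sample IDs above carry short source prefixes (`ytc_`, `ytr_`, `rdc_`). A minimal sketch of how a lookup by comment ID might dispatch on that prefix. The prefix meanings (YouTube comment, YouTube reply, Reddit comment) are assumptions inferred from the samples, not confirmed by this tool:

```python
# Illustrative sketch only: the prefix-to-source mapping below is an
# assumption inferred from the displayed samples, not a documented schema.
SOURCE_PREFIXES = {
    "ytc": "youtube_comment",
    "ytr": "youtube_reply",
    "rdc": "reddit_comment",
}


def classify_comment_id(comment_id: str) -> str:
    """Return the assumed source platform for a coded comment ID."""
    prefix, _, rest = comment_id.partition("_")
    if not rest or prefix not in SOURCE_PREFIXES:
        raise ValueError(f"unrecognised comment ID: {comment_id!r}")
    return SOURCE_PREFIXES[prefix]
```

For example, `classify_comment_id("rdc_mxyi88x")` would return `"reddit_comment"` under these assumptions.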
Comment
Carbon emissions? Look into the financial data (cost versus usefulness) and you will be shocked even more. I have been building AI since the 1980s and can say with confidence that the greatest danger comes from its imperfection, not from the superpowers Hollywood presents. Indeed, if you study the blockbusters, this becomes clear: the most impressive (often the key) features are completely outside science, with no clear way to implement them even in theory. Even SkyNet's superpower in Terminator (setting aside time travel) is fake. The internet is quite controllable if really needed; all traffic passes through just a few hubs, and governments use this when they want to.
Since its inception, AI has passed through several "winters". Each revival is accompanied by a commercial boom. What is really dangerous is the use of half-workable technology to pump profit.
6:50 Sadly, these systems are black boxes, and even their creators can't say exactly why they work the way they do.
7:02 For image generation systems, if they're used in contexts like generating a forensic sketch based on a description of a perpetrator, they take all those biases and they spit them back out for terms like dangerous criminal, terrorists or gang member, which of course is super dangerous when these tools are deployed in society.
8:47 It's really important that AI stays accessible so that we know both how it works and when it doesn't work.
Frankly, this is a basic rule for any applied science and technology.
youtube
AI Responsibility
2023-12-18T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugxa2LLQ6IDu0rn6lGd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwCo_0m7BQQrSUJzZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
{"id":"ytc_UgzwpegQZGJ63_exHvh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz4xRHdTXNVHW9wALd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzcEVrNwDtjhe_Fd454AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxrijh-Bw5pqTy9khh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4OCO2gXNgOA6LRTx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgzNRdRz9JzJkm41eGZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLtUrgM64mASnsAtN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwkn3yH5oz4DOSYhCJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
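A raw response like the one above can be validated before its codes are stored. Below is a minimal validation sketch, assuming the four dimensions shown in the coding-result table; the allowed-value sets are inferred from the codes visible on this page and the real codebook may contain values not seen here:

```python
import json

# Allowed values inferred from codes visible on this page; the actual
# codebook may be larger. Treat these sets as illustrative assumptions.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}


def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("response must be a JSON array")
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing 'id': {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows
```

A check like this catches both JSON syntax errors and off-codebook labels (a common LLM failure mode) before they silently enter the coded dataset.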