Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I CAN NOT WAIT for the AI bubble crash. The AI slop in information, news, image …" (ytc_Ugw6eHMUa…)
- "I feel you OP I work in film and things have been pretty rough due to strikes an…" (rdc_kt5hl58)
- "learn to adapt with AI and learn to be valuable with current and future technolo…" (ytc_UgzV8Z5Sb…)
- "Sure, ai can allow you to make art, but it'll NEVER be yours. It will always be …" (ytc_UgybXHJzH…)
- "In this lightboard video, Phaedra Boinodiris with IBM, breaks down what AI ethic…" (ytc_UgwBIPrHG…)
- "The AI managers like Musk and Altman act as if a bad outcome wasn't foreseen. Wh…" (ytc_UgxLaYkQm…)
- "I am an avid user of AI generated art. I use it to create placeholders, or to b…" (ytc_Ugxv9Xzjx…)
- "Anyone else kinda see what the point of chatgpt is? To make it so we cant think …" (ytc_Ugw2bBxk5…)
Comment
The worst danger is that AI would be smart enough to lull people into complacency and to be given too much trust, then it would turn around and occasionally do unexpected, completely stupid things. Basically, a program "bug", but much more difficult to anticipate, debug or avoid with thorough testing, than with conventional programming.
AI is not so dangerous to the degree that it is relegated to a pure computing and advisory role, not given "arms and legs" to physically impact the environment. You would not want to trust it with mission-critical functions such as deciding when to launch missiles.
Source: youtube · Video: AI Responsibility · Posted: 2024-07-03T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgywaZxZTXKnNvrcKeF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0rw4n58jJFtT7TZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwpRJrxDazs-y0P1Sp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyNNRyLrIiuHG7Liax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxT0wZ7593zk5UqybJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzoJtaRJeWkyReoWIl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3AJ_yXIK7-79kMeh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugys_s7wKRTMhJcxAYp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzvZhBawyhhTulkBo94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzr1QL5ttXHh6x9F_V4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```