Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Several regulations should be implemented. Things like:
- No single AI should have access to multiple systems. Example: if an AI controls the driving of a car, it should never have write access to communicate with other cars or with systems like street lights.
- AI should only provide decision support and never have full control of a system. This way you will always have a human with critical thinking monitoring the results.
- No fully autonomous AI should be used in weapons. Building or researching anything of the sort is a declaration of war against every country on earth. This would of course require most of the countries in the world to make agreements like we did in WWI. This is in line with the previous point, but for war.
youtube · AI Governance · 2023-04-18T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwH3io5ZzsUQDYGVkR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwM-a8chlR3Tsgs0O94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx6hjVY0cqhqvA-ICV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyH3jmAEq5Nwf8rh014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx5avQuJAhNSYuQl-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0yODaQrHb8p3cm394AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzj2mdmbK47VTNn1XR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzg4zJx2URE9Una-CR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvEagXlnafSG6Q-IR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlQVFwTsdcWslAePR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```