Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
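Programmatically, the same lookup amounts to scanning the stored coding results for a matching ID. A minimal sketch, assuming the results are kept as a JSON array of per-comment records like the raw response shown further down; the file name `coded_results.json` and the helper name are hypothetical:

```python
import json

def lookup_coded_comment(comment_id: str, path: str = "coded_results.json") -> dict:
    """Return the coded record for one comment ID, or raise KeyError."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a JSON array of per-comment codings
    for record in records:
        if record["id"] == comment_id:
            return record
    raise KeyError(f"no coded record for {comment_id!r}")

# Example: fetch the coding for the comment inspected below.
# print(lookup_coded_comment("ytc_UgjfKgT77yIRgXgCoAEC"))
```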
Random samples
- "Person in that damm Tesla was probably asleep lmao, fuckin idiots relying to muc…" (ytc_UgwE7ohpd…)
- "Hell no it's not! If we develop AI that's self sustaining, we will see what happ…" (ytc_UghmqNgxl…)
- "As a computer sci/math major I can say for certain that AI will eventually repla…" (ytc_Ugx5JsZ5n…)
- "Why? It's literally nothing harmful. It's just a video where we guess which clip…" (ytr_UgzE9U7he…)
- "@Gman052488 there's a reason the most fervent defenders of generative ai are fa…" (ytr_Ugw59dTtw…)
- "They are trying to kill off a market/industry before it exists. Making sure you …" (rdc_o0ldp3p)
- "I think that we need to treat our devices with care and kindness. For example, i…" (ytc_UgwgiBgI9…)
- "So you think about the origins of when AI was originally created I believe to be…" (ytc_UgwCrkpJc…)
Comment
I am not afraid of AI. If a robot becomes sentient what sense would it make to immediately kill all humans? What does the robot gain? Nothing. We can't be used as 'organic batteries' because we have already made batteries that would be more efficient than us as batteries, so the robot would use those. We wouldn't even be a good use of slave labor (it would take probably 18 years for us to mature, and can only work ~40-50, a non-sentient machine made by them would make more sense). Killing or enslaving us would make no sense for a robotic race. Most wars are fought over ideals or resources, the robots would not care about ours, nor need our resources. The only reason they'd attack is if they felt threatened, which would make sense. It would probably go the same way as the Morning War for the Geth and Quarians in Mass Effect.
Platform: youtube · Topic: AI Moral Status · Posted: 2017-02-24T04:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
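The coding scheme behind this table can be captured in a small data model. The sketch below is illustrative only: the allowed values are inferred from the labels visible on this page (the full codebook may define more categories), and the class name `CodedComment` is hypothetical.

```python
from dataclasses import dataclass
from typing import Literal

# Value sets inferred from the labels visible in this sample.
Responsibility = Literal["ai_itself", "developer", "none"]
Reasoning = Literal["consequentialist", "deontological", "unclear"]
Policy = Literal["none", "ban", "regulate", "liability"]
Emotion = Literal["fear", "approval", "indifference", "mixed"]

@dataclass
class CodedComment:
    """One coded comment, matching the per-item fields in the raw response."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```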
Raw LLM Response
[
{"id":"ytc_Ugg7JvT5Ke9_Y3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugjp4atLRhJUd3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggjRqdxE5U2-ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjfKgT77yIRgXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg4TuIQPSKXyngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjbVdE7EsFa9XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UghsMX_rPl0ZH3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggUQCGmIZf1bXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjYaewyXWwmjngCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiW2xFap75PT3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
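To consume a batch response like the one above, it is enough to parse the JSON array and index the items by `id`. A minimal sketch, assuming the response text is already in hand; the variable `raw` is illustrative and holds a shortened version of the array shown above:

```python
import json

# raw would hold the model's response text, e.g. the array shown above.
raw = '''[
  {"id": "ytc_UgjfKgT77yIRgXgCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

items = json.loads(raw)
by_id = {}
for item in items:
    # Reject items that are missing any of the expected coding fields.
    missing = REQUIRED_FIELDS - item.keys()
    if missing:
        raise ValueError(f"item {item.get('id')!r} is missing fields: {missing}")
    by_id[item["id"]] = item

print(by_id["ytc_UgjfKgT77yIRgXgCoAEC"]["emotion"])  # -> indifference
```

Note that the inspected comment's ID appears in this batch with exactly the values shown in the Coding Result table above.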