Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_Ugz0T_4so… — "Great analysis as to how AI works. In the unforseable future robots equipped wit…"
- ytc_Ugxbnvl1C… — "When ai coulda just sent you a picture of the colour black (completely black ima…"
- ytc_UgxoGu0gR… — "I believe it will take AI 30 years to completely replace physical labor jobs, an…"
- ytc_Ugyu_KxIY… — "The current form of AI is wonderful, as long as you have a person using it.…"
- ytc_UgxMdOWCw… — "what infuriates me the most is that ai bros are angry at the people that even ma…"
- ytc_UgxV1krN_… — "1. \"AI art\" isn't technological progress. It's theft. Full stop. 2. but the Lu…"
- ytc_Ugzr_WMOn… — "I only trust 2 ai detectors because the ones i have used flag my work i wrote as…"
- ytr_Ugzco5CJF… — "@babybatbailey03 nope. All it does is so unoriginal art, if you're a truly good …"
Comment
Robots will be superior to humans.
1. Our biological bodies are only 20% energy sufficient! Humans can only turn 20% of the food we eat into mechanical energy, and advanced robots would have much higher efficiency.
2. Humans need to sleep and can't work 24/7. 1 worker is working 12/h shifts for 7 days, and sleep 12 hours. The robot could work 24hours and is 100% more efficient than humans than the worker.
3. They are superior in space travel. Robots won't be damaged by radiation.
4. Mass reproduction
And the lists goes on.
We are inferior to future robots, if we can't control them, surely they will kill us all because we are a waste of resources and energy.
Platform: youtube
Video: AI Moral Status
Published: 2017-02-23T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgjpHbD1cb_bGHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiwpEgnkVIjz3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugj3v0gqenbpS3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiMAV2WUQbo3HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgjPO1aWk3kRM3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgiaWn-BMIFxdHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugjesjn2d2Is3XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjGnJ_vguQsu3gCoAEC","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"unclear"},
{"id":"ytc_Uggz-8DSC64i2XgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UghST1ICt0Ozk3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}]
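The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output might be parsed back into a lookup-by-ID table follows; the `lookup` helper and the two sample records are illustrative, not the tool's actual implementation.

```python
import json

# Two records in the same shape as the raw LLM response shown above
# (illustrative subset, not the full batch).
RAW = '''[
{"id": "ytc_UgjpHbD1cb_bGHgCoAEC", "responsibility": "none",
 "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
{"id": "ytc_UgjPO1aWk3kRM3gCoAEC", "responsibility": "developer",
 "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

def lookup(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the codes for one comment ID."""
    records = json.loads(raw)          # raises ValueError on malformed output
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]           # raises KeyError if the ID was not coded

codes = lookup(RAW, "ytc_UgjPO1aWk3kRM3gCoAEC")
print(codes["policy"])  # regulate
```

Parsing with strict `json.loads` (rather than regex extraction) surfaces malformed model output immediately, such as the unbalanced closing delimiter a raw response can contain.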