Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "You are missing the point really. Humans have been attacking one another throug…" (ytr_UgxJZ1QIv…)
- "Firstly I think that it's edited but then I realised that this is a real robot…" (ytc_UgxOvbJkm…)
- "And i'm telling you RIGHT NOW! That "woman" RIGHT THERE IS NOT REAL! AND I DONT…" (ytc_UgxGPJShm…)
- "I think its even worse. labor-capital dynamic is essential to capitalism. once c…" (ytc_UgxSteemB…)
- "Just keep in mind llm models can only give the details that are already publical…" (ytc_UgzHGk2M8…)
- "Robots are not the same as AI models, but surely the AI can use the robots.…" (ytr_UgzL93BVa…)
- "How nasty of you to blame grieving parents. It's so easy to do. How about blamin…" (ytr_UgwRwyU0T…)
- "The AI with the red circle is psyco. They have actually made a Karen AI..incredi…" (ytc_UgyXPvCSH…)
Comment
So, the progress ethic suggests that we should move forward with new technologies because that is how we change our norms, and that when weighing them, we should not take the potential dangers into account? I can follow that for robots and AI with little objection ... but when I apply it to other fields where incredible progress has recently been made, I come across several personal objections.
Take CRISPR and gene drives, recent technologies that allow us to rewrite the DNA and genomes of entire species: should they be moved forward for progress's sake despite the very clear implication that that progress can get out of hand and destroy ecosystems, wipe out entire species, and create diseases that "racially cleanse"? It affects the status quo in the same way AI and robots could, though I think we can all agree that would be bad.
At what point does the ethics of progress need to be weighed against a morality of progress?
Source: youtube · 2016-06-28T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugi0VpRQcJ-dlXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiNbJ86LvBRq3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugh1IEiVJXdTL3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghLxJuzfSM6WngCoAEC","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UggsqieRRiXa53gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugg4GbRqmgOic3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UghxMbj5aY2YSHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UghdOZ7MnsH1QngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjiENPkdW5qpHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UghmqNgxlJSGC3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
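The lookup-by-ID view above can be reproduced with a short sketch: parse the raw batch response and key each record by its `id`. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the sample response shown here; the helper name and the trimmed two-record payload are illustrative, not part of the tool.

```python
import json

# A trimmed copy of the raw LLM response above: a JSON array of
# per-comment codes. Only the field names are taken from the source;
# this two-record payload is for illustration.
raw_response = '''[
  {"id": "ytc_Ugi0VpRQcJ-dlXgCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UghLxJuzfSM6WngCoAEC", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw batch response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UghLxJuzfSM6WngCoAEC"]["emotion"])  # mixed
```

Keying on the comment ID makes each sample's coding result a constant-time lookup, which is all the "Look up by comment ID" view needs.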