Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxYeC0lH…`: "People will develop hobbies like acting, writing, art, martial arts ect. They'll…"
- `ytc_UgzOSQdWH…`: "Chat GBT is fantastic, it can do almost anything, i need to design a antenna?, a…"
- `ytc_UgwZHiL8-…`: "Well we can't let Ai do all the work and we can't lie and fake shit but Ai is go…"
- `ytc_UgysQI5L7…`: "Robots can make the world a paradise. Greed will make the world a hell. If robo…"
- `ytc_UgyoTgnzs…`: "The only possible argument for AI as an art tool is if the artist only uses AI t…"
- `ytc_Ugxotep83…`: "Is this voiced by your your own voice ran through an AI? It sounds... slightly o…"
- `ytr_UgyCVNlmL…`: "they need to know that the AI has hit a wall , there's not much more it can do ,…"
- `ytc_Ugy3z6rJu…`: "People can't even write a prompt to make a donut let alone get AI to take their …"
Comment
That's the massive problem.
Instead of using AI to help improve life, Again the core problem with the AI models like Chat gpt and others...is the whole "AI over people". While our government is looking at AI being the next threat to national security and a "if we dont do it, our enemies will do it." scenario.
Companies are using AI to suppress life. Other nations want to use AI to dictate how their societies would function.
I even argue that if these people go "so how do I stop my country from revolting against me" the AI would probably spit out something along the lines of "tax AI companies more and institute a UBI of 5k, plus x controls on these industries". Its even funnier when these politicians start using AI as an infallible source.
AI should be used in the medial field, scientific field and even help start increasing our ability to head into space. A person needs sleep and our scientists need rest, so if the Ai systems can come in and fill in the 8 hours where a researcher on cancer or fusion technology helps in creating models. That's where it should be used best, and where it'd thrive the most.
In the medical field, example most people do not want to work 100+ hours as a nurse, and studies have come out that a person's congnative function declines over a 48 hour period. Borrowing from Treck/Starwars an EMH/Medical Droid that can come in and relieve a doctor/nurse for eight hours would in fact improve functioning of our medical system. You will still need nurses and even in the future you still had doctors and nurses working side by side with Artificial intelligence. The issue is the administration is looking more at costs vs quality improvement. Costs come down when you can improve quality. Paying a nurse 80k for working 40 hours becomes the norm due to a medical droid nurse covering the additional 80 hours between. The same with a Doctor Assistant in the form of an AI program where diagnosis can be given via second opinion instantly. You'll still need a human eye for verification as both the AI program and Human can have the same errors but the reduction of medical errors is worth the usage.
The other objective is that Drones can help in protecting life. Example Deep sea welding, the mortality rate is 15% its sill quite high but AI assist could drop the rate down to 5%.
Source: youtube, "Viral AI Reaction", 2025-11-24T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgySySSrJsLdqDquA8F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnmG342MjWqLaBfwJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8bJOjxO_pNotTe_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzUvcy6GcMHT1p1yd94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxC2yC-QvtwHoHm3lJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxck-W8zDauniEIALl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxDt91TrYUnz6_mxJ94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyn3UNtqiUl7hue-HR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwN2SvuVBDSNqtYtaB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzFImbJ6HsxwT8wpfN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
```
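Because the raw response is a JSON array of per-comment records, looking up the coding for a given comment ID is a simple parse-and-index step. A minimal sketch in Python; the `raw_response` string below reuses two records from the response above, and the `index_by_id` helper is illustrative, not part of any particular tool:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# keyed by the comment ID and carrying the four coded dimensions.
raw_response = '''
[
  {"id": "ytc_UgzUvcy6GcMHT1p1yd94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxck-W8zDauniEIALl4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the raw response and index the coding records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)

# Look up one coded comment by its ID.
coding = codings["ytc_UgzUvcy6GcMHT1p1yd94AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

In practice the model may wrap the array in markdown fences or emit trailing text, so a production version would strip such wrappers before calling `json.loads`.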