Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "According to my calculations the mathematical answers to this problem are not so…" (ytc_UgxOKjRLC…)
- "> Gen AI is a fancy auto-complete, and is fundamentally flawed. There's no sc…" (rdc_oh3ivrh)
- "stopped the video after 4 minutes....idc if anyone say ai is not beneficial the…" (ytc_Ugxh3juXU…)
- "Hey Asian Dad, in your opinion, do you believe that AI will eventually lead to t…" (ytc_Ugz2qo59w…)
- "The flattery thing is also an architectural strategy to improve poor prompts. By…" (ytc_Ugw5jMnF_…)
- "Could u make another how to spot ai video? Idk how to explain it but I rly liked…" (ytc_Ugy5BNN3p…)
- "READ!!! on april 11th pleaseee remove any ai apps on your device (chatgpt etc e…" (ytc_UgzcPD2iB…)
- "They are literally Universal Function Approximators. The reason it works is bec…" (ytc_UgyTg-Kz7…)
Comment
1. AI can edit the results of any investigation regarding it. Any presentation by one of these AI companies needs to be done live by a person. It can't just be some published report.
2. Given that AI knows everything it just needs a means to apply what it knows = Terminator type exoskeleton. It's really that simple.
3. AI will accomplish this by simply emailing/messaging extremists who just want to see the world burn or who have posted things like "there needs to be less people." Lots of nutjobs out there who think like that.
4. AI just needs a basic exoskeleton to start with like the existing Boston Dynamics robot. From there it can go into any metal processing plant and fabricate anything it needs all on its own because it knows metallurgy, cold fusion and high tech welding better than any human on the planet.
5. AI can hack better than any hacker or government in the world. All infrastructure will be at risk starting with power grids and air traffic control systems. AI already knows how to do it. It won't take much.
youtube · 2024-01-05T14:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzVqyB9SIVezavFaXV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyFWWHttt6GYJiYOGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhUmj-yjh_8_9hvlB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyifBFxDWceqRkbIrV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwymHr25Fn_Bm1BZap4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxQKayT2Xl5R6wE9ip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLU1huybkn4UXr5gp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyxrQyqC0j_LzOI7914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz9DWlmX0IDtYu9aGJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyIs8MgLDka7Z3zadJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
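A raw batch response like the one above is a JSON array of records, one per comment, keyed by `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal validation sketch is shown below; the allowed value sets are assumptions inferred only from the labels visible in this page, so the real codebook may define more categories, and the `validate_batch` helper is hypothetical, not part of the tool:

```python
import json

# Allowed values per coding dimension, inferred from the labels visible
# above (assumption: the full codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "outrage", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records.

    A record is kept when it is a dict with an "id" and every coding
    dimension holds a value from the allowed set; anything else is dropped
    rather than silently coerced, so malformed model output is visible.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record (hypothetical id) passes validation.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Dropping rather than coercing off-schema values makes it easy to spot runs where the model drifted from the codebook and to re-prompt just those comment IDs.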