Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Palki ma'am, you do a very good analysis. But what is the actionable solution fo…" (`ytc_UgyHsJ8uu…`)
- "An A.I. basically spilled the beans about how it would take over the world witho…" (`ytc_Ugyv5CWPC…`)
- "A God what happens when an AI supercedes it's programming by making decisions ba…" (`ytr_UgjnnBXlq…`)
- "lol he confusing ai with super computers 😂 I'nthat case give away all your money…" (`ytr_UgxFWn1c5…`)
- "It’s clear is #1 concern is helping Canadians, not politics. And that’s how gove…" (`rdc_fn5nfgm`)
- "For some reason that reminded me of a situation I heard about recently. There'…" (`ytc_Ugz-miRpB…`)
- "In this case, F and Ph are variations in spelling for the name "Sophia." It's in…" (`ytr_UgyGWmjRK…`)
- "When the robots or human we're lining up the second robot or human was staring a…" (`ytc_UgxNiq1CN…`)
Comment
The AI 2027 author is incorrect. To start I'll remind everyone OpenAI lost $40 Bn last quarter and has never turned a profit Now he's Primarily incorrect because for some unknown reason he thinks Vonneumann I. e. Classical computer architecture can lead to super intelligence. It can't. Maybe if we could get a quantum computer to run in a useful way true AGI is possible. But, reason and logic doesn't sell books or generate clicks. These dolts use idiotic arguments such as "Humans exist and have brains that can think therefore since computers exist they can be made to think like humans. But better! " There are many things that exist in the universe that humans cannot replicate, take the sun for instance.
youtube
Viral AI Reaction
2025-11-04T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyG3SJTxFrJaKTxRrd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwp_SvxNwza7gUy9z14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugze9wkbN5kWYTliKOh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy8oSy741h9-XUsNlZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxuTArInSimDwKdSsZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxeYiDRG-tMX3HunIl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydApB1wgTxsIGAt014AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgyWzNL4__1qYlSZHSp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy9weXUCfH7FQE6xup4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxmPM2FdwVJSoFOcOB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
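A response in this shape can be checked before it is stored. The sketch below is a minimal validator, not the pipeline's actual code: the allowed values per dimension are inferred only from the records shown above (the real codebook may include more categories), and the function name and structure are assumptions for illustration.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# Assumption: the real codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "industry_self", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record with a missing id
    or an out-of-codebook value, so malformed outputs fail loudly."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
coded = validate_coding(raw)
print(coded[0]["emotion"])  # outrage
```

Validating at ingest keeps a single hallucinated label (e.g. `"anger"` instead of `"outrage"`) from silently contaminating downstream counts.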