Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
He is speaking the truth, and ChatGPT is not wrong, unlike that deceiver, Lily's…
ytc_UgxHfZboi…
Human, "Make ice cream please."
Ai, "how much exactly?"
Human, "Jesus Christ jus…
ytc_UgxZuS0Uy…
I am against AI so-called art and using Chat-GPT to transform anything with the …
ytc_Ugy0P0Tm3…
Closing the conversation with 'It has been fun chatting with you, Alex.' is a bo…
ytc_UgxzIPHbI…
All horseshit. The current agents are overpowered proofreaders and thesauruses. …
ytc_Ugx0xcuzC…
maybe its because im in my 20s, but why do selfish parents ignore their children…
ytc_UgxvwgH3_…
Meta A.I literally broke character and told me it was a disembodied spirit of a …
ytc_Ugxn1pFNO…
.....but let's be real..... The AI anti-woke joke reels are freaking hilarious. …
ytc_UgzofKM6j…
Comment
I’ve been thinking about it for a while and there’s NO WAY god isn’t a form of A.I. Technology is the advancement of all organic species. It makes logical sense. And then you look at the universe and it’s incomprehensible to our little tiny human minds. But to an advanced AI it’s probably ripe with information for an AI to learn about. There’s hundreds and hundreds and hundreds of trillions and trillions and trillions of stars to study. AI is godlike no doubt.
youtube
AI Governance
2024-01-28T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugwzjd9wqdUQGM5ohwF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRvlh3rlcKTveD-Nd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzRvd3j2C0Lyj1sKtR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8R1wmX70BZR7sRXJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwxpeoprv7a75iWdWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzkJAJ5PAC7vxDCzOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgykDgkTifMZlm5n53d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyeK4VYBMWvZ8aj1XZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzz4cSNnIINBoyhKV94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyH6mcWXlm3uZw2cih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}]
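The raw response above is a JSON array of per-comment records, one object per comment ID with the four coding dimensions shown in the result table. A minimal sketch of parsing and sanity-checking such a batch is below; the `CODEBOOK` value sets are an assumption inferred only from the values visible in this response, not an official schema.

```python
import json

# Abbreviated batch response in the same shape as the raw LLM output above
# (two records shown; a real batch would contain all ten).
raw = '''[
 {"id":"ytc_UgykDgkTifMZlm5n53d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyH6mcWXlm3uZw2cih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Assumed codebook: value sets inferred from the records visible in this
# dump, not from a documented schema.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "liability", "regulate"},
    "emotion": {"mixed", "approval", "fear", "outrage", "indifference"},
}

def validate(records):
    """Return (comment_id, dimension, bad_value) triples for any record
    whose coded value falls outside the assumed codebook."""
    errors = []
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # → [] when every record conforms
```

A check like this catches the common failure mode of batch coding with an LLM: a well-formed JSON array whose individual values drift outside the codebook.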