Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "HAQI is coming: Homo Atomic Quantum Intelligence. It will transcend human compre…" (ytc_UgxjPPySE…)
- "I'm an accountant. Our CFO quit in May, I took on most of his responsibilities. …" (rdc_hjw70x6)
- "That Midjourney lawsuit feels like the first real stress test for the whole “fai…" (ytc_UgxOqWdSp…)
- "I'm still salty about tech world turning to crypto, and then to AI. We were prom…" (ytc_UgwDSFxWL…)
- "@SeekingTheLoveThatGodMeans7648 I think the issue is an AI is trying to use limi…" (ytr_Ugzz04ZNL…)
- "There is just no way that it's possible to accidently rollback too far in an org…" (rdc_mrvvwd5)
- "that's why trump wants to have us rely on fossil fuel, AI doesn't yearn for the …" (ytc_UgyNlIsw1…)
- "Answers that sound like they are coming from a person who has not thought about …" (ytc_UgzpUQnzu…)
Comment
Some people say that artificial intelligence is good, while others say that artificial intelligence makes people less intelligent. But basically, if we consider the goal, it doesn't matter if there is artificial intelligence or not, the important thing is that we should always take steps towards getting better, whether with artificial intelligence or without that dear friend.
youtube · AI Governance · 2023-07-10T21:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugzbo_1_KZCcyIA84dl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNKrk0xqNixRJYOG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyBRnCs7XxUXTNjk2V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMYmL1RxQaieJqIC94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5DfPRq5L4q3rbdrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmOK8uzN2onxdsHlN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxinDG0D28Y-0NYKPh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUkcM0ZCCOXNhdSvd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwO1Gpgc4H8CxqUont4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugznn4PUB1L6cCa5vMl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]
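The raw response above is a JSON array with one object per comment, carrying an `id` plus the four coding dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and validated before loading it into the dashboard — note the `ALLOWED` value sets are inferred only from the labels visible in this sample and are likely incomplete:

```python
import json

# Labels observed in the raw responses above; the full codebook may define
# more values (an assumption -- extend these sets to match your schema).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"approval", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) and
    return {comment_id: codes} for the rows that pass validation."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip rows with no comment ID to key on
        bad = [dim for dim, allowed in ALLOWED.items()
               if row.get(dim) not in allowed]
        if bad:
            # Unexpected or missing label: flag for manual review
            print(f"{cid}: unexpected value(s) in {bad}")
            continue
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage with a one-row batch in the same shape as the raw response above
# (the id "ytc_x" is a hypothetical placeholder):
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
codes = validate_batch(raw)
print(codes["ytc_x"]["emotion"])  # approval
```

Keying the result by comment ID mirrors the "look up by comment ID" flow: the dashboard can fetch a coded record directly from `codes[cid]`.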