Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "@T in this particular case, they need, in fact, if you hear the whole video and…" (ytr_UgwKrQWE-…)
- "Even if it gets fully replaced by AI, it doesn't mean the hobby will die. I ju…" (ytc_UgwHj7RP7…)
- "Where I work, we went through the following stages: 1. "You should all be findi…" (rdc_ofhkd0s)
- "This is wrong. The answer is not "there" Generative AI will make a best guess ba…" (ytc_Ugz8nB74J…)
- "Am I resisting or participating in the Great AI Enshittening if I now insist on…" (ytc_UgwiDcZwA…)
- "90% of what Humanities majors do in Corporate jobs can be done by AI. Too many…" (ytc_UgxvbHT2_…)
- "Also fun fact: Ai CAN'T BE COPYRIGHTED. Copyright only applies to human made pro…" (ytc_Ugzq1piZQ…)
- "googles Gemini is a failure(so far) and isn't doing what they claims!!! AI is a…" (ytc_UgxehfXmp…)
Comment
Btw I ultimately don't agree that tools should have more regulations than: explanation of risks and how to avoid them, obligatory before start of using it. In case of some interactions - transparency that: you are talking to bot/AI, that it can be wrong/halluciate and that its creator dont take responsiblity for its use other than it is meant for.
2. As Sam said - you can sue for harm, if they didnt warn you before they should be responsible, just like medical companies for addictions, banks for economical crisises and social media for related issues and profit over safety way of operations.
youtube · AI Governance · 2023-06-29T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
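The dimensions in the table correspond to the fields the model emits for every comment. As a minimal sketch of checking a coded row, assuming the value sets observed in this page's samples are the full codebook (an assumption; the real codebook may define more values):

```python
# Allowed values per coding dimension, inferred from the samples on this
# page (an assumption -- the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"user", "developer", "government", "ai_itself", "none"},
    "reasoning": {"contractualist", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed", "resignation"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the coding dimensions whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed]

# The row shown in the table above passes the check.
row = {"responsibility": "user", "reasoning": "contractualist",
       "policy": "liability", "emotion": "indifference"}
assert invalid_fields(row) == []
```

A check like this catches the occasional off-schema value in raw model output before it reaches the database.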
Raw LLM Response
```json
[
{"id":"ytc_Ugye5P668H0sFEzba1B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWj33fFsnXXkXy_nZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyxo1aWgTHd3EsFvwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxX9taOsHS6xjiYKGZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyAzK72rDK1CT1gTQx4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxMnJeU-xP1KgAC47F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4yZixOD852d0mnmR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUoQOOeAOeKDyGxz54AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyubwfKjUUpA6D0VMt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxsdp BH8Gv1qu9Yz654AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```