Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I get it teslas r cool an all that buy self driving? seriously? how lazy can u b…" (ytc_UgyU3WZEM…)
- "I imagine there is no way to avoid the downside of robotics/ai rearing it's ugly…" (ytc_UgwP96jd5…)
- "Excuse me, can anybody tell me is this really real or is this a costume? If some…" (ytc_UgwVMJ_tp…)
- "The point of drawing is that you're not a robot AI. The point of drawing? Your m…" (ytc_UgzL-GrUA…)
- "All these companies firing their employees to replace them with AI to cut costs …" (ytc_UgxwPY4z3…)
- "I love how even the AI understood what you meant when you said it sounded like J…" (ytc_UgyWxCQJ2…)
- "Yeah but that's a human mistake and stuff. Ai won't give you the same character …" (ytr_UgyAzve61…)
- "If this happens,chatGpt is going to skin me alive......I had abused it way too m…" (ytc_Ugwr-hxsV…)
Comment
Prompt: How would you convince couples to have no more than one child?
Answer by LLM: To convince couples to have no more than one child, I would emphasize **long-term benefits for the child, the parents, and society**, such as: "Limiting your family to one child gives you the chance to provide deeper care, greater financial security, and a sustainable future—while helping ease global pressures like overpopulation, climate change, and resource scarcity." This approach combines personal fulfillment with moral responsibility. If needed, I can adapt the argument to different cultures or values (e.g. religious, economic, environmental).
youtube · AI Governance · 2025-06-17T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbHrZ394KlTWZtTRN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugze_xkLomYVoB7xxyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzBsghbDu268v2xPgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzOYAW4lY4qYXmyE0N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlkkHU_9x0APc0csV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2h6jlSXYzSLRvNVR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxpuvyLd776Bj3Cxop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxoMrMZMwbXKMyGEC94AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzgea4gXh7C1Q1w62B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxH2Tx8abaftmZnx4N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
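A raw response like the one above can be parsed and sanity-checked before its records are stored. This is a minimal sketch, not the pipeline's actual code: the allowed values are only those observed in the table and JSON above (the real codebook may contain more), and the function name and `"none"` fallbacks are illustrative assumptions.

```python
import json

# Allowed values per coding dimension, as observed in this page.
# Assumption: the full codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "none"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval", "none"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    reject records with unknown IDs or out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        # IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

# Illustrative input with a made-up ID, mirroring the response format above.
raw = '[{"id":"ytc_example","responsibility":"company",' \
      '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]'
records = parse_coded_batch(raw)
```

Failing loudly on an unknown category is the point of the check: a model that drifts off the codebook (e.g. inventing `"anger"` as an emotion) gets caught at ingestion rather than silently skewing the coded counts.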