Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples:

- 8:16 4th line. Methodology for normalising safety statistics for partially autom… (ytr_Ugy7S6ZT9…)
- Your not taking into account the exponential growth we are seeing AI is being us… (ytr_UgyV-PGfj…)
- was this video made with AI ? hmmzz that knive does not look real :)… (ytc_UgzTMALcY…)
- @41-Haiku I tried to be clear but it's a bit babbly. Tl;dr at bottom: If ASI ha… (ytr_UgypwtV20…)
- Yeah advancement of ai and robots is also very beneficial. Like it has helped a … (ytr_UgzFxX6gU…)
- Solution "Hey AI, Im going to change your reward function now so that when I shu… (ytc_UgxavQGV-…)
- That's a great usage of AI art! Especially if the city backgrounds are extremely… (ytr_Ugzw3UpHR…)
- Keep buying TSLA stock until it crashes when enough investors realize while Tesl… (ytc_UgyfCZSPC…)
Comment

> The fact is that many people believe general AI can be here within a couple of years and that by recursive self-improvement, we may have super AI shortly after that. This is potentially extremely dangerous as nobody knows how to ensure that AI is aligned with what is best for humanity.

youtube · AI Governance · 2025-08-05T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgwuBbi1W30H3JmA__B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzs3ty5lLvPTKteWoJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgydbfMpoL_VhuHIVkR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7KWD6XHTUsC9AP3l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzBZPFnyyTI-okkL514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyX84T1RQNhBC71cO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxUp5OW_QQDCDlHeOF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxyDRTfmWbNb7WqkDR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyYst-12e95XTidJtR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzSxpIahXkBzBHxwCd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]
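The raw response is a JSON array with one object per comment, carrying the four coding dimensions shown in the table above plus the comment ID. Parsing and sanity-checking such a batch takes only a few lines; the sketch below is not the pipeline's actual code (the two records are copied from the response above, and the surrounding loading logic is an assumption):

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above;
# in the real pipeline this string would come from the model call.
raw = '''[{"id":"ytc_UgwuBbi1W30H3JmA__B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzs3ty5lLvPTKteWoJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'''

records = json.loads(raw)

# Every record must carry the id plus all four coding dimensions.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
assert all(set(r) == expected_keys for r in records)

# Tally one dimension across the batch.
emotions = Counter(r["emotion"] for r in records)
print(dict(emotions))  # {'fear': 1, 'approval': 1}
```

Because the model returns strict JSON, a malformed response fails fast at `json.loads`, and the key check catches any record missing a dimension before it reaches the coding table.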