Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This is not a ethical dilemma, most likely a self driving vehicle would be dicta…" (ytc_UghhlM_s-…)
- "So learning from other users of ai such as chatgpt and changing its output is a …" (ytc_Ugx4J9Voi…)
- "Metaphorically speaking, let's say artist A copyrights a circle and artist B cop…" (ytc_UgxH4ItJF…)
- "Agree so much. Intelligence requires consciousness. Period. It is very tiring …" (ytc_UgwsD_6W8…)
- "I for one have spent pages and pages exchanging pleasantries and polite gratitud…" (ytc_UgxRYjfZi…)
- "How about student with slow learning? They even can't learn by their self. So, w…" (ytc_UgxmCeI9R…)
- "Liberalism caused this. First it was 'if you have preference for your own nation…" (ytc_UgxFR7lBx…)
- "The reason we are doomed to lose to AI is because true AI learns from humans and…" (ytc_UgwU7U-zQ…)
Comment
I have a question for these AI experts.
How can an AI expert, predict the future, when AI (Chat GPT) is very probably, using parallel realities to access information?
By definition all probabilities will be simulated, so which future are the experts referring to?
youtube · AI Governance · 2025-09-04T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyiFgM9g5zC2YhJhQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOFRR6vCAZdhsvYVl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0g-_n3IZu8xgTlK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyaeezuHEprSFfPasR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxliplDUdrEVOzOXMl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzDBQOKX6SGty7q-N94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz70ZfCYKi0fYE5Tr94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsPQRyJ-OTmMfIizV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyPfiecsO2X4sXLadV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyH7OIf3l2QftUnxSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
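A raw response like the one above can be parsed and sanity-checked before it is stored. The sketch below is a minimal, hypothetical example: the allowed labels per dimension are inferred only from the values visible in this sample (the real codebook may define more), and `parse_coding_response` is an illustrative helper, not part of any tool shown here.

```python
import json

# Allowed labels per coding dimension, inferred from the sample response
# above; the actual codebook may include additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "government", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "resignation", "approval", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError if the JSON is malformed or a dimension carries a
    label outside the (assumed) vocabulary.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage: look up one coded comment by its ID (toy ID for illustration).
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"mixed"}]')
print(parse_coding_response(raw)["ytc_x"]["emotion"])  # mixed
```

Validating labels at parse time catches the common failure mode where the model invents an out-of-vocabulary category, so bad rows never reach the coded dataset silently.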