Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by its comment ID or by browsing the random samples below.
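A minimal lookup sketch, assuming (hypothetically) that the raw records are stored one JSON object per line in a `raw_responses.jsonl` file whose record format matches the Raw LLM Response shown at the bottom of this page:

```python
import json

def lookup_raw_response(comment_id: str,
                        path: str = "raw_responses.jsonl") -> dict | None:
    """Return the stored raw LLM record for a comment ID, or None if absent.

    Assumes each line of the (hypothetical) file is one JSON object with
    an "id" field, as in the Raw LLM Response records shown below.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: one of the IDs that appears in the raw response below.
print(lookup_raw_response("ytc_UgxhNhyo_zmGfaSgNnx4AaABAg"))
```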
Random samples

| Comment preview | ID |
|---|---|
| I Hate AI So Much. Yesterday, A Couple People Got Mad At Me Because I Said I Don… | ytc_Ugy6l_8Zf… |
| As scummy as selling AI artwork off as real is there is some missed irony here o… | ytc_UgxTfnzDZ… |
| It’s women and feminism that created the mess! Now AI is the solution because wo… | ytr_Ugy3WhGDm… |
| Only one for deep learning. Then you have Turning, McCarthy and Minsky that is b… | rdc_nsgpp3z |
| Sorry, it seems the arguments from Yan and Melanie was completely emotional and … | ytc_UgyMd39UC… |
| (Edit: by ideal use cases I mean things like AI image recognition being used to … | ytc_Ugw_gbgjQ… |
| @aw-resistance9968 Now that I completelly agree, but the worst part is, no one s… | ytr_UgwhUsm1t… |
| But I'm pretty sure that theirs "AI" is well trained on topics like "let's do pr… | ytr_UgylH_IhB… |
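A sample batch like the one above can be drawn for spot-checking along these lines, reusing the same hypothetical `raw_responses.jsonl` store:

```python
import json
import random

with open("raw_responses.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Pick up to eight coded comments at random and print a short preview line.
for record in random.sample(records, k=min(8, len(records))):
    print(record["id"], record["emotion"])
```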
Comment
Interesting discussion, seems like Steven's more onboard now about the AI risks :) Also 2 interesting points and questions to ponder:
1. Professor Russell mentioned that Singapore has a coherent AI strategy for future, what is that strategy and where can I read about this?
2. Professor is working on trying to keep AI systems below human capabilities to ensure controllability, is there a scenario where we can balance the AI capabilities to be equal to human intelligence and maintain control? This could potentially expand human wellbeing without being subjugating or being subjugated
3. What is the one thing that would incentivise more safety prioritisation for large tech firms, is it regulation or access to markets etc? Problem is that companies like OpenAI and Google have already access to most of the large economies of the world.
Platform: youtube · Topic: AI Governance · Posted: 2025-12-04T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
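The coded values can be sanity-checked against the label sets. The sets below are inferred only from the responses visible on this page; the actual codebook may define more categories:

```python
# Label sets inferred from the coded responses on this page (an assumption;
# the real codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "outrage", "indifference", "unclear"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside the label set."""
    return [dim for dim, labels in ALLOWED.items()
            if record.get(dim) not in labels]
```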
Raw LLM Response
```json
[
  {"id":"ytc_UgxhNhyo_zmGfaSgNnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzi70UQKeAnDkqg0_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtMi-kto18UCPHCu14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwA9ZI9baa1Pb2_0mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwbeks40lI9SX4pBHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyzHtlCz5y5cti_Gd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3LcxGMCLzxUMeZf14AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwucjlkdtpFnLCuYQp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwCJbkb2sliJVZ2O014AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgylV5Kg6xWWLDV2R_x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
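Because the model codes comments in batches, the raw response is a JSON array. One way to index it by comment ID, shown here with just the first record above as sample input:

```python
import json

# `raw` stands in for the full model output shown above, verbatim.
raw = ('[{"id": "ytc_UgxhNhyo_zmGfaSgNnx4AaABAg", "responsibility": "none", '
       '"reasoning": "consequentialist", "policy": "none", "emotion": "approval"}]')

by_id = {r["id"]: r for r in json.loads(raw)}  # index the batch by comment ID
print(by_id["ytc_UgxhNhyo_zmGfaSgNnx4AaABAg"]["emotion"])  # -> approval
```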