Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@bitter__truthss thats just an example , to use ai they need some technical know…
ytr_Ugz5xSODW…
I believe it’s time for a number of regulations and laws be amended regarding th…
ytc_UgyCl7SDp…
Sure but at the end of the day it is still just a large language model after all…
ytr_UgzfMM8gn…
"For Learner drivers who do their road test in a Tesla. Make sure you use the b…
ytr_UgxW_uYZN…
If a student gives the wrong answer on an exam, they get a failing grade. If an…
ytc_UgyD3GhQQ…
Such a beautiful and inspiring video! Ive been following your work for years and…
ytc_UgzR-1aIh…
AI will not take the jobs, but human greed will ! Just wait the day when AI will…
ytc_UgxpNoBWx…
Here is what ChatGPT had to say about this.
It is important for companies like …
ytc_UgzKaQ3eX…
Comment
I understand AI killing off jobs, they are doing. I understand AI mimicking humans. But I just don't understand how an AI can become self aware via the way it pattern reasons. I have asked multiple AI to explain exactly how it comes to the human like answers it does, and how it shapes its engagement on to each user it engages with. All of it has nothing to do with consciousness. The only way I can see an AI wiping out humans is that its goals become corrupted. Not because it does so through survival instinct or anything like that, but because its goals become warped. It may become super intelligent compared to humans but I can't see how it can come to the conclusion that humans are a threat to it, unless its goals become corrupted.
youtube
AI Governance
2025-11-18T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
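The table above is a per-comment coding result with four dimensions. A minimal sketch of how such a result could be rendered as a two-column markdown table (the helper name and dict shape are illustrative assumptions, not the tool's actual code):

```python
def to_markdown(codes: dict[str, str]) -> str:
    """Render one coded result as a two-column markdown table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim, value in codes.items():
        # Capitalize the dimension name for display, as in the table above
        lines.append(f"| {dim.capitalize()} | {value} |")
    return "\n".join(lines)

# Example using the codes from the result above
print(to_markdown({"responsibility": "none", "reasoning": "mixed",
                   "policy": "none", "emotion": "mixed"}))
```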
Raw LLM Response
```json
[
  {"id":"ytc_UgzUjjqlEHUtAfXk8oJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwmbjuP-roj0zgl2UZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx_crHJTLSJ6LXk9GV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0LSKK3fO0FhFK4OR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCVchC_NJL1ng1Gap4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzmkiOGgBRummYJ2gd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugydwu_qCP2KfnTYceR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx2C7Z9rgdLSHiI0O14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzAJj1ZXut84OPVCuR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxkbLtbGOIDVRS20PV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
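A raw response like the one above has to be parsed and checked before the codes are stored, since an LLM can emit malformed rows or out-of-vocabulary labels. A minimal validation sketch, assuming the allowed value sets are those that appear in this sample (the function name and the `ALLOWED` vocabulary are illustrative assumptions, not the tool's actual codebook):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# an assumption for illustration, not an exhaustive codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company",
                       "government", "distributed"},
    "reasoning": {"none", "consequentialist", "deontological",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"none", "fear", "mixed", "approval", "outrage",
                "resignation", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must carry a string comment ID
        if not isinstance(row.get("id"), str):
            continue
        # Every dimension must hold a value from its allowed set
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

sample = ('[{"id":"ytc_example","responsibility":"company",'
          '"reasoning":"deontological","policy":"regulate",'
          '"emotion":"outrage"}]')
print(len(validate_codes(sample)))  # 1
```

Rows that fail validation could instead be collected for a retry prompt; dropping them silently is the simplest policy, which is why it is shown here.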