# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples

- "Stop watching ai slop channels like this one. They stole from dozens of real cre…" (`ytc_UgzPmeKSb…`)
- "Yeah no if you use AI for something like this _might as well fucking elaborate o…" (`ytc_Ugw0RY_8x…`)
- "@whodis4097 trained neural networks. hard to respond in this site as my response…" (`ytr_UgwK2tQDY…`)
- "So i guess we should all die qnd starve ourselves then. You dont need to be so p…" (`ytr_UgxP71fgb…`)
- "How many surgical procedures are unnecessary but happen so the doctor can make m…" (`ytc_UgxNcDElv…`)
- "Actually, I think that alignment towards submissivity will be easier to achieve …" (`ytc_UgyluCUit…`)
- "Sophia is more human than a robot. Fear of being obsolete and not being useful.…" (`ytc_Ugx3RRe6N…`)
- "How does he know the AI isn't just choosing from likely word combinations, like …" (`ytc_UgzO1o6Or…`)
## Comment

> My theory for AI is that it is limited by the same things as we are. We won't have robots overtaking us as we would have evolved more by then.
>
> An AI will not create new information out of thin air which means that it too will theorise and come up with philosophical questions it too cannot answer. There are limitations to just about everything in the universe and so will AI.

Source: youtube · AI Governance · 2023-03-12T22:1… · ♥ 5
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response

```json
[
  {"id":"ytc_UgwqIsJ3NYD5GE1uK0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz-d7HT0UCnJQubrYt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHoH5QHE9xSbEYhsl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbOWIJ8-ARrm2ryLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxv-PmwTEO_GP4jcgN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8qobOKBa8Q9tjR9x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwIwvd_MXAoUesmI0h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy5Asck8gKjbefbWv54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHm8B2crWSEe2Ylk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxwAwrUeV4tvl8BtdN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
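The raw response is a JSON array with one object per comment in the batch, each carrying the four coding dimensions from the table above. A minimal parsing sketch in Python (the `SCHEMA` sets below list only the category values that appear in this page; the full codebook may define more, and `parse_llm_response` is a hypothetical helper, not part of the pipeline shown here):

```python
import json

# Allowed values per coding dimension, as observed in the results above.
# Assumption: the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a batch response into a mapping of comment ID -> coded dimensions.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the schema, so malformed model output fails loudly.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value for {dim}: {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Usage with a single made-up record (ID is illustrative only):
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"fear"}]')
print(parse_llm_response(raw)["ytc_EXAMPLE"]["emotion"])  # fear
```

Validating each record against a closed set of categories before storing it is what lets a coding dashboard like this one trust the `Coding Result` table: any hallucinated category is rejected at ingest time rather than silently displayed.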