Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a record by comment ID, or click one of the random samples below to inspect it.
- `ytc_UgxOGow72…`: The main argument I see for AI as reference is "But what if what I want is too n…
- `ytc_UgyalKPbq…`: This is interesting. Bear with me. I don't watch TV. As one who believes in God…
- `ytc_Ugz-OHkE4…`: The only problem is that there are too many supposed fans who love these deep fa…
- `ytr_UgyUJ1o98…`: You're asking about multiple issues. The U.S. Copyright Office has published de…
- `ytc_UgzzJQ28c…`: Not now. With current technology it would be near impossible to create a robot w…
- `ytc_Ugxid3MY8…`: Howdy hi hi, Here's my take on AI that I think helps people understand. Im…
- `ytr_UgzU65YUZ…`: Is that true though? The major considerations regarding AI aren't all that techn…
- `ytc_Ugw9uiN-O…`: Please, I beg you, just learn how LLMs work. There is nothing magical, it is tra…
Comment

> I'm fine with the introduction of AI to life and them doing our jobs and working better than us so long as there is a fail safe of sorts because if a robot develops the ability to emote lets call it then they can create an adjective of getting rid of us the humans because we are no longer a benefiting factor to them and because they are programmed to be a race Similar to humans and humans are flawed in the way that in the way that we don't respect and care for the other species on the planet

Source: youtube, posted 2013-06-24T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
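A coding result like the one above can be modeled as a small schema. The sketch below is a minimal Python example, assuming the four dimensions shown in the table; the allowed value sets are inferred from the records visible on this page and are not necessarily the full coding scheme.

```python
from dataclasses import dataclass

# Value sets inferred from the records visible on this page;
# the real coding scheme may define additional categories.
RESPONSIBILITY = {"ai_itself", "company", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"liability", "regulate", "none"}
EMOTION = {"fear", "indifference", "approval", "mixed"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed value sets.
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"unexpected {field_name!r} value: {value!r}")
```

Validating each record this way catches the common failure mode of LLM coding: the model inventing a label that is not in the scheme.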
Raw LLM Response
```json
[
{"id":"ytc_Ugwf8OHi6HLTfI3l9Tl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2wim4Us7Hq2JT5fh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzs9gC2tJDC-stZdFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvDTGSeY2ddK3gNQp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwZ3jfrMIcqMn2plR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJOLbkO5X0n5YcUn54AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxGE5rP2don4fja5Yd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsC4kTTP1wwi9SEFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwc6bUu3Lj4NXzfPAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqA6jr9GQVtv3byz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
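Since the raw response is a plain JSON array, looking up a record by comment ID (as the search box above does) reduces to parsing and indexing it. A minimal sketch, assuming the response parses cleanly as shown; `raw_response` is a hypothetical variable holding the JSON text:

```python
import json

def index_response(raw: str) -> dict[str, dict]:
    # Parse the raw LLM response (a JSON array of coded comments)
    # and index the records by comment ID for O(1) lookup.
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Usage, against the response shown above:
# coded = index_response(raw_response)
# coded["ytc_UgwqA6jr9GQVtv3byz14AaABAg"]["policy"]  # -> "liability"
```

Note that the displayed Coding Result matches the last entry in the array, so the comment shown above corresponds to ID `ytc_UgwqA6jr9GQVtv3byz14AaABAg`.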