Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @banoriocanencia2248 Thank you for commenting! Did you enjoy watching the movie … (ytr_UgyetmaK9…)
- AI subscriptions are becoming the new utility—affordable add-ons (₹300/month) th… (ytc_Ugwhlsntm…)
- NVIDIA Jetson Orin: In June 2025, wreckage revealed the integration of this high… (ytc_UgyQu6PI7…)
- The unpredictability of AI’s impact is a lot, but Pneumatic Workflow has helped … (ytc_UgwjXFvVn…)
- I think we'll be ok, if a robot can't tick a box to pretend it's human....… (ytc_UgyrrDofq…)
- I'm a late millennial (early 30s) and have witnessed the fast-paced changing env… (ytc_Ugzpk4jS_…)
- Bruh .... I just saw this short but i used farmer and tractor example when I was… (ytc_UgzDaaxC-…)
- AI is actually more dangerous than realized, me personally have experienced this… (ytc_UgxACzGR6…)
Comment
If the question of whether a AI is a person, try testing whether or not the AI meets the preferred criteria of personhood discussed in the last episode about personhood. If a strong AI was tested against the cognitive criteria it may be able to pass all requirements due to its own psychology and social ability's. While testing it against the genetic criterion its a automatic failure due it not being organic. So I find the first step to solving the question of whether AI can be a person or not, lies with the most popular philosophical belief of personhood and its criteria chosen by society.
Platform: youtube
Posted: 2016-08-09T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Uggeu_dL2yyGR3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiIBZ-cQU9HDHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiV2FgtcXmuBngCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UggzYa8S3hn_p3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggl5ij_czn1Y3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgizldNKvQmfYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghZuYnwWCnE53gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghqISwFTBtRP3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghudDn8bG56WHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiEeEdmu4MF33gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"}]