Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ofcos their is healthier methods, since when does that matter? The whole history…" (ytc_Ugx03XW7F…)
- "Asi thought is not bad to use ai, at the very last is a tool, but asking a ai to…" (ytc_UgybIOsYV…)
- "I always thought that healthcare is one of the jobs that is irreplaceable by AI …" (ytc_Ugy44MqjI…)
- "AI isn't really required there, we've had computer vision models capable of that…" (ytr_UgyXHrPKQ…)
- "I asked ChatGPT the same question and got the following answer: • To reach ato…" (ytc_Ugw0occ1y…)
- "@roxsy470So what if I decide to ask a commission for free by typing words…" (ytr_UgwNYDbbK…)
- "Hey, I really appreciate your comment, and honestly, I think it's good that you …" (ytr_UgwpJQxjF…)
- "I can't talk to my chatgpt4o it keeps interrupting when I talk to it. Can anybod…" (ytc_UgzEDBF2V…)
Comment
While the interview is interesting, this man is obviously not a believer in the spiritual workings of God the Father, Jesus Christ, and the Holy Spirit. God said we were made in His image and nothing else is. AI will never have the “humanness” that humans have (consciousness, emotions, feelings…). They can be intelligent and act like humans, but never to the extent of an actual human because of their lack of being created in the image of God. I say to all worried after this podcast, read Revelations in the Bible, we know the end, and AI isn’t it. Repent and be saved and do what you’ve been called to do on this earth: Love God, then love yourself so you can love others well! ❤
youtube · AI Governance · 2025-08-03T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzAPqwGQeR6uYPLql14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAz7vgqpc49iOyeZN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFx-H-pJihj2fi1R94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw62bnUxrkxkZRBbtx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTpZ0vGP6TgIB5VRl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwq4Ms5JU9DpYdOy2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyb99UcUlG-sU9Ff6J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUxo1x1Xsr4b09dJJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRhHDc45fjnYPO_bN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx6Hk0entIAZTbt5Od4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
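The "look up by comment ID" step above can be sketched in a few lines: parse the raw LLM response and index each coding record by its `id`. This is a minimal illustration, not the tool's actual implementation; the field names come from the output shown, while the required-field check and the `index_codings` helper are assumptions for the sketch.

```python
import json

# Raw LLM response, exactly as shown above.
raw = """[
{"id":"ytc_UgzAPqwGQeR6uYPLql14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAz7vgqpc49iOyeZN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFx-H-pJihj2fi1R94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw62bnUxrkxkZRBbtx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTpZ0vGP6TgIB5VRl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwq4Ms5JU9DpYdOy2h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyb99UcUlG-sU9Ff6J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUxo1x1Xsr4b09dJJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwRhHDc45fjnYPO_bN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx6Hk0entIAZTbt5Od4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# The four coding dimensions plus the comment ID (taken from the output above).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID."""
    by_id = {}
    for row in json.loads(payload):
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing fields {missing}")
        by_id[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzAPqwGQeR6uYPLql14AaABAg"]["emotion"])  # indifference
```

In practice the model may return malformed JSON, so a real pipeline would also catch `json.JSONDecodeError` and log the offending response for re-coding.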