Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

| Excerpt | ID |
|---|---|
| Calling oneself an "ai artist" is such an oxymoron! If you think you're an ai aR… | ytc_UgyXo3q2z… |
| I understand your concern! In the video, Sophia emphasizes the importance of bal… | ytr_UgwQ80bBj… |
| If they argue that AI learn like a human does, that means they create like human… | ytc_UgxaxETyI… |
| Bro telling us how bad is ai, while using 50% ai made clips in his shorts 😂… | ytc_Ugz8LbRQR… |
| Hey @darlysonalvehs3280, thanks for your comment! If Transformers were a reality… | ytr_Ugx-yGPDS… |
| You said not to replace, why did they test an ai against doctors, because they w… | ytc_Ugydt8PlJ… |
| We are a small tech company, we have dabbled with AI assisted applications to he… | ytc_Ugxw4nsgr… |
| The only reason I semi support AI is for people who use it for original ideas an… | ytc_UgxpLuAF6… |
Comment
1. Weak AI ( our current technology)
2. Strong AI- self aware AI, human level intelligence
3. Super AI- very scary, genius level- human level intelligence, with the ability to make itself even smarter, analyze, process, interpret and think hundreds if not thousands of times faster than any human ever could. Think of Terminator meets iRobot on steroids.
I do believe at some point this century will reach strong AI. Siri and that robot that beat that world-class chess player are excellent examples of weak AI. Trust me those computers will become smarter over the next couple decades. Ray Kurzweil the technologist, is estimating year 2045. I believe 2045 at the earliest. A lot of people from the technology community did a survey and there's a 80 percent probability that it will occur within the next hundred years.
youtube
2015-07-30T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
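A minimal sketch of validating one coded record against the four dimensions shown above. The allowed-value sets are inferred only from values visible on this page (in the table and the raw responses) and may be incomplete; the function and variable names are illustrative, not part of the actual pipeline.

```python
# Allowed values per coding dimension, inferred from this page;
# the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self", "ban"},
    "emotion": {"approval", "indifference", "mixed", "fear", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coded record shown in the table above.
record = {"id": "ytc_UghIxxuueQpi6ngCoAEC", "responsibility": "ai_itself",
          "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
print(validate(record))  # → []
```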
Raw LLM Response
[
{"id":"ytc_UggHC7rLa4Gu0XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggqZ-Bfm6zNFXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgiRHZUgZugRGHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UghIxxuueQpi6ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghHmu3UOIYh5ngCoAEC","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjviAibkxEovXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugg7AFhd6A9w3ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugix_u8m5HqkxXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugi2IphvaEHTxHgCoAEC","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjXp-Uti9IrE3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
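The raw response above is a JSON array of per-comment records, which makes the page's "look up by comment ID" feature straightforward to reproduce. A minimal sketch, assuming the response parses cleanly as JSON (the two records below are copied from the array above; variable names are illustrative):

```python
import json

# Parse the raw LLM response and index the coded records by comment ID.
raw_response = """
[
  {"id": "ytc_UghIxxuueQpi6ngCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugi2IphvaEHTxHgCoAEC", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up one coded comment by its ID.
print(by_id["ytc_UghIxxuueQpi6ngCoAEC"]["emotion"])  # → fear
```

In practice the response would be wrapped in a `try`/`except json.JSONDecodeError` so malformed model output can be flagged for re-coding rather than crashing the batch.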