Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgzZf8PiP…`: "I drive a truck. Good luck AI handling that. Thank you Elon for proving for almo…"
- `ytc_UgzrzygFf…`: "If you give an LLM the same prompt, and the same seed, it will produce the exact…"
- `ytc_Ugzufn757…`: "Rob Miles is my favorite ai safety researcher! So glad to see two of my favorite…"
- `ytr_UgyhM7CY8…`: "@Antimonious Well those would be the drunk people, people who text and drive, el…"
- `rdc_cjp2qks`: "So one person makes a career decision and an entire space program goes to hell? …"
- `ytc_UgzkArgMm…`: "A family member and I got into a really heated argument over A.i. because they s…"
- `ytc_Ugx0w1aju…`: "Absolute homerun by Sal Kahn and Kahn Academy team! 0:00 Current view on AI in E…"
- `ytc_UgwSgJIo6…`: "All respect to NDT, but his protectionism of his own profession in the face of c…"
Comment
Hey Stephen!
Thanks for stopping by Reddit for an AMA!
In recent interviews you’ve reiterated how you believe the implications of Artificial Intelligence could spell disaster for the human race. Surely once AI is created, it will advance at an ever-increasing rate; exceeding anything we could ever imagine. There are others however, such as Kevin Kelly or Eric Davis, who believe that technology has a way of merging with evolution, and eventually we might transcend our own biology and consciousness using AI as a platform. Futurists like Ray Kurzweil see these things becoming a reality as soon as 2045, with the current state of Moore’s Law and the exponential rate of information technology.
What are your thoughts on using AI to transcend our current state of biology and consciousness?
If it happened, would you consider this a natural part of evolution in the timeline of human development?
Source: reddit | Topic: AI Bias | Posted: 1437998445 (Unix timestamp, 2015-07-27) | ♥ 43
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_cthnptq", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ctho8tn", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_ctiuyth", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_ctj3xgo", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_ctian0n", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
```
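The lookup-by-ID view above amounts to parsing the raw response (a JSON array of per-comment codes) and indexing it by the `id` field. A minimal sketch in Python, assuming this response shape; the `RAW_RESPONSE` excerpt and the `lookup_codes` helper are illustrative, not part of the app's actual code:

```python
import json

# Illustrative excerpt of a raw LLM coding response (same shape as above).
RAW_RESPONSE = """
[
  {"id": "rdc_cthnptq", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ctj3xgo", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]
"""

def lookup_codes(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    codes = json.loads(raw)
    # Build an index from comment ID to its row of coded dimensions.
    by_id = {row["id"]: row for row in codes}
    return by_id.get(comment_id)

codes = lookup_codes(RAW_RESPONSE, "rdc_cthnptq")
print(codes["emotion"])  # -> indifference
```

Each row carries the four coded dimensions shown in the table (responsibility, reasoning, policy, emotion), so rendering the "Coding Result" table for a comment is just a lookup by its ID.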