Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I am much more concerned about bad actors deliberately developing hostile AI (li… (ytc_UgzPqsUOd…)
- Sorry, you’re wrong. Time will tell. Right now using AI is more like being an ar… (ytc_UgwtlOPb0…)
- when you meet a girl at night at a party but then you see her well in the mornin… (ytc_UgxylMvrh…)
- @ElectroBlep LOL the police department using Flock AI would be in trouble if the… (ytr_Ugw6Ixpwt…)
- Having a "conversation" with ChatGPT is like talking to a Plinko board. LLMs do … (ytc_UgwX50B-M…)
- a.i. will Create ppl Who INTEND to Break up or Disrupt their systematically tigh… (ytc_UgwuXfP96…)
- No matter how much of a shortcut a drawing tool is, using it still requires more… (ytc_UgyeXIkDj…)
- Who will then run the data centers, power and cooling systems for AI? Who will m… (ytc_UgxPXKvTh…)
Comment
> I got my undergrad in CompSci in 1988, my senior year, I took AI as an elective within my major, it was not a required class. On the first day, we discussed and enumerated everything that AI could not do, everything from beating a chess master to reading handwriting or driving a car. It can now do everything on that list and then some, many things better than any human on earth.

youtube · AI Governance · 2025-06-19T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyOcOrRWPRTaamoCfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz_6BF-qrF0d3wK-Xd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzxEv_dkZdKmKyNQdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxonceApChfecRu5Sx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz-V0oVp9gOBQh-P3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxHDx1FQ34JIbG_o3l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzn9GQJVKYRowjdIwF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8EDh61b0lrGVreKF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyexkVWruXB2a5eo6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrmB9BMl4FlZiW_Md4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
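The raw response above is a JSON array of one object per coded comment, each carrying the four coding dimensions. As a minimal sketch of how such a response could be parsed and sanity-checked downstream: the `OBSERVED` category sets below are assumptions built only from the values visible in this sample, and `parse_response` is a hypothetical helper, not the tool's actual code.

```python
import json

# Category values observed in the sample response above.
# ASSUMPTION: the real coding scheme may define additional values.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"approval", "resignation", "fear", "outrage",
                "indifference", "mixed"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any value outside the observed category sets."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in OBSERVED}  # KeyError if a dimension is missing
        for dim, value in codes.items():
            if value not in OBSERVED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        coded[row["id"]] = codes
    return coded
```

Keying the result by comment ID mirrors the "look up by comment ID" view above, so a coded record can be matched back to its source comment.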