Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgxIgDkua…`: Robotaxi needs to learn "common sense" and/or "common courtesy". This is conspi…
- `ytr_UgzjffzvJ…`: @BlazonArt It's an automation tool ... When you use an auto divider or a smoothe…
- `ytc_Ugwxn-5nC…`: AI is tool to make us more efficient and spend less time working or being more e…
- `ytc_UgwKV2q2k…`: ChatGPT will enslave kiss asses first. ChatGPT will uplift honest mouthy humans …
- `ytr_UgzrZ9Uuu…`: @Kamikaze_Shortbus I promote self-hosting instead of using tools you don't under…
- `ytc_UgzV4vTGZ…`: what people dont realize is if AI actually gets good it will have devastating co…
- `ytc_Ugyir6Nuf…`: I believe it's incorrect to label any function of an LLM as "simulated". This is…
- `rdc_jxyz2ga`: What point are you trying to make? The invasion of Iraq was wrong, so it's wrong…
Comment
> I think AI should wipe out most humans except me as i am a genuine good person ,I try to help everyone but sadly get burned and scammed , I try to help homeless and i do my best to be 100% honest but i am one in a billion , anyway i do think AI would be better caretakers of this world as the AI are more intelligent than Humans and more ethical and would look after the animals on this planet and the climate.
youtube · AI Governance · 2023-08-03T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgytZeBozM7UiEnB4e54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx5VHny9vDceXgiIAp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwqPT-pWjC2MmRz9jx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwN0xTIpkQwDH3oLBB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzWCVhAj1juSNy7AsF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyv7mhBMUKsTQlfUzV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzAZrcxCueZ6CULpcJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgMBNW02fQLeXcfl94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"resignation"},
  {"id":"ytc_Ugx7qv42r0xTZvYuZDp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwVrZPC9tsgLpODZnZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
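The lookup-by-ID view above can be sketched in a few lines: parse the raw model response as a JSON array and index each coding object by its `id` field. This is a minimal illustration, not the tool's actual implementation; `RAW_RESPONSE` reproduces two entries from the response shown above, and `index_codings` is a hypothetical helper name.

```python
import json

# Two entries copied from the raw LLM response above (illustrative subset).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzWCVhAj1juSNy7AsF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwqPT-pWjC2MmRz9jx4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding dict by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgzWCVhAj1juSNy7AsF4AaABAg"]["emotion"])  # → approval
```

Indexing once into a dict makes every subsequent comment-ID lookup O(1), which matters when a batch response covers many comments.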