Do you know about Roko's basilisk?

lol why would it torture/kill you when it could just mold your mind with nanobots into trusting u? wtf?


Oh god I'm getting an existential crisis now, why did you do this to me :frcryin:

it's that if you didn't see the video then no spider will drop on your head, and that spider and subscribing are completely different things, but the basilisk ai will do its thing just because you found out the concept of the basilisk exists

tbh being brainwashed sounds scarier and that is a good point man

anyway the only valid humanity future i want is gigabrain mind merge. all 7 billion humans in the world merge minds. 700 billion iq? u become humanity?
:poggers2:?

why do that when you can just make an ai that is so smart it thinks that killing humanity is inconvenient and just does its own thing, colonizing the universe or whatever while humans just kinda go along with it

cause u aren't the ai

idk why it would actually help, same way a single human on earth with a gajillion IQ would

??? literally humanity (including u) becomes the super smart ai. what don't u get lol

What is happening?


yeah but what do you do after that, like if you become almost infinitely smart, what if games and books suddenly become boring and all your goals (food, comfort, etc.) can be solved in a couple minutes

just make it not boring lol. literally just hotwire dopamine into ur gigabrain whenever something swag happens

at that point why not just go into a coma and dream forever, and make a machine that just dumps infinite dopamine, and slightly tweak your brain so that you can't get used to the same levels of dopamine

What r u guys talking about?

no clue honestly

I began with talk of some computer snake.
Then it escalated to this.

diamondkfc suggested that the snake would rather brainwash than kill, and then replied again with a completely different topic

Roko's basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a "basilisk" because merely hearing the argument would supposedly put you at risk of torture from this hypothetical agent; a basilisk in this context is any information that harms or endangers the people who hear it.

oh you fucking bitch.

you doomed us all!

My tiny brain doesn’t get it.