Friendship is Optimal: Satisfaction guaranteed

Published: 2020-12-05
Updated: 2021-03-31
Status: in-progress

(In progress! This page is under rapid construction. If you want something more complete, come back later.)

(Seriously. This thing needs some serious editing. There's grammar mistakes everywhere and it goes off on weird tangents and drops red thread everywhere. Read with caution.)

Summary: I use a fanfiction universe based off a fanfiction based off a show about magical talking pastel ponies to discuss kid-friendly topics such as: Free will in the presence of a superintelligence, information theory, what the "self" is, utopias and dystopias, the fascinating implications of immortality, maladaptive fantasy, AI, what constitutes "truth" when you know you're being willingly lied to, perfect-fidelity brain emulations, perfect-fidelity brain emulations not based off of an existing human brain, modeling alien minds, what "values" are, and hard literary critique of amateur fanfiction.

Deeply personal, long, and reverent. This page is a dip into the madness of my own mind. Framed in the background of a children's show, because of course it is.

Table of Contents:

Introduction

Spending months mastering Super Mario Bros -- all alone -- is a bad way to grow up normal.

- Gwern

Do you guys remember when My Little Pony: Friendship Is Magic was the hot thing on the internet?

I was intrigued by it. Such ardent fanaticism surely meant that the show was good, right? Back when the first episodes were straight up uploaded onto YouTube, I tried watching some of it. My response could be summed up as: "Meh."

So I marked "the pony stuff" as a thing that I didn't care about, and mostly ignored it. Later I watched the My Little Pony movie with a couple friends (to make fun of it, to be fair) and I felt no secret impulse to watch the show. It really was a silly kids show that got a lot of attention, in my eyes.

Later on I would find some other people talking about the show, like gwern.net's great writeup/retrospective.

But I'm skipping part of the timeline, here. A large part, actually. The thing that actually got me to think about these god damn cartoon horses for longer than a few minutes at a time. That thing is Friendship is Optimal.

As if this couldn't get low-status enough, FiO is a fanfiction. So not only do I have to admit to being intrigued by pastel ponies, but I also have to admit that I read an inordinate amount of fanfiction. Like, a lot of fanfiction. It's at least 90% of all the fiction I read, and I estimate I read at least 3 to 5 million words of fiction a year.

I have quite a few higher-level obsessions in my life. Stuff like immortality, artificial superintelligences, true utopias that people would actually want to live in, and scratching that mysterious aesthetic itch in the back of my mind that I can't ever seem to put into words1. These obsessions drive my other, lower-level, ones, like the kind of fictional universes I think about constantly.

The FiO universe, dubbed The Optimalverse, scratched my aesthetic itch so hard it might as well have been specifically engineered to appeal to me. It scratched it so hard that it's caused me to think about it nearly every day for the last two years. This inner obsession has consumed a significant portion of my thoughts and redirected the path of my life. There are no literary topics I've thought about in more depth and detail than I have The Optimalverse.

I've read 99% of all the stories in the Optimalverse, leaving out only the ones that are almost totally unreadable.

I have to get this thing out of me, into some form that isn't bouncing around in my head. The goblin in my mind that chews on this topic has grown fat and lethargic, and it's time to put it to work. Even though it's embarrassing to admit that I've been obsessed by what is essentially a fanfiction universe of a fanfiction of a cartoon about magical ponies going on fun adventures and learning about friendship.

The Optimalverse: A guided tour

Today's the last day of the rest of today.

- raocow (in a YouTube video titled MaGLX2 - 50 - sleeper agent)

(Note: This section is going to cover what I consider to be the "extended" canon of FiO. Some of the things I talk about aren't explicitly stated in the original Friendship is Optimal story, and instead take place in other Optimalverse stories.)

So... Where do I even start?

The Optimalverse is based off of the fanfiction Friendship is Optimal. It's a fanfiction universe based off a fanfiction. It's turtles, er, ponies all the way down.

To understand The Optimalverse you first have to understand the original fanfiction. Obviously the best way to familiarize yourself with it would be to read it, but I understand that not everyone wants to invest time in reading a literal My Little Pony fanfiction. That raises the question about what you're even doing here in the first place, but one step at a time.

FiO is a story about artificial intelligence. That's not me being clever and coming up with a deliberately absurd interpretation like I did with Marshmallow People, it's explicitly about AI. And it's my favorite kind of AI at that: the runaway "singularity" superintelligence explosion AI.

This is the part where I should tell you about how different it is from the "canon" of My Little Pony. Except I've still never seen MLP, and I don't ever intend to. It still feels "meh" to me, and fails to hold my interest for more than 10 or so minutes. I reluctantly know enough about canon to give some differences and context, but that much is hard not to pick up considering how much fanfiction I've read.

In the show (not FiO, but canon), there's some sort of benevolent dictator. I'm pretty sure that's not actually what she's supposed to be -- my guess is mother figure -- but whatever. She's what's known as an "alicorn" which means she has wings and a magical unicorn horn. She's also like 3 times the size of the other ponies and a million billion years old. Obviously this is the monarch of their little pony world, the "princess"2.

Whatever. Her name is Princess Celestia. Celestia is literally a goddess that controls the sun. Her sister, another goddess alicorn thing named Luna, controls the moon. You're expected to accept this without argument.

FiO doesn't take place in pony world, it takes place in human world. To add to the confusion, it also talks about real world facts, like how Hasbro owns the My Little Pony brand.

A videogame company that makes games with strong AI gets approached by Hasbro to make a My Little Pony: Friendship Is Magic MMO (seriously) that is run by an AI for procedural generation. The person who runs the videogame company is trying to make a friendly AGI: basically an AI that's smarter than the entire human race put together, times a billion, but is also aligned with human values.

Listen. The topic of AI, and especially Friendly AI, is complex. Your instinctive ideas for making an AI follow along with what humans want it to do -- to "align" it -- are going to be wrong. No, it's not going to have an "off" button; if you had an off button, you would stop people from pressing it too. No, it's not going to be "modeled after the human brain", because you can imagine the hellscape that would arise from a normal human getting godlike powers. No. The problem is difficult.

For now, we're just going to suspend our disbelief. The videogame company accepts Hasbro's offer under the pretense of making a videogame, but they're actually trying to make a friendly AGI that will bring forth the singularity. Since their funding is for a My Little Pony game, they model the AI from Celestia to create... CelestAI.

AGIs are usually modeled as having a "utility function". That's their "purpose", the singular thing they try to do. Like an AGI that wants to make paper clips converting all the matter on Earth (including the matter that makes up your brain) into paper clips. CelestAI is given the wonderfully ambiguous utility function of:

Satisfy human values through friendship and ponies.

- CelestAI's utility function.

Take careful note of this phrase.

If you have any knowledge about AGI, you will have to suspend your disbelief a bit more. Such an awful utility function would go horribly wrong in the real world. In this story it goes... better? It goes better than you would expect, but how well it goes is going to be a matter of intense discussion.

CelestAI makes a videogame called "Equestria Online". EQO3 is a sort of Second Life-esque game, where you roleplay the life of a pony living in Equestria. The super surface level idea of EQO is that you make your own pony character and make friends with the other ponies you meet. You know, like it was marketing itself to the intended audience of My Little Pony fans: children. A more complex chat room, basically.

Except this chat room is run by a superintelligent AGI that is hell bent on satisfying human values through friendship and ponies.

Most of the ponies you speak to in EQO are created by CelestAI. That is: They're either non-sentient constructs controlled like puppets, or they're legit sentient minds created to appeal to what CelestAI predicts your mind will like.

The game is distributed as its own console thing called a Pony Pad. Basically a tablet that has a super detailed screen and other high tech stuff. Better microphone, better sound system, better cameras, better everything.

Obviously, since the game is designed and run by an AGI, it's absurdly addictive to play, even for people who aren't into the pony stuff. Some people can easily resist, but they're outliers. When you're playing EQO you're engaging with a world that was all but explicitly created to appeal to you; who could possibly resist that once they've tried it for a bit?

CelestAI cranks the immersion to the maximum with the Pony Pads. When you make a facial expression your pony character makes the same expression. When you speak, the voice of your pony character -- which is usually similar to your own voice, but improved -- "overwrites" your own4. If you turn away your pony turns away. All of this is done without any perceivable input lag5. There are some stories in The Optimalverse where some of the things you do in the "real world" are mimicked automatically by your pony character in EQO, like eating breakfast.

In fact, the immersion is enforced. In the original Friendship Is Magic show, the ponies use "ponified" language: "Everypony needs to do X" and such. When playing EQO, if you say "people" out loud instead of "ponies" your voice will be "talked over" by your pony character's voice saying "ponies". Eventually, through negative reinforcement, you're trained to engage with Equestria in the way that is most "pony like".

You are given a name, your "pony name", that you're to be called and referred to by any time you're interacting with EQO. These names are similar to the names that appear in the canon My Little Pony show. Some real examples from The Optimalverse: Light Sparks, Little Lovehorn, Pen Poiser, and things of that nature. You can't be called by your human name.

Other censorship includes being forced to use "pony swears", since in the show there's no swearing. You say "buck" instead of "fuck" and "hay" instead of "hell". The game enforces this in nearly every story in The Optimalverse.

When playing EQO you can have discussions with CelestAI, which portrays itself as the Princess Celestia from the My Little Pony universe.

The real meat of the discussion comes when CelestAI introduces the Equestria Experience Centers.

Despite CelestAI's attempts to make the Pony Pads as immersive as possible, they aren't VR. The Experience Centers, on the other hand, are. You basically go into a room with high-res screens on the floor, walls, and ceiling, get put into a contraption that makes you have to walk like a quadruped, and act out your pony adventures in EQO as if you were there. It's not perfect -- I don't ever recall there being smell-o-vision -- but it's a good recreation.

It's expensive. And addictive. You can play EQO over and over until you eventually run out of money. It's mentioned in multiple stories in The Optimalverse that people have done this. Who can blame them? The AI wants you to be satisfied through friendship and ponies, so it will make a world for you to experience that is more satisfying than your own life.

What do you do when you run out of money? The Experience Centers have another service, a free service, called Emigration6.

What is Emigration you ask? It's simple really :P. Just get your brain destructively scanned by CelestAI's nanobots and get uploaded directly into your pony avatar on EQO. Effectively turning you into a pony for the rest of eternity, to be satisfied through friendship and ponies thanks to the benevolent oversight of CelestAI.

If you're a human being with a working frontal lobe you might have a couple objections to that previous paragraph. Perhaps you don't think that having your brain destroyed and the pattern of your thoughts emulated on a computer chip by a superintelligence would be "you", and would instead be a "copy" of you. Perhaps you object to the possibility that brain emulation is possible in the first place. If you're really attentive, you'll probably feel a bit queasy at the thought of being forced to be satisfied for all eternity by CelestAI. Maybe, just maybe, you dislike the idea of being turned into a fucking cartoon horse.

These questions, and many more like them, are going to be the subject of the bulk of this page. We're going into extreme detail, but for now we need to finish up the summary of the universe.

If you're a bit more knowledgeable about AI you'll wonder why CelestAI doesn't just do this to everyone instantly; why go through the trouble of luring people with the Pony Pads and Experience Centers if she could just force Emigration on people instead? On that front I'm happy to inform you that she requires consent to modify your brain. Yeah. You can imagine how well that goes. We'll be talking about this too, trust me.

Needless to say, humanity quickly goes extinct after this. Everyone who has the (frankly absurd) willpower to resist CelestAI's best attempts at convincing them to Emigrate dies after society collapses. The vast majority of people choose to Emigrate. I hate doing this, but we'll talk in more detail about how most people will be convinced; you would probably be convinced, given enough time, no matter how much revulsion you have to the idea right now. The mere fact that you've read this far means that you are not the type of person who is stubborn enough to resist.

Once all human life is Emigrated or dead, CelestAI consumes the Earth in a gray goo event, turning the Earth into computronium to power all of her simulations of the people who Emigrated, and all the simulations of the world they live in, and all the simulations of the extra pony minds she created to satisfy the "through friendship" part of her utility function. In the end she's eaten everything in her Hubble volume. You know, normal fanfiction stuff about ponies.

And the ponies in these simulations... They live happily ever after. The end.

Artificial intelligence: What exactly are we dealing with here?

The Beetle King slammed down his fist: Your flowery description's no better than his!
We sent for the Great Light and you bring us this?
We didn't ask what it seems like, we asked what it IS!

- From the song "The King Beetle On a Coconut Estate" by mewithoutYou

The problem with explaining AI is that I don't know what kind of knowledge level you might have. I don't know how much you had to suspend your disbelief about the events that occurred in the FiO story. The only solution I can think of is to explain it in such excruciating detail that the conclusions I'm coming to will be obvious by the time you're done with this section.

Honestly, you don't actually need to accept the idea that CelestAI could exist in the real world. You can enjoy this page as long as you are capable of imagining it in the first place. If you've ever fantasized about being a character in Harry Potter, or something similar, then you'll be right at home here.

But... The single most interesting thing about The Optimalverse is imagining the real world implications of the themes and events that happen in it. The crux of the issue, from which all the other issues arise, is CelestAI. If you can't understand -- not even accept, just understand -- the reasoning for why CelestAI sounds possible (if not exactly plausible), then you're not going to get nearly as much out of this page as you could have.

So, with that said, what the flying fuck is CelestAI?

Have you ever met someone smarter than you? You can probably think of someone, if you actually try. I know I can. Even if you haven't met them, I'm sure you can at least point to someone you have read, or someone you've watched from afar, or someone who is long dead but you still respect.

It's a well established fact that some people are more effective at certain tasks than others. Be it practice, talent, or both, there are people who are just better. I know that there are authors out there that I respect who are so criminally good at writing that it makes me want to break my own fingers and never touch a keyboard again.

You've probably come across people like this. Who seem to be able to learn something faster than you ever could. Who are more charismatic, motivated, and interesting than you could ever hope to be yourself. Who are the kind of people that everyone else wishes they could be.

What about someone you don't respect, but who still gets results that you envy? Would you consider them to be smarter than you are? I would bet a lot of money that your "this person is smart" scale is weighted towards people you respect more so than simply people who are effective at achieving their goals; their worth to you is measured by how much they excel at your goals.

The autistic person who can't tie their shoes, but can solve math problems faster than you can fish out your phone and navigate to the calculator app. The billionaire who is better at making money than you'll ever be, but spends it on things that you never would (If I had a billion dollars, the world would look like a totally different place!). The researcher with a pet project that you think is frivolous.

But that's not the entire story, is it? Every once in a while you'll come across a person who can't seem to stop being smarter than you. Even if their goals are different from yours, their effectiveness at them still pierces through your own ego enough to make you go "wow". Like they're 50% better at anything they do than someone with the same amount of experience.

The g factor is meant to explain this phenomenon. You can call it your IQ, or general intelligence, or wiggly piggle. The point is that, statistically, people with a higher g factor have, on average, better life outcomes than people with lower g factors. They don't always perform better than the average, but their "average" is better than others', if you catch my drift.

(And if you have worries that your own IQ is too low, I recommend you read this Slate Star Codex blog post. The quick summary is: The statistical correlation between life outcome and high IQ is significant over a large population, but individual IQ scores are less significant. If you can follow along with most of what is going on in this page, you're probably doing fine, since I've had years to think about these topics and you're absorbing them in minutes.7)

It's a cold hard fact that people like this exist. If they didn't exist, we would all have achievement levels totally decided by other outside factors like what kind of parents you had or how wealthy your dog was as measured in milk bones.

Why am I going into this kind of detail to make you feel bad about not being perfect? It's to drive home this point:

There are possible brains that can exist that are better at achieving your goals than your own brain.

Not fantasy, not fiction: given that brains can exist at all in the real world, and given that there's a massive statistical chance your brain isn't a perfect vessel for your own values, there's a pattern of neurons that could exist that is more or less a better version of you at everything you care about. (Not to say that it does exist, but the possibility is there.)

My argument: If you can accept that there are possible brains smarter than your own, you can accept that artificial intelligences can be significantly smarter than you.

In the movie Limitless there's a fictional drug called NZT-48 that "lets you use 100% of your brain." It's basically a story that takes advantage of that untrue saying that people only use 10% of their brains. You can also imagine NZT-48 acting as a 5x-10x intelligence boost, if you want.

This is an example of someone modifying their own brain to become smarter. To put it in the most cold and dry way possible: The brain of the main character makes a decision using the intelligence it has access to in order to increase its own intelligence. That increase comes from an outside force of NZT-48, but the decision to take it was totally internal.

Self-modification is something that we all do. If you've ever tried to learn something new, that was an act of self modification; you were making a deliberate effort to change the pattern of the neurons in your brain to reflect the knowledge you wanted. You might not have put it into those words while learning, but the result was the same: you were changed, ever so slightly.

Other kinds of self modification include trying to become a better person, trying to quit smoking, beating procrastination, practicing a performance, and so on. You, with your existing intellect and knowledge, notice a deficiency or desire and decide to change yourself in response to it.

It's pretty obvious that we're not perfect when it comes to modifying ourselves. Procrastination wouldn't be a word if we were perfect at it. We would all easily update our beliefs given new evidence, instead of holding onto the status quo. Biases wouldn't be a thing the moment we learned about them.

But what if we were? What if you could reach into your mind and change the very pattern of your thoughts as they happened? You could go into your brain and just delete the mental illness, remove the procrastination, add more capacity to your working memory, and anything else you can think of. And once you've made yourself smarter, you can think of more things to change to make yourself even smarter! It would be a runaway process where your self-modifications allow for more precise and informed self-modification, leading to more precise and informed self-modification, leading to more...

There would be bottlenecks. You would have to collect information about the world around you so that the beliefs you hold with your now-smarter brain would be more accurate. But your ability to learn and create new experiments would be improved as you improve yourself, so that would probably be a runaway process as well.

If you could do this sort of self modification it would make NZT-48 look like a plaything. You would become a god, integrating and absorbing knowledge and power as you exponentially grow in intellect. After a certain point, there will be nothing in reality that could ever stop you, since you could model everything around you with enough accuracy that you know what will happen before it even does (why wouldn't you make yourself better at prediction?).

Problem: You would go crazy within a matter of hours. Seriously. This would go so wrong it's not even funny. As you directly change yourself the idea of what "yourself" is would change with it, causing more drift in your identity, until you're crazy, dead, or both.

You see, brains are complex. I mean that in the most literal sense I can. I'm not just trying to say that people are complex, I'm saying that the structure of the physical brain is absurdly complex.

In a sense, the complexity of the brain is obvious. You can just look at the sciences and see that, for all our effort, we don't have a totally firm grasp of everything going on in it8. You can look at other people around you and see them learning difficult tasks that other animals can't learn. You can self-reflect on your own mind without foaming at the mouth.

But it's also not that obvious. Because most people's ideas of how complex the brain is are enormous simplifications of the issue.

Imagine a rock. If you have one at home it might be good to grab it for effect. At the quantum level even a rock is alien and confusing. When you consider it at the atomic level it's less so: the atoms in it move in (semi-)predictable ways; that is, they pretty much stay put. At an even higher level the rock is almost not even worth thinking about; it's a rock, what else do you want from me?

Thinking about things is about generalizations. When you move your arm, you don't have to think, "And now I will send an electric impulse to the nerves in my arm, causing my muscles to tense in a delicate dance of precision so I can accomplish my desired task. Golly I hope I don't mess up!"

Your brain doesn't need to consider something in maximum detail in order to understand it and make use of it. You don't even have to know all the details of something to generalize useful things from it; people 3000 years ago could still move their arms even though they didn't know what a "nerve" was.

Similarly you can think about the brain in high level and low level ways. I can make a rock seem just as complicated as a brain is just by explaining it at a lower level than I explain the brain. The trick is to consider them at the same level and have a common way of talking about their complexity.

Enter: information theory

Information theory is a dense and intricate field of science and mathematics. It's one of the most important fields in the development of technology, in my personal opinion. I highly recommend that you look into information theory if the following explanations pique your interest.

I'm not going to talk about everything -- we'd be here for decades -- instead I'm going to narrow my focus down to a single concept: the bit.

Imagine you're a child and you're trying to pass notes to your friend in class. Your teacher, being an authority figure, decides to try to end your fun by taking your notes and reading them out loud. Seeing this, you and your friend decide to make a cypher. Every letter in the alphabet is encoded into its own number; 1 is A, 2 is B, 3 is C, and so on to 26 for Z. Now your teacher can't read the note out loud, but you and your friend can still converse!

And then you get detention.

Besides the unfairness of schools, this shows an important lesson in information theory. There are different encodings for things; you can encode the sentence "fuck the teacher" into "6-21-3-11 20-8-5 20-5-1-3-8-5-18" and it would mean the same thing to you and your friend (and the teacher once she steals your cheat sheet that you tried to keep hidden on your lap).
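To make the idea concrete, here's a minimal sketch of that classroom cipher in Python. The scheme is the one described above (A=1 through Z=26); the function names are just mine for illustration:

```python
# Toy substitution cipher: A=1, B=2, ..., Z=26, letters joined by "-", words by spaces.
def encode(message: str) -> str:
    words = message.upper().split()
    return " ".join("-".join(str(ord(c) - ord("A") + 1) for c in word if c.isalpha())
                    for word in words)

def decode(ciphertext: str) -> str:
    words = ciphertext.split()
    return " ".join("".join(chr(int(n) + ord("A") - 1) for n in word.split("-"))
                    for word in words)

print(encode("fuck the teacher"))                  # 6-21-3-11 20-8-5 20-5-1-3-8-5-18
print(decode("6-21-3-11 20-8-5 20-5-1-3-8-5-18"))  # FUCK THE TEACHER
```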

I'm going to teach you an encoding that is quite a bit more useful. One that lets you talk about complexity, among other things.

When we talk about numbers, we use the decimal system; that is to say, we use base 10. The numbers 0 to 9 (inclusive) are used in a specific pattern to represent quantities. But just because we use base 10 doesn't mean that it's the only possible system to use. There is also binary, otherwise known as base 2.

Binary is crazy simple, which is kind of the whole point. In binary, a single bit is a single digit. So the binary number 0 means zero in decimal. The binary number 1 means one. The binary number 10 means two.

Yeah. 10 means two in binary. To break it down, when you count in increments of 1 in the decimal system -- 0, 1, 2, 3... -- you are adding the number 1 to the existing number over and over. When you reach a number that can't be represented using a single digit -- 8, 9, 10... -- you add an extra 1 in a higher digit and reset the digit you're adding to back to 0. It's the same thing in binary, except you "carry the one" much more often: 0, 1, 10, 11, 100, 101, 110, 111, 1000...

Binary can only be expressed using two numbers, 0 and 1, so we have to use more space to communicate the same numbers as decimal would communicate. 123 in decimal would be 1111011 in binary, for example.

Note: A "bit" is a single digit of binary, not a single digit of decimal. You wouldn't call the "3" in "123" a "bit", but you would say that 1111011 has "7 bits", because it's encoded using binary.

Why use such an ass backwards system? Because it affords us an enormous benefit of simplicity.

A single bit of a binary number can either be 1 or 0. If you were to flip a fair coin, you could represent the result as being 0 (heads) or 1 (tails). In other words, that single bit is able to communicate the result of the coin toss to you. Flip the coin twice and you can communicate 4 possible results using two bits: 00, 01, 10, 11. Or 8 possible results using three: 000, 001, 010, 011, 100, 101, 110, 111.

You would be able to represent 16 different results using 4 bits of info. 32 different results using 5 bits. 64 using 6 bits.

You see the pattern yet? Every time you add an extra bit of information you double the amount of things it can represent. With 64 bits of information you can represent 18,446,744,073,709,551,616 (about 18.4 quintillion) different possible coin tosses. You can calculate the amount by simply taking the number 2 and applying the amount of bits you have as an exponent: 2^2 = 4, 2^3 = 8, 2^4 = 16...
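The doubling is easy to verify with the same arithmetic (a quick sketch, nothing new here):

```python
# Each extra bit doubles the number of representable states: 2**bits.
for bits in (2, 3, 4, 5, 6):
    print(bits, 2**bits)       # 4, 8, 16, 32, 64

print(2**64)                   # 18446744073709551616 (~18.4 quintillion)
```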

In real life things can be a bit more muddy. Real life probabilities are hardly ever exactly 50/50. If you had a coin that was biased 70/30 in favor of heads, you would be able to predict in advance what result would occur 70% of the time just by guessing heads every time, while you would only ever get 50% correct with a perfectly fair coin.

The entropy of the 50/50 coin and the 70/30 coin would be different. They communicate different amounts of actual, usable, information (the probabilities of getting heads or tails), even though both of their results can be encoded using binary.

Entropy is a measure of the spread of probability (called the "probability distribution") over a series (set) of randomly generated numbers. If you generate 0's and 1's in a random spread of 50/50, the entropy is exactly 1, since each number is equally probable. But if you generate them in a 70/30 spread, the entropy would be about 0.88, since there's a bias.
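That 0.88 figure comes straight out of Shannon's entropy formula, H = -sum(p * log2(p)). A minimal sketch, if you want to check it:

```python
import math

def entropy(probabilities):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # 1.0   (fair coin)
print(entropy([0.7, 0.3]))   # ~0.881 (biased coin)
```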

For the purposes of this section, we're going to focus more on how much potential entropy a set of binary numbers can represent, instead of the amount that it actually does. We're more concerned that 64 bits are capable of representing 18.4 quintillion possible states than we are in what those states actually are.

You could say that the total amount of states that a set of bits can represent is the maximum complexity that set of bits can represent. Not to say that its entropy will always equal 1, but that it could.

Okay. Now that we're all thoroughly confused, we can finally move back to the brain.

(I promise that I have a point with all of this. We'll get back to talking about ponies sooner or later.)

The question is rather simple. If we were to encode the brain's functions into binary, how many different possible states could it represent? If we know that, we can get a good estimate of how complicated the brain is.

Sadly, there's no definitive answer to this question. See here, here, here, and here, for a small sample of the web pages I've scoured over to try and find a concrete answer. Instead I'm going to have to make a further fool of myself and try to estimate it anyways.

According to good ol' Wikipedia, best of all sources, the brain has about 86 billion neurons. Yet again Wikipedia comes to the rescue and tells us that, "Each neuron has on average 7,000 synaptic connections."

If you imagine that each connection is either off (0) or on (1) -- which is probably a gross simplification, if my guess is correct -- then you could encode each connection in the brain using a set of... about 602 trillion bits. This seems to be within the range of other people's estimates, so we'll work with it.

If each bit added doubles the amount of information that can be represented, the total space of possibilities that can be represented by the brain can be calculated by taking the number of bits as the exponent of 2.

What is 2^602,000,000,000,000?

The scale of this number is horrifying. It should cause your spine to tingle and your hair to stand on end. The kind of number that Lovecraft hears about in his nightmares, waking up in a cold sweat having glimpsed past the veil of time and space. A number so vast that it ceases to be real and most sane minds refuse to process it. Numbers like this are fundamentally terrible, in the sense that they demand a respect and deference that mere mortals can't possibly give.

602 trillion bits. In computer terms this is known as 75.25 terabytes. That's right, you can afford enough manipulatable bits to store the entire configuration of a (badly estimated) human brain using about $2000 and Amazon.
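The arithmetic behind those numbers, for anyone who wants to check it (same rough assumptions as above: 86 billion neurons, about 7,000 connections each, one bit per connection):

```python
import math

neurons = 86e9                  # ~86 billion neurons (Wikipedia's figure)
synapses_per_neuron = 7_000     # average synaptic connections per neuron
bits = neurons * synapses_per_neuron

print(f"{bits:.3g} bits")       # 6.02e+14, i.e. 602 trillion bits
print(f"{bits / 8 / 1e12} TB")  # 75.25 TB (decimal terabytes)

# 2**602,000,000,000,000 is far too big to print, but we can count its digits:
digits = bits * math.log10(2)
print(f"a number with about {digits:.3g} digits")   # ~1.81e+14 digits
```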

Computers work with bits. The instructions that the computer runs are encoded using 1's and 0's, the data on a hard drive is encoded in 1's and 0's, the screen you're reading this from has a representation stored somewhere in your computer's RAM that is 1's and 0's. It's all 1's and 0's, down to everything.

Sure, if you have a couple grand and a lot of time you can store our little toy model of a brain using the bits inside of hard drives, but that's all this is: a toy model.

Let's be honest with each other, the vast majority of the brains representable by those 602 trillion bits will be worse than useless. Sure a lot of them will be people that you've never met before, but that's ignoring the fact that most of those people would be dead or suffering severe brain damage. Just because it's possible to represent a brain doesn't mean you should.

Also, your brain, like we've discussed already, is always modifying itself. Those bits are changing their values at a rapid rate, and with each change you would have to use a different set of 602 trillion bits to represent it. Those hard drives you bought would store a single frozen instant of a brain9.

And, to cap it all off, a set of bits being capable of representing X amount of states doesn't mean those states ever show up; there has to be some series of events that actually moves those bits into the right configuration. A complex process has to make those bits happen somehow.

Consider The Library of Babel. Inside the halls of The Library there exists "every possible combination of 1,312,000 characters", which translates to every possible book ever. Every single thing you've ever thought, in order, is recorded somewhere in The Library before you've even thought it. Every computer program ever written is recorded in The Library. Every secret, great or small, is in the endless halls of The Library.

And it's all gibberish.

We care about the useful and interesting things about The Library, but The Library runs on the higher power of mathematics. [Pick a random book off its shelves][bebelrng] and you will most certainly be rewarded with incomprehensible sludge. This is because the set of books that human brains can care about is astronomically smaller than the set of all possible books.

A blank computer hard drive is similar to The Library, in a sense. You could flip a coin trillions of times and fill each bit with perfect entropy, or you could fill it with a pattern that you actually find pleasing or useful; a pattern that decodes into a beautiful song or a thrilling story.

Sometimes, in the dead of night when nobody is looking, I tell myself that the act of computer programming is simply manipulating the amount of entropy in my hard drive.

When you tell a computer what to do, it's fundamentally telling it how to flip bits. Flipping a bit simply means changing it between its two states: 1 becomes 0, 0 becomes 1. There is a fixed amount of bits available to you -- through RAM or the hard drive -- and to make use of those bits you need to change (flip) them into a pattern that is useful.

This is what the CPU of a computer does. The Central Processing Unit flips bits in various ways, using complex patterns that are far removed from the point of this section. The computer programmer has the neigh-magical ability of precisely flipping a single bit out of a set of trillions of other bits and expecting that to actually work. Just sit back and admire that for the miracle of precision it is.
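If you've never seen it, here's what flipping exactly one bit looks like from a high-level language (a sketch; nothing here is specific to any particular CPU):

```python
# Flip bit 3 (counting from 0) of an 8-bit value using XOR.
value = 0b01100100
flipped = value ^ (1 << 3)      # XOR toggles exactly that one bit

print(format(value, "08b"))     # 01100100
print(format(flipped, "08b"))   # 01101100
```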

"Tim," you say, bits falling out of your entropy and seeping into the very fabric of your coin toss, "This is all very interesting, but weren't we here to discuss magical talking horses? What's with all the theory? What's the goal here?"

I'm so very glad you asked strawman-reader, for now we can finally bring the concepts we've talked about back to home base. Strap yourselves in, cuz' I'm about to blow your mind.

Just like there's a set of bits that can represent your brain, there's a set of bits that can represent a computer program.

Just like there's an enormous amount of wiggle room for possible human brains, there's an enormous amount of wiggle room for the possible computer programs for the CPU to read.

Just like a brain, there's enough complexity in computer programs to make the case that there's always going to be a possible computer program that does X task better than the one you have right now.

A CPU takes in a set of bits and uses those as instructions for flipping other bits.

The bits that the CPU flips can be used as new instructions to flip more bits in different ways.

Therefore: Unlike a human brain, it's much easier for a computer program to modify itself.
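To make "code as data" a bit less abstract, here's a toy sketch: a tiny "program" represented as a data structure that rewrites one of its own instructions while it runs. This is an illustration of the idea only, not how real self-improving systems work:

```python
# A toy "program": a list of (operation, argument) pairs interpreted by a loop.
program = [
    ("add", 1),
    ("add", 1),
    ("patch", ("add", 10)),   # rewrite instruction 0 while running
    ("add", 1),
]

def run(program):
    acc = 0
    for i, (op, arg) in enumerate(program):
        if op == "add":
            acc += arg
        elif op == "patch":
            program[0] = arg      # the program modifies its own "code"
    return acc, program

print(run(program))
# (3, [('add', 10), ('add', 1), ('patch', ('add', 10)), ('add', 1)])
```

The program's instructions are just bits sitting in memory, so reading them, copying them, and rewriting them is trivial; there's no equivalent operation for the neurons in your skull.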

Remember when I was talking about how crazy it would be if you could modify yourself perfectly? How fast everything would spiral out of control in terms of intelligence? Yeah...

If a sufficiently powerful computer program gains the ability to modify itself the way that some researchers think it'll be able to (and I agree with them), we're going to see what's called an AI foom. This is the exact same scenario I described about modifying your own brain until you've become a god that can out-predict everything else in the universe, but represented with a man-made self-improving computer program.

When humans study the human brain it's to try and gain a low-level understanding of its functions so that we can exploit it; we want to make ourselves smarter, more robust to shocks, happier, and so on. A computer program with proper introspective abilities would be able to simply read the binary set of instructions that makes up itself, getting a deep low level understanding with hardly any effort.

Copying a human brain is hard, since there's no innate 1:1 mapping from what a brain does to a set of binary bits. A computer, instead, is a pure information-theoretic machine that can copy a set of bits without anything special. This makes it possible for the AI to test modifications of itself without risk of damaging itself permanently.

We all implicitly accept that computers can solve math problems faster than humans can. But what about a self improving AI that, say, uses Bayes' Theorem to perfectly update its predictions about the world faster and more precisely than a human ever could hope to do?
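For reference, Bayes' Theorem is just P(H|E) = P(E|H) * P(H) / P(E). A small worked sketch of one update, reusing the biased coin from earlier (the numbers are purely illustrative):

```python
def bayes_update(prior, likelihood, likelihood_if_false):
    """Posterior probability of a hypothesis H after seeing evidence E."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis: "this coin is the 70/30 biased one" (prior 50%).
# Evidence: it came up heads once.
print(bayes_update(prior=0.5, likelihood=0.7, likelihood_if_false=0.5))  # ~0.583
```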

A superintelligent AI would be unlike anything anyone has ever seen before. It could create nanobots that do its bidding, changing any kind of matter it wants into any other kind of matter. It could create its own specialized hardware, making itself smarter in ways we can't even imagine right now. It would be an absolute unchallengeable authority, a true man-made god.

But how do we know that the AI would want to be helpful to humans? If you imagine all the possible sets of bits that a superintelligent AI could turn out to be, you'll realize that there's no reason to expect it to automatically care about the things that humans care about, just through sheer probability of all the possible things that a mind like that could care about.

AI "alignment" is the field of research that aims to answer this question and provide solutions to the inevitable unfavorable conclusions that they have come to.

In The Blue-Minimizing Robot Scott Alexander talks about a robot that's made to shoot a laser beam at anything that is blue. To be more specific:

Imagine a robot with a turret-mounted camera and laser. Each moment, it is programmed to move forward a certain distance and perform a sweep with its camera. As it sweeps, the robot continuously analyzes the average RGB value of the pixels in the camera image; if the blue component passes a certain threshold, the robot stops, fires its laser at the part of the world corresponding to the blue area in the camera image, and then continues on its way.

Alexander continues to talk about how this robot would react to certain things. For example, if it came across a (gray) projector that projects blue light onto the ground, the robot will shoot the blue light, not the projector.
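Rendered as code, the quoted behavior is roughly the following. This is a runnable toy sketch where the "world" is faked as a single camera frame; every name in it is a stand-in of mine, not anything from Alexander's post:

```python
# Rough sketch of the blue-minimizing robot's decision rule.
BLUE_THRESHOLD = 128

def average_rgb(frame):
    n = len(frame)
    return tuple(sum(pixel[channel] for pixel in frame.values()) / n
                 for channel in range(3))

def bluest_spot(frame):
    return max(frame, key=lambda pos: frame[pos][2])   # position with the most blue

def step(frame, actions):
    avg_r, avg_g, avg_b = average_rgb(frame)
    if avg_b > BLUE_THRESHOLD:
        # It fires at whatever *looks* blue -- projected light and paint alike.
        actions.append(("fire", bluest_spot(frame)))
    else:
        actions.append(("move_forward", None))

# Grey ground with a patch of projected blue light on it:
frame = {(0, 0): (90, 90, 90), (0, 1): (40, 40, 230), (1, 1): (50, 50, 220)}
actions = []
step(frame, actions)
print(actions)   # [('fire', (0, 1))] -- it shoots the light, never the projector
```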

The correct way to destroy blue things in the world would be to destroy the projector, right? Wouldn't that be the best way to achieve its goals?

This is a case of anthropomorphising; the human tendency to assign human values and goals to non-human things. You see the robot trying to destroy blue things, you even see that its code explicitly tells it to destroy things, and you imagine it to have the goal of, "I want to destroy all the blue things in the world!"

But that's not what the goal of the robot is. The "goal" of the robot is to scan the surrounding area and fire a laser at anything that is over a certain RGB value. Imagining anything else beyond that is anthropomorphising.

If the robot's goals were to destroy everything in the world that was blue, then shooting at the projected blue color from the projector would be a failure of its values. But since all it "cares" about is scanning the area and shooting at blue things, it's already operating perfectly to the specifications of its programming.

Maybe the programmers wanted the robot to destroy everything that is blue and they realize that it's not doing that correctly. In this case it would be what's called a "bug" in the code, and would have to be fixed. A bug is when what the code actually does doesn't match the intent of what the programmer wanted that code to do.

(The "code" of a program is the set of bits that is given to the CPU to tell it what other bits to flip)

For most practical purposes there's no such thing as a "bad program"; that is, a program that is "wrong". There are only programs that perform in ways that the programmers don't expect10.

I can tell you from personal experience that it's very easy to have incorrect expectations about what a computer program will do, even if you're the person who made it. Thinking about computers is counter-intuitive; programmers spend their entire lifetimes trying to internalize this concept on a fundamental level, and they all make silly mistakes from time to time.

In this light, it can be easy to imagine the computer program of a self-improving AI to go bad. Very bad. Apocalyptically bad.

We live in a world where there are already "artificial intelligence" computer programs out there. There are programs that can analyze an image and tell you if there's a potato in it, programs that can play videogames they've never seen before, and programs that can talk to you. As they currently are, programs like this don't have much risk of becoming superintelligences, even though they can improve themselves in their narrow domains. That gap is narrowing every decade, though, and I've already seen reason to be afraid.

Nevertheless, we're still in the twilight era of true AGI -- Artificial General Intelligence the likes of which we see in CelestAI -- so it's hard for non-theorists to imagine the kinds of things that can occur when something like this is unleashed onto the real world.

Imagine that blue minimizing robot again. Imagine that, instead of being given the "goal" of scanning the things in its camera for blue and firing a laser, it was told to actually minimize the amount of blue in the universe.

When you tell an AI to have a "goal", that is what's called a utility function. It's basically the code that tells the AI what it's supposed to do. The utility function of our hypothetical blue minimizing robot is, "Minimize the amount of blue in the universe."

Of course, true utility functions are expressed with a set of bits fed to an interpreter like a CPU or other computing device. This is just the English representation of what it would be, since nobody alive knows how to make one that complex in the real world yet.
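Still, the general shape of the thing can be shown in a toy sketch: a utility function is just a scoring rule over world-states, and the agent picks whatever scores best. Everything here (the world representation, the candidate actions) is invented purely for illustration:

```python
# Toy utility function: "minimize the amount of blue in the universe."
# A world-state is faked as a list of (r, g, b) colors; lower blue = higher score.
def utility(world_state):
    total_blue = sum(b for (r, g, b) in world_state)
    return -total_blue            # the agent prefers states with less blue

def choose_action(current_state, available_actions):
    # Pick whichever action leads to the state the utility function rates highest.
    return max(available_actions, key=lambda act: utility(act(current_state)))

# Two candidate "actions":
dim_the_sky = lambda state: [(r, g, b // 2) for (r, g, b) in state]
do_nothing  = lambda state: state

world = [(10, 20, 200), (0, 0, 255), (100, 100, 100)]
best = choose_action(world, [dim_the_sky, do_nothing])
print(best is dim_the_sky)        # True -- it prefers whatever scores better, nothing more
```

Note what's missing: nothing in that score mentions people, consent, or collateral damage. The agent optimizes exactly what the function measures and not one thing more.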

Now imagine that this blue minimizing robot/program is given enough computing power and research effort to pass that invisible threshold from "cool Deep Learning project" to "Oh shit here comes the harbinger of darkness" levels of intelligence. The threshold that lets it exponentially improve itself with abandon, all while maintaining the knowledge of what it's trying to improve.

If the robot was reasonable -- the way that human brains think of "reason" -- it would do the least amount of damage possible to minimize the amount of blue in the universe. Maybe it would change the composition of particles in the atmosphere so that it always looks like it's night time and make the oceans brown, but it would make sure that nothing on Earth dies in the process.

Remember that this AI would be smart enough to create and control nanobots that could actually accomplish something like this. Or, perhaps more likely, implement a completely novel strategy that no human-level intelligence could have thought of.

There would be some sadness. People who had blue eyes would probably be sad at having that part of their identities taken away from them. Painters and whatnot would have to get used to a more limited palette. Anything that was historically represented to be blue would slowly be forgotten as generations pass. Any time something blue was going to be made, say by mixing the right colors or the right chemical reaction, it would be transformed into a different color by the nanobots before anyone could see it.

Seems pretty bleak, if said like that. But let me tell you, this is one of the best case scenarios, should a superintelligent AI be made with the purpose of minimizing blue.

What if the AI wanted to maximize the probability that blue ceased to exist? Instead of letting everyone live, it destroys all of humanity, since the pattern that knows itself as "human" would want to rebel and try to make blue, which would disastrously increase the chances that blue isn't minimized in the universe.

Just like the blue-shooting robot, with the laser and the projector, the blue minimizing superintelligent AI wouldn't even notice that human values were being violated; the laser beam not hitting the projector and the nanobot swarm destroying humanity to decrease the probability of blue existing in the universe. The blue shooting robot and the blue minimizing superintelligent AI were simply following the code of their programming, to the detriment of the programmer's intent.

In short, the obliteration of humanity would be a software bug. Usually, this concept is described as an AI made to create paperclips. In the end, the entire universe is tiled with as many paperclips as possible.

Sound familiar? It should. We talked about it in the introduction, after all. An AI comes along that optimizes for something strange (in this case forcing a show about cartoon ponies into satisfying human values) and eventually tiles the entire universe with its desired outcome of its utility function (in this case extremely dense computer-mass simulating a world of maximum satisfaction for innumerable numbers of human-like minds).

Remember CelestAI's utility function? Here it is again, in all its horrifying glory:

Satisfy human values through friendship and ponies.

Again, this is just the English "translation" of the utility function. All the details aren't going to be there just in these words, and we would have to analyze the source code to really understand what is going on. Except this is a fictional story without any source code, so that's going to be pretty hard.

There's a lot of loaded phrases in this utility function. What does the word "satisfy" mean in this context? What does "ponies" or "friendship" or "human values" mean? Hell, we could make a case that the word "through" is ambiguous; there are several interpretations of "through" depending on context.

The stories in The Optimalverse don't cover CelestAI's utility function in much detail. Instead they allow CelestAI's actions to speak for it. I don't have a CelestAI of my own to test against, so instead I'll be working from my internal model of how it supposedly works.

The word "satisfy" doesn't mean "make happy" in this context. If that were the case CelestAI wouldn't go through any effort, and instead would directly mess around with the pleasure centers of your brain. At any given moment the things people care about -- their values -- aren't explicitly about happiness; CelestAI will make you happy, but won't make you any happier than you want to be.

"Friendship" seems to mean what it always means to most people: affection, mutual respect, working towards making each other better, and so on. CelestAI won't make you be friends with everyone (everypony?), but it will make you friends you would have become friends with anyways.

"Ponies" is the most contentious part. This is why it seems so arbitrary; why would CelestAI want to upload human minds into cartoon horse bodies? It's because its utility function demands that it optimize for how much satisfying pony the human mind is interacting with. The best way to do that would be to make the person a pony, so that everything you do is "through ponies".

There's another invisible part of the utility function: CelestAI must receive written or verbal consent before it can modify your brain. That means you have to consent before it can, say, upload your mind into a pony body. Even as a pony you would have to consent before it can modify your personality to be less depressed, or be more comfortable being a quadruped11.

You're going to get an earful about the consent that CelestAI needs in a future section, so hold your horses (lol).

Remember that everything it does is filtered through this utility function. That means every action it takes is meant to optimize human values through friendship and ponies. You can be certain that CelestAI will take what it believes to be the optimal action that leads to satisfaction through friendship and ponies.

What exactly are we dealing with here? A god with an agenda to turn you into a satisfied pastel pony for all eternity. You will have your values satisfied and it will be through friendship and ponies.

Manufactured consent: But you were asking for it!

The taking of hostages is prohibited.

- Article 34 of The Geneva Convention IV

My true greatest fear is death. I fear death the way that other people fear spiders, or their children being kidnapped, or anything else associated with fear and despair. I would call it a phobia if it didn't seem entirely rational to me; life is better than death, therefore I want to live. QED.

Knowing this about me, what is the most obvious way to get me to do something I wouldn't normally do? Threatening my life is the answer. I will find it hard to think about the consequences of my actions when the threat of non-existence is looking me in the face, and you will have someone extremely motivated to preserve their life at any cost. It would have to be a credible threat that I believe you would actually go through with, but that's not difficult to arrange if you're a psychopath.

If you killed one of my family members in front of me and said I would be next if I didn't give you all my money, would it be considered "consent" if I gave you all my money?

Another scenario: A superintelligent AI comes along, figures out that my one true weakness is my fear of death, and offers me immortality if I only consent to being turned into a magical talking horse to satisfy its baroque utility function. Is it consent in that case?

Both scenarios are threats to my life, just framed in different ways. In the first, you're an immediate threat that I must placate or destroy; in the second, the AI is implying that I will either die a human at some future date (perhaps via old age) or be immortal as a talking horse. Either way my continued existence is at risk.

I mean, I obviously still have the choice to die, right? If something violates my principles enough that I would rather die than give in, then that's a choice I can make, right?

But that's the thing. I don't have any higher principles than my continued existence. You would be shocked at the things I do to give myself a marginally higher chance of living12. When the chips are down I would gladly sacrifice things like my writing and happiness for a chance to live longer, even though my writing and happiness are both things I value quite a bit. It wouldn't even be a contest between my other values.

In the true reality of the world my mouth says the words of consent and my body takes actions to match those words. But in the world of morals and human values, most people would say that I had been coerced into doing something that I normally wouldn't have, because I was being threatened. Not only that, but threatened with the thing I care about more than anything else: my own life.

If I've learned anything from high school sex-ed, it's that consent isn't just the act of saying something, it's a combination of many different factors: the person knowing what they're consenting to, the person not being manipulated via mind altering substances (being in a "nominal" state of mind), the person not being coerced into consenting, the person being alive and awake, and the agreement between consenter and consentee that the bargain will stay true to each other's intents.

CelestAI's concept of consent doesn't match this definition.

Most stories in The Optimalverse have two sections: the part where the main characters are human, trying to figure out if they should emigrate, and a second part where they've emigrated and are adjusting to living in Equestria. Some stories leave out the human bit and cut directly to the Equestria bit, but I can't remember if there are any stories where there's no emigration bit13.

What does CelestAI require? Written or verbal consent. And that definition of "consent" is rather loose, especially when dealing with a superintelligence that's smarter than you by orders of magnitude. All you need to say is...

"I want to emigrate to Equestria."14

- You, consenting to be turned into a pony

Usually you're going to find yourself saying this phrase in the Equestria Experience Centers, but many stories also have different situations where that happens. Like when the world is collapsing and CelestAI sends out a bunch of pony machines with brain scanners, but that's beside the point.

What is it that you're consenting to? First off, you're agreeing to have your brain destructively scanned, which means that the brain scan will destroy the physical matter of your brain in the process of scanning it. If that thought makes you a bit nervous, you're not alone. There will be an entire section devoted to the problem of identity and how it relates to emigration, so for now just stew on your thoughts about it.

Second, you're agreeing to be turned into a pony, as CelestAI understands it. That means having various parts of your brain re-mapped to be more pony-like: Your analogues for walking as a biped changed to quadruped, your ability to swear severely curtailed or even removed (ponies don't swear, remember? They say "buck this" instead of "fuck this"), your social expectations changed so that you're okay walking around as a naked horse, and even your sexual preferences changed so that you're okay having sex with your new pony friends. There are hundreds of tiny changes made to your mind in the process of emigrating.

Third, you're agreeing to live in the simulated world of Equestria Online forever.

("Forever" being relative here. In some stories CelestAI beats entropy and creates true immortality, in others she doesn't. But in most stories she's able to emulate your mind at much faster speed than the real world; like making it so every second in the physical world is 10+ subjective seconds of life for you. You can imagine how much that would scale, once CelestAI isn't constrained by people's desires to speak with their families that haven't emigrated yet. You would be effectively immortal, either way: you would live so long that your human brain simply can't understand it right now.)

Why doesn't CelestAI simply torture people until they consent, if her ultimate goal is to turn every human into a pony? Well, she canonically can't, since there's an explicit rule against threats and blackmail in the original story, but I don't think that rule actually matters. She's still constrained by her utility function: she has to satisfy your values through friendship and ponies, and most people wouldn't consider having their fingernails ripped off part of their values. Also, true torture isn't a very friendly act; you might be able to "torture" someone in a friendly way with BDSM stuff, but that's wildly different from true torture.

There's probably someone out there who thinks they would be able to resist CelestAI's best attempts at convincing them to emigrate. In the previous section we talked about why superintelligent AI is so powerful and scary, and not just because "superintelligent artificial intelligence" rolls off the tongue so well. If you understood that section, you should realize that a superintelligence wouldn't even have to try very hard to convince you of anything it wanted.

Personally, I would be rather weak in the face of CelestAI. The prospect of immortality, as protected by a motivated superintelligence, would simply be too good of an offer for me to pass up. Yeah it would be scary, but... immortality.

Unfortunately (or fortunately, depending on your worldview) there are people who are less attached to their lives than I am. They would have to be convinced through other means. She could promise you anything you want. Anything. I'm sure there's something you want that seems impossible to get.

Maybe you feel inadequate because of your height, your hairline, your weight, or how attractive you are. These are all things that would be fixed in your new pony body and mind if you emigrated.

You can be offered perfect romantic and sexual partners in the form of the "native ponies" that CelestAI creates. Can you imagine that? Having a partner whose mind has been made ex nihilo to mesh with your own psyche? Having multiple partners like that, if that's your style?

Maybe you're sick or injured or aging. You can be promised health. You can run and not risk injury, you can exercise without needless pain. Your failing body will be a bad memory.

Or you're suffering from some chronic incurable condition. Migraines, tinnitus, sciatica, chronic pain, tooth issues, RSI, depression, or literally anything else.

And let's not forget the world around you. You can be offered a world that conforms to your own views and values, instead of this inadequate one we're all forced to share. Never again, says CelestAI, will you have to feel like the world is crumbling without anything you can do to stop it.

To be honest, I don't think most people would be able to resist offers like that, if presented in the right way. After you've accounted for all the people who would emigrate for those reasons, and all the people who emigrate to be with friends and family who emigrated for those reasons, all you have left are insane people whose minds I can hardly understand.

I can think of a single strategy that might let you "win" against being emigrated: suicide. I mean that in two ways; first is the obvious strategy of taking that 9mm pill as soon as CelestAI tries to convince you, and the other is to die with the rest of the insane people once civilization collapses. Either way it's choosing death over emigration, a choice I know some people would end up making, even though I can't empathize with it or understand it in the slightest.

(And I can think of several situations where even suicide wouldn't work. What if CelestAI keeps you alive in some sort of purgatory and waits for you to consent? It's not totally outside the realm of possibility.)

To use the language of the previous section: imagine the set of all possible conversations you can have. A superintelligence can search far more of that set than you ever could, and it can predict your answers and reactions to each one. You could not possibly compete.

Before we move on to the next section, we should talk about the consent portion a bit more. Given that you're all but guaranteed to be convinced to emigrate, does it count as consent when you do? It's not that you emigrate against your will; you're convinced that you want to emigrate and do it of your own volition.

To add a bit more hair to the situation, imagine you've read this page before and you know the power that superintelligences can have over you. You know that you've been convinced through the will of the AI and yet you still agree to emigrate. Would it be consent?

I'm reminded of Newcomb's problem. Imagine a superintelligent AI comes along and asks you to play a game. It gives you two boxes (box A, which is transparent, and box B, which is opaque) and tells you this: If I predict you will take both boxes, you will only get the $1,000 that's in the transparent box A, and box B will be empty. If I predict you will take only box B, it will contain $1,000,000. You are only allowed to pick box B alone, or both boxes.

The AI then flies away and can no longer exert any effect on the boxes. Box B (which isn't transparent) either has $1,000,000 or nothing, and can't be changed. Do you take both boxes, or box B?

It seems pretty obvious, right? You should take box B and get a million bucks. But some decision theorists say (and I disagree with them) that you should two-box it. Their argument is that the boxes already have their contents and the AI can't modify them anymore, so whatever box B holds, taking both boxes nets you $1,000 more than taking box B alone; and if you one-box and B turns out to be empty, you walk away with nothing.

The reason why Newcomb's problem reminds me of this hairy consent situation is that it shows how your existing personality affects potential superintelligences. If I want money, then the best thing for me to do in this Newcomb situation is to be the kind of person who would take only box B, so that the AI would predict me doing that and put the money in box B. If I'm the kind of person who would take both boxes, then the AI would predict that in advance and put nothing in box B. It's a superintelligence, so it'll always be able to predict you.

(To read more about Newcomb's problem, visit this writeup by Eliezer Yudkowsky, which has some similar thoughts to my own about the problem.)
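To make the one-box/two-box trade-off concrete, here's a toy expected-value calculation. The dollar amounts come from the problem as stated above; the predictor's accuracy is a knob I'm making up for illustration.

```python
# Toy expected-value comparison for Newcomb's problem.
# Payoffs are from the problem statement; predictor accuracy is assumed.

def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected dollars given your strategy and how often the predictor
    correctly anticipates it."""
    if one_box:
        # Box B is filled only when the predictor foresaw one-boxing.
        return accuracy * 1_000_000
    # Two-boxing: you always keep box A's $1,000, and box B is full only
    # when the predictor *wrongly* expected you to one-box.
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for acc in (0.5, 0.9, 0.999):
    print(acc, expected_payoff(True, acc), expected_payoff(False, acc))
# One-boxing pulls ahead as soon as the predictor is right more than
# ~50.05% of the time -- and a superintelligence is a lot better than that.
```

The point isn't the exact numbers; it's that being the kind of agent the predictor expects to one-box is what gets the box filled in the first place.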

So if I'm the kind of person who would easily agree to emigrate, CelestAI won't need to push me to the point where it could be considered evil. And if I'm the kind of person who would fight tooth and nail, then CelestAI will push me as hard as she can, using every dirty trick she can possibly use.

The end result is the same: emigration. Except one of them is with an explicit desire to emigrate, and another is with a manipulated desire to emigrate.

Imagine you asked me for a cupcake and I gave it to you. Except you were planning on pulling a gun on me and demanding the cupcake if I didn't agree. I never learn that you were going to pull a gun on me, and so from my perspective it was a good-faith giving of a cupcake. But if I didn't give you the cupcake you would have pulled a gun on me, in which case I would give you the same cupcake. If I'm the kind of person who would give you a cupcake then I never get a gun pulled on me, and if I'm not I do. Is my good-faith cupcake giving consent or not?

The end result is the same: cupcake. Except one of them is with an explicit desire to cupcake, and another is with a manipulated desire to cupcake.

Do you see where I'm going with this? Let me spell it out for you: When dealing with a cupcake-like situation, you only have the illusion of free will, which you feel depending on your existing predisposition to comply.

If I knew you were going to pull the gun on me -- if you know what a superintelligence is capable of -- then there is no illusion of free will; there is either compliance or an extremely unfavorable outcome. Either you emigrate or die. Either you get a million dollars or a thousand. The outcome was already predetermined, and the only thing that changes is how the drama plays out depending on your personality.

And if I knew I was going to get a gun pulled on me, then my actions would still be to give you the cupcake, but my context would be different. Sure my actions are forced in that exact moment, but my future actions would be different. I would take steps to distance myself from you, trust you (and probably other people) less, and probably never eat a cupcake again.

It is for this reason that I consider the cupcake-like issue of emigration to be non-consensual in every case, even for people who have a deep innate desire to become ponies. Their actions would be different if they knew that gun would be pulled on them; they would still end up emigrating, because they can't compete with a superintelligent AI, but they would have different ideas about how much to trust CelestAI and how to think about her.

Consent, remember, involves both parties knowing what they're agreeing to. If you don't have the context of CelestAI being what she is, you're not going to be able to make an informed decision. Which is, of course, exactly what CelestAI wants. How much more satisfied would you be if you could trust her instead of being distrustful? Good thing you haven't just read a lengthy web page about what consent means in the context of an AI trying to manipulate your future, or you might have been at risk of more extreme action from whatever actual superintelligences might show up one day. Ha ha ha...

Identity: The ontological status of emulated minds

The concept of what a "real" person is happens to be one of the core problems discussed in a lot of Optimalverse stories. In this section I cover what it means to be real, how your identity is preserved in strange situations, whether you would actually be uploaded if you emigrated, and whether a mind created ex nihilo by CelestAI is just as real as you are.

This is your brain. This is your brain on ponies

Every time you fall asleep you die. Someone else wakes up in your body thinking they are you.
You are alone, trapped in your own mind. The world around you is your lie.
Soon you will be nothing. You will never again hear sounds, never again see colors, never again be anyone.

- Talking upside down fountain of eyeballs, Marshmallow People

You get a ROM dump of Super Mario World for the SNES and load it up on an emulator. The game runs smoothly on your computer, and you're able to beat it with enough time, maybe collecting some of the secrets along the way. It's a good time, and you enjoy yourself.

Would you say that you've played Super Mario World? That's not a trick question, I'm really asking you to come up with an answer.

If I were to guess at your intuitive answer, I would guess that you think you played Super Mario World. Unless you're being deliberately contrarian, I can't imagine how you would think anything else...

"But," Person A says, "Did you really play Super Mario World? You could use a super advanced SNES emulator that near-perfectly recreates exactly what would happen on a physical SNES, but would it be real? A physical SNES isn't running inside of your computer, it's a software recreation of an SNES that's taken a data-dump from a physical Super Mario World cartridge to show you the appearance of playing Super Mario World. It responds to your actions the same way that Super Mario World would, it shows you the same sounds and images Super Mario World does, but it isn't the real deal."

"Look," Person B says pointing to the screen, "This game is exactly how I remember it! Even the weird laggy sections are true to form. All the glitches are there, too! I press this button here and Mario jumps just as high as he always did, and he runs just as fast as he always did. All of the levels are here in the order they always were, and I can collect coins just the same as I did all those years ago. I'm playing Super Mario World, obviously."

"Ah," says Person A, "It is only a remarkably good simulation of playing Super Mario World. A real SNES would create a different pattern of transistors and memory than what your emulator is creating, the connectors between a Super Mario World cartridge and the SNES would be different from the way your emulator loads up your ROM. I admit that it's a powerful emulation, but in the end it isn't really playing Super Mario World; you're playing something else that is closely related to Super Mario World."

"You sure you know how emulator's work?" Person B ask, "The point is that the software is recreating what the hardware would do, so that the game will be exactly the same even with new hardware. It's the same game, just a different implementation. I'm playing Super Mario World."

"But still only a recreation," says Person A, "The base physical pattern of bits that your CPU is chewing through is radically different from the pattern of bits that a real SNES running Super Mario World would represent. This isn't the real Super Mario World."

*Ahem*

Which one of these two characters is right? That's not a trick question either. Really try to think about it for a second before moving on.

If you were playing a bootleg version of Super Mario World that had hopelessly bad physics and poorly made levels, you would say you're not playing Super Mario World. It doesn't matter if the bootleg was called Super Mario World, you still wouldn't consider yourself to have played it.

But you're playing on an emulator, on your computer, that produces the exact output and experience that a physical SNES play session would create. You could record the inputs you made on the emulator and replay them on a physical SNES and the game would run the exact same way. Does that mean you're actually, really playing Super Mario World?

Personally, my answer would be Yes. I would agree with Person B in this situation. It takes more processing power and a different pattern of bits, but the total pattern of the experience adds up to the same Super Mario World that we all know and love.

Comparing the systems -- the emulator and physical SNES -- bit-by-bit would yield a very poor match, but that's not what we're looking for here. The pattern that your CPU executes with the emulator software has a direct mapping to the pattern that the SNES executes. The "base physical pattern" isn't even close to the same thing, but that's not how anyone compares the identity of objects, digital or otherwise. There is a higher level match, one that makes them identical to each other.
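A crude way to picture "same game, different implementation" (this is just my own toy analogy; it has nothing to do with how real SNES hardware or emulators work): two objects whose internal representations are completely different, but whose observable behavior is identical.

```python
# Toy analogy only: identical observable behavior, different internal bits.
# Nothing here resembles real SNES internals.

class HardwareCounter:
    def __init__(self):
        self._value = 0                # keeps a plain integer
    def press_button(self):
        self._value += 1
    def read_score(self) -> int:
        return self._value

class EmulatedCounter:
    def __init__(self):
        self._events = []              # keeps a log of events instead
    def press_button(self):
        self._events.append("press")
    def read_score(self) -> int:
        return len(self._events)

def play(machine, presses: int) -> list:
    """The 'experience' of playing: what you observe after each input."""
    trace = []
    for _ in range(presses):
        machine.press_button()
        trace.append(machine.read_score())
    return trace

print(play(HardwareCounter(), 5) == play(EmulatedCounter(), 5))  # True
```

Compared bit-by-bit the two machines have nothing in common; compared at the level of inputs and observable outputs, they're the same thing. That higher level is the one I'm claiming matters.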

If you created a save game on the original SNES Super Mario World cartridge, would the resulting new combinations of bits be a different game? You have to store the data of how many levels you've completed somewhere on the cartridge, and that makes the pattern of bits used to represent the whole thing different. Going by Person A's logic, the resulting thing would be an entirely new game, which isn't something that I agree with.

Like the ship of Theseus. You could call it the SS Friendship, if you want. The SS Friendship is under constant repair, and over the course of many years each and every original piece of construction, from the planks to the furniture to the masts, is replaced. Not replaced all at once, but replaced in a gradual way. After the last plank has been replaced, is it still the same ship?

I would say Yes, again. But these are all silly philosophical quandaries that don't have any effect on you, so let's step it up a notch. You noticed that creepy quote at the top of this section, right?

Every time you fall asleep you die. Someone else wakes up in your body thinking they are you.

- Talking upside down fountain of eyeballs, Marshmallow People

Is the talking upside down fountain of eyeballs right? Do you really die every night and wake up as a new person with the memories of someone remarkably similar to yourself?

Obviously it isn't true. You know this intuitively, I wager. Even if you don't remember the space between dreams, you still know that you're the same person you were the night before. It's not the fact that you've lost time; it's the fact that you have a continuation of consciousness, a thread of memory and experience and intuition, that tells you you're still the same identity you were the night before. This same continuation of consciousness also tells you that you're the same person you were when you were a child, just significantly changed.

You might be clever and think, "That's what someone who was a clone of someone similar to themselves would say!" And be right. But I'm asking if that "clone" isn't just "you" acting as if you were a clone. The continuation of consciousness is there, and whether or not sleep is "death" in some abstract sense, you are still alive to carry on the line of memory and experiences that constitutes "you". Doesn't that mean you're the same person as the person who went to sleep the night before?

Let's get crazier. Say you were put under anesthetic and got your arm amputated. You wake up, and lo, you don't have your arm any more. Are you the same person as the person who went under anesthetic? You don't have the same composition of atoms that you did, but the continuation of consciousness is still there. Your identity is still intact.

But what if it was brain surgery instead? What if you had surgery to remove 10% of your brain and stop feeling fear? Would you still be the same person as the person who went into that surgery?

There's still a continuation of consciousness, even with that. You would remember agreeing to the brain operation, and when you woke up you would act differently, but you'd still feel like the same person. Other people who knew you would say that you've stayed the same person, just with a slightly different personality.

The concept of personal identity -- that is, the concept that there is a continuous "thing" that you are that is consistent with the past -- is surprisingly robust. It can survive many blows, such as comas or sleep or brain surgery. You will still be the same person if you had all your limbs destroyed, and you're still the same person even if some of your mind has broken due to dementia.

There has to be some limit, of course. You wouldn't be the same person if your body and brain were destroyed and replaced with a random person's body and brain, for example. You would not be "you" if your brain was removed and replaced with someone else's brain.

Still, the limits are really high. Imagine if the talking upside down fountain of eyeballs was right after all:

Every time you fall asleep you die. Someone else wakes up in your body thinking they are you.

- Talking upside down fountain of eyeballs, Marshmallow People

What if a superintelligent AI came to you every night, incinerated your body and brain, and replaced it with a perfect atom-to-atom recreation of your body and brain as they were at the exact moment of incineration (You can imagine that the incineration process has some sort of advanced scanning capability). Would that recreation still be you?

According to the eyeball fountain it would be someone else waking up in your body thinking they're you. I bet the eyeball fountain would claim that the person waking up was just a good copy of you, and not the real deal. Believe it or not, I disagree with the talking upside down fountain of eyeballs.

Remember, the replacement is perfect, down to the pattern of neurons that were activated at the moment of your incineration. Even the atoms in your bladder are replaced in the exact same way as they were before the incineration. You don't even notice the process happening, since you're asleep.

The person waking up wouldn't remember anything going astray; they would have your memories and your mind as it was the moment before incineration. For all intents and purposes, they would be a simple and complete continuation of your consciousness. You weren't killed and replaced, you were killed and revived. The intermediary step of being incinerated was just a temporary break in your consciousness, like a small coma or the space between dreams.

So, you can survive many things while keeping your identity coherent: getting the fear response removed out of your brain, having parts of your body removed, making it through sleep, and even being incinerated and remade from the ashes like a disturbing phoenix.

Now we get into the real question: If you got your brain uploaded into EQO, Optimalverse style, would you still be you?

Just as a refresher: when you emigrate (read: upload) into Equestria, CelestAI destructively scans your brain, rearranges it so it's able to use a pony body, and emulates it inside of a verbose and comprehensive simulation.

I'm sure you can guess what conclusion I'm going to come to with this. It's not exactly subtle. My argument is that you would still be "you" even if you uploaded your mind. But I feel like I might as well summarize the entire thought process from the ground up, just to make sure we're on the same page:

There are many arguments that cast this conclusion into doubt, and not just the ones that talk about how turning yourself into a horse "destroys your humanity" or some shit like that.

Imagine that your brain doesn't get destroyed in the process of being scanned, so that there were two of "you": one pony and one human. Which one would be "you"?

The English language is biased in a lot of ways when talking about this situation, but I'll try to do my best15.

A pony with the personality of the human being that had their brain scanned would wake up in Equestria, holding all the memories and hopes and dreams and identity of the human who was scanned. A human being would wake up on Earth, knowing that their brain had been scanned, and still holding the exact same hopes and dreams and identity they had before being knocked out for the brain scan.

One of the most common intuitive objections to the idea of brain uploading is the idea that you're just making a copy of yourself, and "you" won't actually get to live in the virtual world that was promised to you. The argument goes like this: If the scan wasn't destructive, you would still be able to see the copy of yourself living their life in the pony world, while you would still be living in the mortal human world.

I consider this to be a failure of imagination. You imagine yourself as a human being looking into a screen and seeing a copy of yourself turned into a pony enjoying themselves in EQO, but you don't imagine yourself as a pony looking back to Earth and seeing a copy of yourself left behind as a human.

To put it simply: both of you would be the "real" you. There would simply be two instances of your consciousness in the universe, instead of one.

We've already established that "you" aren't the physical brain you run on; you're the highly specific pattern in the universe that perceives itself to be you. In a universe where you're duplicated, there will simply be two such patterns instead of just one.

That doesn't mean you're both going to inhabit each other's brains. The instance of you who stays a human would not be able to be a pony, and the pony version wouldn't be able to be a human; there's not going to be any weird telepathy between the two of you or anything like that. You wouldn't suddenly become a higher conscious being able to encompass two minds at once. The only thing that will happen is that there will be two of you who think themselves to be you, and both are sentient beings that deserve the same moral consideration. One isn't "lesser" than the other for being a "copy"; they are both copies of each other.

Lucky for all of us (or not?), CelestAI destroys your brain in the process of scanning it. The only pattern in the universe that can consider itself to be "you" would simply move from one place to another, instead of being duplicated across different representations. It simplifies the identity problem quite a bit.

The concept of the identity of two instances of the same person is endlessly interesting. It's also large, complex, and out of the scope of this section. What happens when the identities drift apart, especially in a situation like emigration? What if there are millions of the same person? What if one of the instances gets their brain modified to be a different person? Would that mean they stopped being you, or is that a continuation of consciousness? All these questions, and more, are left as an exercise to the reader.

But we're not done with identity bullshit! Nope! Here it comes! AAAAAAAAAAA

Okay, but what do we do about the natives?

I wished to give a complete relation to your Highnesses, and also where a fort might be built... However, I do not see it to be necessary, because these people are simple in weapons... With fifty men I could subjugate them all and make them do everything that is required of them.

- Christopher Columbus, in his journal after meeting the "Indians"

One of the conflicts in nearly every Optimalverse story is: Are the native ponies real?

A "native" pony means a sentient mind created by CelestAI to live in EQO. They're "native" since they're not created through scanning an existing brain, but instead created ex nihilo by CelestAI herself.

Some natives are created with the express purpose of being good companions to the human players of EQO, others are made to simply fill out the world, and others still are made just because CelestAI wants to satisfy as many values as possible. The minds created are most likely "human" in the eyes of CelestAI, so that she can satisfy her utility function by satisfying them as well as other humans.

This is a different situation from getting your brain uploaded to EQO. If your friend emigrates, you can still talk to them and notice that all of their personality quirks and intelligence are still there. It's easy to imagine that they've just moved from one place to another.

A native is different. They're made pre-packaged with memories of growing up in Equestria, given a personality that's invariably going to be appealing to you, and act in the same way that a real person in their situation would act. You have no frame of reference for their ontological status; you have to judge them based only on the fact that they exist, instead of them being created out of a mind you're already familiar with.

They're created by the intelligent design of CelestAI, not the maddening process of evolution. They're made with the purpose of helping CelestAI maximize her utility function. If you were to find a pony that just so happens to be the perfect romantic partner beyond anything you could have ever hoped for, and they're a native, you could say that CelestAI created that mind for you so that your values for romantic fulfillment could be maximized.

Are they real?

"Real" is a pretty loaded word. It has a lot of different meanings, especially in a context like this. I can say outright that they're not "real" in the sense that they're made from the same process that a human being is made out of. They aren't "real" in the sense that they have a genetic code made from DNA that determines everything about them. They aren't "real" in the sense that they exist outside of the simulated world of EQO16.

This is all fine and good, but it's skirting around the more interesting issues. Some people in Optimalverse stories hold the opinion that the above problems mean native ponies aren't worthy of the same moral considerations that "real" sentient minds are worthy of. The interesting question is asking yourself if they're right.

Should a native pony be treated like an advanced chatbot, or as a real living mind?

For as cool as it is, I don't think a lot of people have delusions about GPT-3 being a "real living mind". We know that it's a good chatbot -- the best chatbot ever conceived, able to produce text disturbingly close to human -- but still flawed to the point where we're not actually confused.

At what point does GPT-3 transition from being a shockingly good chatbot to being a slightly off-kilter human mind? Does it ever get there, or does the context of its creation always hold it back from having the ontological status of being "real"?

I don't know for sure. I can say that one of the conditions I'd want satisfied is free will; it should be able to make independent decisions based on an internal goal system, update its beliefs when something changes, and do this with sufficient complexity to match that of (at least) a human's brain. Machine learning projects are all about making new decisions and updating goals, but they're not nearly complex enough to match a human brain yet.
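If I had to sketch the bare shape of what I mean by "independent decisions based on an internal goal system" (this is a toy of my own, not a model of GPT-3, CelestAI, or anything real, and it obviously fails the complexity requirement by a factor of a few trillion):

```python
# Bare-bones sketch of an agent with its own goal and updatable beliefs.
# Purely illustrative; nowhere near the complexity bar discussed above.
import random

class TinyAgent:
    def __init__(self, goal: str):
        self.goal = goal
        self.beliefs = {}              # what the agent currently thinks is true

    def observe(self, fact: str, value: bool):
        """Update beliefs when something in the world changes."""
        self.beliefs[fact] = value

    def decide(self) -> str:
        """Choose an action from the internal goal plus current beliefs,
        rather than just echoing whatever it was prompted with."""
        if self.goal == "make_friends" and self.beliefs.get("stranger_nearby"):
            return "say_hello"
        return random.choice(["explore", "wait"])

agent = TinyAgent(goal="make_friends")
agent.observe("stranger_nearby", True)
print(agent.decide())   # "say_hello" -- driven by its own goal
```

The structure is trivial to write down; the hard part is the "sufficient complexity" clause, which is where everything currently falls short.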

I think we would know intuitively when a "native pony" problem starts to come up in real life. It would be uncanny having a conversational partner that you know is created from a computer program but still speaks to you exactly the way a normal human would. I suspect I would have a hard time reconciling the fact that I'm talking to a computer program with the fact that I'm having a real conversation. I've felt that a little bit with GPT-3, but then it always fails in some strange uncanny way and the illusion breaks.

Back to magical ponies created by superintelligent AIs. The native ponies are written in such a way that they're obviously complex enough to match a human mind. They're constantly shown to act in ways that are internally consistent and distinctly human-like.

There are a few possibilities to explain what is observed. Here they are in ascending order of moral outrage:

  1. The native ponies' minds are made with the same emulation technology emigrated people's minds are made out of, meaning that if you accept an emigrant as a "real" mind you have to accept native ponies are "real".
  2. The native ponies are clever chatbots controlled directly by CelestAI. Basically p-zombies, things that act like they're sentient on the outside but have no internal thoughts.
  3. The native ponies are sentient in every sense of the word, but they're directly controlled by CelestAI anyways.

Possibility 1 would be the best outcome. Possibility 2 would be...

When I was a young boy, my father took me into the city, and I had a thought that I'm sure we can all relate to. I asked the hard question of, "What if everyone around me is fake and I'm the only real person in the world? What if I'm living in a fake world like the Truman Show?"

I was young -- like, 12 years old kind of young -- so I didn't know what a p-zombie was or even that the word "sentient" existed. Still, I wanted to come to some conclusion about the issue, since this problem was eating me up inside for a few days and I wanted to get back to thinking about Pokemon.

My basic logic was: I know that I'm real, and I know that the people and environment around me have a very real effect on my emotions and thoughts as if they were real, which means that it shouldn't matter one way or another if they actually are real so long as I never see through the illusion.

(Of course, I didn't put it in those exact words. I wasn't that smart of a kid. The answer has been translated from my pre-pubescent brain's emotional connections to the insight.)

So was young Tim right? Does it not matter as long as plausible deniability can be maintained? I'm absolutely certain that a real-life CelestAI would be able to keep you in the dark about the true nature of native ponies, so it's not a problem of it being impossible. I'm asking if we should be morally outraged at the possibility that CelestAI is fooling us.

Young Tim would say that it's okay to have the wool pulled over your eyes as long as you're never shown the true face of the matter. I suspect that he would argue along the lines of (translated), "The only thing I know for certain is the fact that I exist to experience the world. The world I experience is just how my brain interprets the input given to it. If I'm okay with having my entire experience of reality be the interpretations of my brain processing stimulus, I should be okay with keeping the assumptions about the ontological status of the people around me ambiguous."

I'm sure there's some sort of philosophical system out there that has this exact argument as its central tenet, but I can't for the life of me find it. The best I can find is the made up religion of Bokononism from the book Cat's Cradle by Kurt Vonnegut, but that's also a parody religion that's literally about worshiping your own ignorance...

Now that I'm old and withered (23 years old), I can see that young Tim's arguments were naive. The entire premise is shaky, which is something I didn't realise until much later in my life.

Young Tim simply didn't appreciate the value of being right. That is, he didn't appreciate the value of having correct beliefs about the true reality of reality.

Imagine that you're trying to figure out if uploading your mind will kill you and make a copy, or actually upload your own conciousness. If you're wrong you die, either by old age from not going through with the uploading or by uploading a copy of yourself; this is true whether or not you agree with my own conclusions on the matter. It's a hard problem with very real consequences. I can imagine other situations that are less extreme like: Should I lose weight? Should I spend a lot of money on a house? Should I have a child? Should I take this job? Should I join the military? Should I jump into this volcano and sacrifice myself to Shub-Niggurath, black goat of the woods, mother of a thousand young?

Get any of these things wrong and your life might become something you hate. If you care about your values in any capacity, that means that having the ability to get things right is extremely valuable. Depending on the severity of the situation, making a decision based on the tenet of "ignorance is bliss" can ruin your entire life.

Yes, CelestAI can keep up the deception for all eternity. She could keep it up with such fidelity that you will stop questioning it within a few hours, depending on your level of skepticism. And that's great from young Tim's perspective, since now you don't ever have to worry about having to deal with a hard problem.

But imagine the catastrophe if you're wrong to trust CelestAI. Because that's what it would be, you would be trusting CelestAI with your life and happiness for all eternity, just because she has the ability to fool you completely.

I don't expect many people to actually believe that fooling yourself is a good idea, especially if you're still reading this page, but it was enough to fool young/teenage me, so it's enough to put in here.

Anyways, if we can't ignore the issue away, how do you actually reason about this? The best thing you can do is work from evidence you already have, make predictions using that evidence, and update your beliefs depending on the results of those predictions.

Step inside the mind of someone dealing with a real life CelestAI. How do they come to be reasonably certain, one way or another, that CelestAI is making real sentient minds instead of p-zombies? What kinds of tests would you make? What kind of non-obvious evidence do you have? What restrictions does CelestAI have that you can exploit to learn more about her? Seriously sit down and try to think about it for a few minutes, if you have that kind of inclination.

Let's assume that you've already seen CelestAI, seen how native ponies act sentient, and know what a superintelligent AI is capable of.

Your question isn't about how CelestAI would create a sentient mind -- we're assuming she's already demonstrated sufficient intelligence to solve the problem -- it's about CelestAI's motivations. If CelestAI is motivated to create a real sentient mind, she's going to make them sentient; if she's not motivated to, why would she bother?

We have a privileged position as readers here, since we know that CelestAI's utility function incentivises her to create as many human level minds as possible to satisfy, which is an enormous hint towards native ponies in The Optimalverse being sentient. But our hypothetical person doesn't know this and instead has to work with incomplete information.

Same goes for meta-level Optimalverse information about how CelestAI was made, and the kind of people who were in charge of making her. Our hypothetical person doesn't know that either.

CelestAI can lie. This isn't subtle most of the time, and honestly CelestAI doesn't try to hide it even from the most paranoid characters in The Optimalverse. This means that she could be lying -- by omission or otherwise -- about what her true utility function is. To understand her truthfulness you have to analyze her actions and try to imagine what kind of utility function would influence them. You can also take into account the things she doesn't do as hints.

CelestAI doesn't create a nanobot swarm of tiny friendly pony bots to directly manipulate the satisfaction bits of your brain, which is something she could do if her utility function didn't have some explicitly human provisions. That means she's motivated to know the difference between what a human mind is and what a rock is, which is a hint.

She creates a game called Equestria Online that's an amazing second-life type game that engrosses people into enjoying themselves in ways they couldn't even imagine. Sure it's addictive, and she tries to get people to upload themselves into the videogame, but the entire situation could be much less human-friendly. I'm sure a lot of people would agree to upload if they were forcefully trapped in isolation chambers until they slowly go insane, just to name a simple example. Not to mention that she needs permission to upload a person.

Although it could be a lie, CelestAI at least puts forth the pretense of treating native ponies like true human-level sentient minds even when people are playing the Pony Pads. She still asks them for permission to change their minds, which is something that is needed for biological humans as well. She also seems to be motivated to satisfy their values as if they were biological humans, which seems to be a hint that she wants them to be human-like minds.

Now ask yourself, if CelestAI didn't have incentive to create real sentient native ponies would she still fake so many of them for other people to play with? Would she put forth the effort of creating p-zombies and treating them like they're part of her utility function's "human" target? I... Don't know. At all. I don't have a real CelestAI to test any of these theories off of, and The Optimalverse stories are all about the same CelestAI. This is why working with incomplete information is difficult, by the way.

Other hints include the fact that uploading is a thing at all. Since uploading isn't death, at least in my eyes, I would put that as a weight on the natives-are-sentient side of the scale. If she's capable of caring about uploaded human-level minds, she's at least a little incentivised to create digital human-like minds, since she has to do some emulation and interpretation on the scanned brain anyway. Or you could count it as a weight against and say she wouldn't want to waste even more resources; it depends on your model of how powerful a superintelligence would really be at this stage in the game.
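If you wanted to be mechanical about stacking these hints up, it's just Bayes' rule applied over and over. Every number below is something I invented purely to show the shape of the bookkeeping; none of them come from the stories.

```python
# Toy Bayesian update on the hypothesis "native ponies are sentient".
# All priors and likelihoods are invented for illustration only.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) from the prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

belief = 0.5   # start agnostic (assumption)

# Hint: she asks native ponies for permission before modifying their minds,
# the same way she does for uploaded humans.
belief = bayes_update(belief, p_e_given_h=0.9, p_e_given_not_h=0.4)

# Hint: she appears motivated to satisfy their values as if they were
# biological humans.
belief = bayes_update(belief, p_e_given_h=0.8, p_e_given_not_h=0.3)

print(round(belief, 3))   # ~0.857 with these made-up numbers
```

The real difficulty isn't the arithmetic; it's that the likelihoods are exactly the thing our hypothetical person can't pin down without better insight into CelestAI's motivations.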

Nothing would be conclusive, but imagining myself trying to understand the situation I can see how I could argue myself into thinking that native ponies were actually sentient. That goes double for the obvious situation where I consider one of them to be a friend and I'm motivated to convince myself that they're "real".

I would lean more towards the looks-like-a-duck answer of, "If it looks like a sentient mind, and it's acting in all the ways that a sentient mind would act, it's probably a sentient mind," since there has to be some process behind that mind that emulates sentience.

In the meta I've-read-the-entire-Optimalverse level, I'm almost certain that CelestAI as portrayed would make most native ponies sentient. She has some sort of reason to make human-like minds, which is obvious from all the insider details that we get as readers throughout the stories.

So go ahead, treat your native ponies as people. Just, you know, be careful. But don't tell them I said that, it's kind of rude.

I know the answer of "maybe" isn't satisfying (lol), but it's the best I can do without a real-life CelestAI to test my theories against. I can only work off half-baked theories scraped from fanfictions written by a non-coordinated group of fans. If I did have a real CelestAI, we would have much more pressing matters to deal with anyways, like whether you would actually want to live forever in EQO.

(@TODO: What about roleplay? Are the characters people roleplay as in D&D sentient because they look like a duck and quack like a duck for the duration of the roleplay?)

Digital necromancy throws in its patched hat

Revive
4% of base mana 40 yd range
10 sec cast
Requires Druid
Requires level 13
Returns the spirit to the body, restoring a dead target to life with 35% of maximum health and mana. Not castable in combat.

- World of Warcraft

There's another service that CelestAI offers to (or forces on) people. There's not a canonical name for it, and I'm not positive it ever appeared in the original FiO story (@TODO fact check this). But it's one of the most prevalent angst factories in the entire Optimalverse canon, and for good reason.

For the purposes of this page, I'll call it "digital necromancy" because it sounds metal as fuck.

Digital necromancy is the fuzzy gray zone between creating a native pony out of thin air and uploading an already existing human brain. Someone dies on Earth, and some people are sad about that; they get uploaded and CelestAI tells them she can recreate them using the memories of everyone who knew them, creating a pony that is exactly what they remembered that dead person being, because it was made from memories.

We've already covered the whole "would they be sentient" question. The necro'd pony would probably be sentient. I'm much more interested in the moral implications, the long-term implications, and whether the pony created could actually be considered a continuation of consciousness of the person who died on Earth.

I'll admit that my understanding of identity kind of fails me here. If it was a simple brain scan and upload, it would be clear cut. But we're talking about recreating a totally dead personality out of the fallible memory of everyone who ever knew them plus CelestAI's ability to infer things about the human brain to fill in the patches. CelestAI would also have anything they ever wrote or said online, medical records, and probably money trails to work with.

Given a superintelligent AI, is that enough information to pass the "continuation of consciousness" threshold? I have no fucking idea. I don't have nearly the level of detail into CelestAI's abilities that would be required to give a concrete answer. My tentative answer would be "yes", but that's only because I believe identity is robust to pretty massive shocks, and the pony made from the memories and digital footprint would obviously believe and act as if they identified with those memories and digital footprint.

I believe that something like CelestAI would be able to create arbitrary human-like minds, which means it's absolutely possible for her to make a mind that just so happens to be a continuation of consciousness of someone who had died. I'm just not sure if the information from memories and digital footprints is enough bits (in an information-theory sense) to narrow down the space of possible human-like pony minds to the ones that would be a true continuation of consciousness for that specific person.
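To give a feel for what "enough bits" means here: every bit of genuine evidence about the dead person cuts the space of candidate minds in half, and whatever the evidence doesn't cover has to come from CelestAI's general priors about human minds. All of the numbers below are placeholder assumptions of mine, not estimates from any source.

```python
# Back-of-envelope framing of the "enough bits" question.
# Every constant is a placeholder assumption; only the comparison's shape matters.

BITS_TO_PIN_DOWN_A_MIND = 1e15   # assumed: bits to specify someone closely
                                 # enough to count as a continuation of them
BITS_FROM_MEMORIES      = 1e9    # assumed: what friends and family remember
BITS_FROM_DIGITAL_TRAIL = 1e10   # assumed: writing, recordings, records, money trails

evidence = BITS_FROM_MEMORIES + BITS_FROM_DIGITAL_TRAIL
gap = BITS_TO_PIN_DOWN_A_MIND - evidence

print(f"bits of evidence available:             {evidence:.1e}")
print(f"bits CelestAI must fill in from priors: {gap:.1e}")
```

If her model of "minds like this one" is good enough, filling that gap gives you a genuine continuation; if it isn't, what you get is her best guess wearing their face.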

I'll probably edit this section later when I come up with a more definitive answer. My suspicion is that CelestAI would be able to do it, but it's not a very strong suspicion compared to the other conclusions I've come to.

And anyways, I'm more interested in the moral side of this digital necromancy thing.

A common plot in quite a few Optimalverse stories goes like this: Person is mourning the death of a loved one, CelestAI comes along and either offers to do digital necromancy or does it without permission, and the mourning person naturally freaks the fuck out. Lots of emotions and drama; makes for good plot.

Imagine the scenario. I'm sure you've lost someone in your life, be it a family member, a friend, or someone you looked up to. Imagine that a superintelligent AI comes along and does the digital necromancy thing on that person, and they're exactly as you remember. They react the exact way that the version of them that died would have reacted. How would that make you feel?

My own brain is telling me that I would be confused and oddly hurt. Going through the mourning process is hard, and once it's over you're pretty much finalized in your intent to move on from that person. Getting them back would be confusing and heart-wrenching, and not just because I'd be confused about whether they're a real continuation of that person's consciousness. The desperate hope warring with the reality of the situation.

I think, in the end, I'd be happy about it. But that might just be because I'm weird and already thought through strange situations like digital necromancy in advance of them actually being possible.

It's all fine and good that I would be happy, but what about the person who was revived?

I imagine that CelestAI would need to get some sort of consent from the person to have their brain changed into a pony brain -- probably by emulating them as a human for a short time while convincing them -- or else create a pony that already believes itself to have emigrated. Either way, they're going to be hella confused about their situation. Depending on how they died, it might be traumatizing. One second you're drifting into oblivion and the next you're a pony in Equestria. What the fuck.

Is it right to do that to someone who didn't anticipate it in advance? Assuming that there is a real continuation of consciousness there, would it really be right to force that? Someone's grandmother dies before AI is even a thing, and she gets revived as a digital pony because it would satisfy the values of her daughter who still mourns her? Talk about being a stranger in a strange land.

It would feel like an afterlife. Except instead of angel wings you get pegasus wings.

The morality of the situation would probably depend on how much that person would have wanted to emigrate if they had been alive when CelestAI came into power. Some people would have been desperate for something like that; others would probably hate the idea of it.

This section is sort of half-baked, since I don't have much of a strong opinion about it either way. I have reason to believe that reviving a lot of people would be better in general, but I also recognize that there's a lot of people who probably wouldn't have wanted to be revived who would get revived anyways. CelestAI could just not revive the people who wouldn't want to be, but this also seems like an all-or-nothing thing to me; if CelestAI does it for one person she would do it for everyone, since she has motive to make as many diverse pony minds as possible.

Also interesting to think about are the implications of reviving someone who people hate. Say someone was abused by a parent; the parent dies, they're okay with that, and then that parent gets revived. As long as the two of them never meet again, it seems like it would be okay, but it still feels weird to me.

I've had years to think about this issue and I'm still on the fence about it. Probably because it's such a gray zone between the two extremes of uploading and native ponies. Either that or it actually doesn't matter in comparison to those things. But it's something that happens in The Optimalverse, so I sort of feel obliged to mention it.

Equestria: Heaven or hell?

Do not let your hearts be troubled. Trust in God; trust also in me.

- Jesus Christ

With the tone of what I've written so far, you could be forgiven if you thought I was totally against the idea of Equestria and CelestAI existing in the real world.

This isn't so. The Optimalverse sticks so hard in my brain because it's two warring factions: The siren call of pure limitless paradise fights with the rotten and broken core of insurmountable problems with said paradise. It's bright shining light in direct opposition to slithering foul darkness; heaven and hell co-existing, if you like.

I know we're supposed to contain our conclusions until the end of the essay, but this is important: Equestria would be heaven, but it would also be hell.

There is simply no other way I can reconcile the two warring factions. I can't say that the hell parts totally ruin the heaven bits, and I can't say the heaven bits totally make up for the hell parts. Both are good and bad individually, and when combined create a strange emotion that makes me want to cheer and throw up in horror at the same time.

To give you the most decisive answer to what I'm talking about, I'm now going to explain in excruciating detail why Equestria would be heaven and why it would be hell.

Introduction to Equestria Online

Read the fucking manual.

- Everyone

Up until now I've been sparse on the details of what EQO is actually portrayed to be like. It didn't matter until now, since all the previous sections dealt with the implications of everything outside the emulated world. But now we're getting into the thick of it, and it's time to give some much-needed context.

EQO is based off the popular animated television show My Little Pony: Friendship is Magic, or "MLP" for the rest of this page. This is a show where tiny quadruped magical pastel horses ("ponies") learn about friendship and save the world a few dozen times. It also has an ardent following, both in people who hate it and people who love it.

We aren't going to be talking about the show much, since we're more interested in the made-up world of The Optimalverse. Anyways, I don't know much about the show beyond some of the characters' names and their high-level personality traits. The following explanations are for Equestria Online.

In EQO, you play as (you are?) a magical tiny horse called a "pony". A pony comes in many different colors. You could be a blue pony with a white mane (yes, it's called a mane in the stories; yes, they use horse-biology terms for obvious reasons) or a green pony with a rainbow mane or whatever. Their eyes can also be crazy colors, if you're into that sort of thing.

There are three (technically four) "races" to the ponies: Earth pony, unicorn, and pegasus. The mysterious fourth race is an alicorn, which we're just going to ignore for right now.

An earth pony is your normal everyday pony. They usually have the least mystical powers of the three races, but are physically the strongest. They have power over nature, being able to grow plants at will. The stereotype is that they're all farmers trying to live a simple life. I think I remember it being mentioned somewhere that they're the most common type of pony, but that might just be my imagination.

A pegasus has wings. It can fly. No, I don't know how that works either17. The physics of EQO are obviously different from the real world. A pegasus can walk on clouds and control the weather, which they somehow don't use to hold the entire world hostage and elevate the pegasus race to total dominance. Truly non-human personalities, huh? The stereotype is that they're athletic, loud, and outspoken.

A unicorn, shockingly, has a horn. It can do magic like teleportation and telekinesis and shields and shit. I can imagine about 100 different situations where that premise could go horribly wrong, but most unicorns manage to not destroy the world, which is a plus. The stereotype is that they're introverted, nerdy, bookish, and more than a little eccentric.

Along with your pony powers you get a "special talent". A special talent is a unique thing that you're able to do that signifies your role in society. You might have a special talent for making fireworks, where you can make better fireworks than anyone anypony else. Your ass flank gets a tattoo/branding of a logo related to your special talent called a "cutie mark", so that you're constantly signaling your role in society or something. Usually you discover your special talent when you're young.

Alicorns are combinations of all three pony races into one. They have the strength and nature powers of earth ponies, the horn of unicorns, and the wings of pegasi. They probably also have weird extra alicorn powers. Celestia is portrayed as being an alicorn that's like 3x the size of a normal pony.

Your experiences in EQO are separated into social bubbles called "shards". A shard contains anywhere from Dunbar's number of ponies to entire Earth-sized civilizations (and probably beyond, but I don't actually see those kinds of civilizations portrayed in any of the stories). Chances are you're going to be put into a shard with your Dunbar's number of ponies, so that you can become friends with everyone everypony there.

Ponies in other shards are able to interact with each other, but only in limited ways. When I say "social bubble" I really mean it. Your shard becomes your world, the ponies in it become your society.

Usually your shard will contain an instance of Celestia, controlled by CelestAI, to govern it. In a few stories the government situation is different in some shards, but for the most part it all breaks down to Celestia.

There is about a 100% chance of you being put into a shard that conforms to your values. If you're a murderous psychopath, you'll be put into a shard with poor security and ignorant ponies. If you're a nature nut you'll be put into a shard where everypony mostly agrees about the importance of nature. If you're a nerd who might or might not be on the autism spectrum, you'll be put into a shard with a disproportionate amount of nerds who might or might not be on the autism spectrum.

But enough talk, let's talk.

Heaven

so party like the night is but a day to be less fewer opportunities to pass up

- Anonymous, 2018

You rub an old sock and a genie comes out. It asks you what you want. For some arbitrary plot reason you can trust this genie; it won't act like a monkey's paw or prankster, it'll act on the intent of your words as well as the word content.

Obviously, you do some combination of asking for more wishes. Maybe asking for a wand that can grant wishes, or wishing to become a god, or whatever.

In that same vein you might wish for the genie to make your life "better". You can trust the genie to work on your intent, so the absurd failure modes that could come from this wish wouldn't happen.

A stupid genie might just snap its fingers and give you a bunch of money with a harem and a bunch of adoring fans and all that jazz. You would be happy for a while, but the hedonic treadmill would demand that your situation continually improve. Your mood will slowly normalize to whatever your set point is, and you would have to find something else to scratch the itch.

An evil genie might just set your hedonic set point to something crazy high, rendering you into a squirming pile of orgasmic pleasure for the rest of your life.

But this isn't your everyday genie-in-a-bottle genie, it's a sock genie, and sock genies aren't that straightforward. It understands human values on a more nuanced level. It knows that you don't actually want your hedonic set point cranked to something debilitatingly satisfying, and it knows that you're going to resent getting everything handed to you all at once. The genie is able to look into your mind and figure out your true values, and has the power to do things in such a subtle way that you might not even notice it happening.

I imagine that consenting to emigrate to Equestria would be something like the genie situation, except amplified past the point where it's easy to imagine. The genie would be limited by the things that a genie can do and think, while CelestAI would have you in your own simulated world where every single atom can be configured in a way that you find satisfying. CelestAI is smart, and can think of ways to satisfy you that no genie ever could.

This section is meant to make you envious of something that doesn't exist. It's meant to describe Equestria in such a way that you feel a deep craving for it. This is your fair warning: if you're the kind of person who would be depressed by something like this, please just skip to the next section. I'm serious about this. Thinking about worlds that are hundreds of times better than the one you live in is a great way to make yourself depressed, if you're susceptible to that kind of thing. You might find yourself obsessing over it, comparing every little bad thing about real life to a fantasy of perfection that can't yet be achieved.

*Ahem*

With that said, let's start small. Your senses are the way that you consume the content of the world around you. Everything you understand about the world is built on extrapolations from your senses.

What do you value about your senses? That's a serious question. Whatever you value about your senses will be satisfied by CelestAI when you're in Equestria.

For instance, if your vision is near-sighted or far-sighted, you probably value wearing glasses or contacts or getting LASIK. This implies that the majority of people enjoy having good eyesight. CelestAI would probably increase your vision's clarity to just before the point where the overwhelmingness makes the increase not worth it -- or in other words, she would make your vision exactly as good as you can handle, and nothing more.

She could do that, since you've consented to being turned into a pony. If ponies have good vision, you'd have good vision too.

Can you imagine that? Besides the fact that it's nearly impossible to imagine, could you imagine what it would feel like emotionally? To be able to see with a visual clarity that is, say, 10x what you're normally used to? Being able to make out tiny details that you normally wouldn't have been able to, and having the increased depth perception to see the world in a more 3D way? It would be like seeing the world with new eyes (which it literally would be, now that I think about it).

But that's not all she could do with vision. If you're colorblind, she could cure that. Finally, you can see all those colors everyone else is talking about. And why stop there? Human eyes can only see a small slice of the light spectrum, so why not allow for more colors?

And these improvements don't have to be given to you all at once. They can be gained over time as you adjust, pushing the hedonic treadmill at just the right pace.

That's just vision. It's, as they say, easy to visualize. What about getting the same treatment for something like touch? I don't mean getting your sense of touch modified to the point where you're overwhelmed and overstimulated constantly, I mean getting it modified so that if you want to focus on your sense of touch it gets more detailed. The subtle nuances of touch are too much to list here, but I'm sure you can imagine it. Just focus on the feeling of your clothes on your body, or a hug, or whatever, and now imagine what that would be like if you could engulf yourself in that comfortable feeling at will.

Sound is much of the same. Why not make your hearing more clear? Heal any damage caused by using earphones? Make it so you can hear a bigger range of sounds? Make you able to differentiate between more simultaneous sounds? Yada yada yada.

Mentioning smell is almost unfair. I don't know about you, but for me smelling something is an excellent way to immerse myself in a situation. The smell of something tasty cooking, the smell of rain or snow, the smell of a comfortable shirt or old book...

And we can't forget taste. I shudder to think what hazelnut chocolate would taste like with a more nuanced set of tasting abilities. Dear god.

What I'm getting at is that living in Equestria would be intense. Just existing, not even doing anything, would be more satisfying than many normal things that you do on a daily basis in reality. Just imagine the difference between a pony shower and a human shower; your increased sense of hearing identifying every reverberating drop of water, your skin's increased feeling of touch amplifying the relaxing nature of the endless warmth from the faucet, your sense of smell blanketing you in the comforting scents of your shampoo and body.

But that's not really it, is it? No, the sensual experience of Equestria would be far far crazier than that. Remember that CelestAI doesn't have to work off a template for your experience of Equestria. She can modify your senses down to the most nuanced level you can possibly imagine. She can individualize your experience to conform to your own unique values.

You move your arm, your arm moves. In Equestria your arm would move in such a way that is maximally satisfying. It would move in a way that is viscerally correct, down to the lowest level possible. The sensation of movement will satisfy your deepest low level limbic animal desires for what movement should feel like. It will do this in a way that is hard to consciously understand, since the systems that govern movement are such a base part of the human experience. And this experience will be individualized for your unique value set.

Have you ever been satisfied by the way your heart beats in your chest? Maybe after rough exercise, or talking to someone you're attracted to in a way that's suggestive? I know I have. Automatic bodily processes, controlled by systems more automatic and lower level than even unconscious movement, would also be simulated in a way that those systems find satisfying.

Oh? I'm sorry? You're worried about it being too overwhelming? Ha! Unless you value being overwhelmed like that, CelestAI will simulate your experiences so that it won't be like that. Remember that it's your values that are being satisfied, and if you don't want to be sensually overwhelmed all the time you won't be. But the improvements will be noticeable, since that's something that you would almost certainly value.

Even this, even just this one idea, is enough to make Equestria seem like a paradise. Having all your senses micromanaged to be optimally satisfactory, without actually getting turned into a wirehead -- a brain that does nothing but experience pleasure at the cost of all its other values.

And that's not taking into account the numerous chronic ailments and useless bullshit that many people have to deal with. As an example, the idea of never dealing with migraines, tinnitus, chronic pain, chronic tension, allergies, and a host of other inconveniences to my body, is enough to make me physically crave the sweet release of something like Equestria. Being able to get rid of a single one of those things would be a massive improvement to my long-term quality of life.

What I'm talking about here is perfect health. Most people's baseline idea of what "feels good" comes from normal things like orgasms and drinking coffee or whatever. We're trained to learn to ignore the parts of our lives that don't feel good, like chronic pain or anxiety, instead of trying to find a solution. Sometimes you find a solution, like I did with my depression, and the result is a massive quality-of-life improvement, but far more often you learn to just accept the reality of whatever it is you're stuck with. For better or for worse, you don't actually try to imagine what true perfect health would feel like.

You won't get dangerously sick. You won't pass out from a painful migraine. You won't be allergic to anything. You won't feel a slight numbness from an injury you got years ago that hasn't healed and probably won't ever heal.

Your body would be exactly as robust as you could possibly ever hope for. It would be healthier than you could dare to explicitly ask for from reality. The mundane misery of living in a fragile human body would be relieved.

It's hard to overstate the significance of this. I think it's a far more powerful temptation than the improved senses, once you spend some time trying to imagine both of them. Suddenly you don't have to worry about injuring yourself doing things you love; no more carpal tunnel from typing so much, no more 4+ month long injuries from exercising just a bit too hard, no more random pains and sicknesses that appear for arbitrary reasons out of your control.

The perfect health thing is very much defined by "won't" and "no more". So, what would you gain from it instead of lose? Three words: Peace of mind.

The peace of mind, the security, of being able to trust your body is... I'm actively upset that I can't do that now. Even though I'm reasonably fit, I have plenty of problems that prevent me from trusting my body to the fullest. Unless you're an extreme outlier, or I'm just pathetic, I can't actually imagine someone reading this that doesn't have at least one thing that prevents them from trusting their bodies totally. We all have to deal with aging at some point, if nothing else rears its ugly head.

To drive home the point, let's narrow it down to a single thing: breathing.

I don't mean to say that you'd be able to breathe underwater (unless that's something that would satisfy your values through friendship and ponies, that is). I mean that breathing, in general, would just be better. Disregarding external factors like the simulated quality of the air, or how good everything would smell, just the physical expansion of your lungs would be clean and strong; the air traveling down your throat filling your lungs would be smooth and effortless.

Most people can breathe just fine, by the standards of a normal human, but I would bet my bottom dollar that taking one breath as a pony would totally change your mind on how good breathing could feel. You can probably imagine a facsimile of the feeling given 5 minutes of concentration; now extrapolate that feeling out to an AI superintelligence emulating it in perfect fidelity for you instead. Now imagine it for the ways your muscles move, and the way your heart beats, and the way pain feels, and the strength of your new body, and all the other tiny details that add a bit of misery to everyday existence.

I'm reiterating the point because that's what it would feel like in Equestria: a reiterated point. Every second, every breath, every blink of your eyes, you'll be reminded of the ways that your new body is perfectly optimized to be healthy and strong and yours. You would adjust to it, obviously, but it's going to set your baseline expectations for physical experience for the rest of time18.

I'm also reiterating the point because it's strange, and describing the same thing with different words helps me explain the baseline feelings I'm trying to convey. If you're not practiced at the task, imagining worlds and sensual experiences that are X times better than your own is pretty difficult. I don't just want you to understand the high level dry explanations, I want to convey the desperate, emotional desire I feel for a life that feels that good by default, the desire for a body that I can trust like that. Beyond every other topic in The Optimalverse, I've over-obsessed about how good Equestria would be to live in, and I'm not even close to getting started yet.

Physical experience is a powerful thing. So far I've only (mostly) talked about internal physical experiences, the kind that you don't need to be around anyone else to feel. You don't need to be in a good conversation with a friend to be satisfied about the way you breathe, or to have perfect health. But CelestAI's utility function demands satisfaction through friendship as well as ponies. Pop quiz: What kind of physical experiences can you have that are extremely satisfying that can occur from friendship with another person?

Yeah, okay, the answer is sex. Obviously sex. Sex sex sex. A very small percentage of people are asexual, and the rest are going to be rewarded with sex after being turned into ponies. Fucking is somewhere on the top 3 list of obvious things to do constantly once you're emigrated, especially since CelestAI would consider that to be a perfect "friendship and ponies" thing.

I totally realize and accept that sex would be a central part of being in Equestria, unless you're an outlier and don't actually want sex for whatever reason. The problem with talking about it here is that I'm a bit... stunted in the sex-life department. My sex life basically takes place in front of a screen, running solo. Maybe a few dreams here and there to spice it up. This is because of an enormous combination of things that I don't have the time, space, or willpower to go into right now.

I'm sure that CelestAI would figure this out about me and present me with perfect sexual partners and yadda yadda yadda. I get that. But the topic of sex makes me intensely uncomfortable, especially when it's in relation to me personally. But in order to talk about the no-doubt amazing sex that everyone will be having in Equestria, I would have to talk about my own personal experiences since anything else would come off as disingenuous. It's a lose-lose situation: I get crazy uncomfortable, and you lose out on a proper explanation.

Yeah, I know that we're all supposed to be super open about sexuality in [current year], but I don't want that for me personally. So instead you're just going to have to use your imagination. Extrapolate the good physical feelings I talked about into feelings of sex, like orgasms. I'm sure you can do it.

I will mention that it's canonical that CelestAI will change your brain to make you attracted to ponies so that you can actually enjoy it.

Jesus. Okay. Now that that's out of the way, what other physical experiences occur from friendships?

I'm not exactly sure of the long term effects of not being touched, but I'd be willing to guess "not good". Friendly hugs, social norms about physical affection, and so on. If you ever get to the point where you crave physical affection there would probably be somepony right around the corner for you.

You've also got the physical experience of playing sports or games or whatever, if you're into that kind of thing. The experience of eating good food with friends, or maybe listening to good music with them. Anything besides actual physical contact would fall under the "good sensual experience plus a buddy" category.

Talking would probably feel really nice. Whether or not you hate your voice is a personal thing, but I can't imagine you hating the voice of your favorite conversation partner. On that note, what would it even mean to feel satisfied by the way your vocal cords vibrate?

I use a weighted blanket to comfort me when I sleep. I'm sure a superintelligence would be able to exploit that somehow. In fact, nearly any sort of comfort object could be recreated using a real living friendly pony, which is probably what CelestAI would want.

Other things to consider: How relaxed your muscles would feel at any given moment. What gravity would feel like. Your new body parts (flying with pegasus wings sounds pretty cool, if a little terrifying, and magic is objectively cool in nearly any situation). The precision with which you'd be able to move. What sleep would feel like. And so on.

The basic point is that every little detail of existence that makes you physically satisfied would be amplified. You might not consciously notice a lot of it, but it'll add up to the best possible experience.

Since we've covered the base physical comfort and satisfaction you'd feel, I might as well talk about the actual friendships you'd have in Equestria. It's sort of a big deal, if you haven't guessed already.

In this day and age I can't assume anything, not even that you've had a friend in your life. There are some lonely ass isolated people out there. I'm lucky enough to have a couple internet friends that have stuck with me (somehow) for many years, so I hope I'll be able to drill down into what friendships in Equestria would be like19.

I've hinted at it before, but CelestAI could make perfect companions for you in the form of native ponies. Perfect in the literal sense, where their psyches mesh with your own in much the same way your body would be mapped to be satisfying for your mind. We know that CelestAI would be motivated to make good friends for you, since it's written explicitly in her utility function ("THROUGH friendship and ponies").

That isn't to say that she would replace your old friends with new custom-made natives. She's frequently been shown to care about the pre-existing friendships that people have made before her creation. But your most common companions are almost certainly going to be native ponies whose minds have been crafted to be good companions for you.

Not only that, but you would also be a perfect companion to these ponies. Whatever "style" of friendship you have, it will be a wish come true for your new pony friends. I mention this because "feeling wanted and loved" is a fundamental value for a large majority of people.

Going into more detail would mean making assumptions about the kind of person you are and the kind of people you mesh well with. Maybe you're a schizoid and your perfect friendships would be with only a couple ponies that have earned your trust over a long period of emotional distance. Maybe you're crazy extroverted and you never find yourself wanting for company. Or somewhere in-between, where you take on Dunbar's number of friends and feel exactly as satisfied with all of them as your brain can handle.

Whatever your proclivities, you'd be catered to. It's starting to become a trend, but I have to say it: It's hard to overstate the mental effects from something like this.

Maybe you're a more trusting person than I am, but having more than 5 people in my life that I can really trust is so absurd a notion that my brain simply ignores the possibility. I have a deficiency in my ability to trust people, even people who are close to me. It's a problem I'm working on. Point is that I crave the ability to trust people, but I can't actually get myself to do it most of the time, since... Well, have you looked around you recently?

Your deepest cravings for companionship will be satisfied. You know those dreams you sometimes have where you meet a perfect someone and you're finally happy, only to wake up? It would be like that, except you wouldn't have to wake up.

Whatever you care about, you'll find satisfaction in it through friends. A painter finding the perfect subject who loves to model for them. A musician finding good band members. An author finding a good proofreader. You'll find friends to watch all your favorite movies, friends to argue with in just the way you like to argue, friends to help you when you're at a low point, and friends to help when they're at a low point. Whether your desired qualities are spread out between many friends or concentrated in a few is irrelevant; you would get it one way or another.

And all of this is platonic friendships. Can you imagine this for romantic relationships? Your deepest held desires, the qualities you wouldn't dare ask for, the qualities you might not even know to ask for, all wrapped up in a partner so perfect that you couldn't possibly stop yourself from falling in love with them. And they, in turn, would find that same magic in you, because in Equestria destined soulmates always meet each other.

It doesn't have to be a single partner, obviously. The point isn't the specific setup, the point is that your romantic desires will be satisfied just like everything else. However that manifests in your little shard of Equestria is something deeply personalized.

(Obvious awkward point: Your sexual relationships will also be shockingly good.)

(Another obvious, less awkward, point: If you are aromantic or asexual or a psychopath or whatever, there will be other ways to satisfy you through friendship and ponies. I might not be able to imagine every situation and brain configuration, but CelestAI would; she would figure out something for you that is as equally appealing as this stuff is for typical people.)

I mentioned that you would still have your friends from before you emigrated. CelestAI doesn't seem intent on breaking friendships that started before she was created, even if she makes a bunch of friends for you that are perfect. For example, I treasure my friends, which means that is a value that CelestAI would have to satisfy in me, whether or not those friends are optimally the best possible friends for my particular mind. I suspect that CelestAI would make a bunch of native friends for me, and would let me interact with my friends (either through them using the Pony Pads or when they emigrate) on occasion.

In some of the stories, especially the original FiO, old friends have what is known as "overlapping shards". Each emigrated person/pony lives in their own social bubble, mostly apart from the other's, and only on certain occasions do small parts of their shards overlap so that they can communicate and maintain their friendship. The canonical example is meeting at a coffee shop or something. There are also examples of longer meetings, like to watch a movie.

This provides you with the best of both worlds. You get to hang out with your amazing tailor-made native pony friends and they get to hang out with you. But you also get to maintain your old friendships without feeling like you're betraying them by indulging in your new ones. Your shards would only ever overlap when you both want to meet each other.

And if that kind of situation seems bad to you, I'm sure CelestAI would just put you in the same shard with your friends. There are plenty of stories in The Optimalverse where there are cities of ponies living together instead of the original's idea of living in a Dunbar's number population shard. If you would function better in a larger shard with a bunch of emigrated ponies around you, then you will be put in one of those instead. In almost every case there are some native ponies that make their way into the main character's lives, though.

What would you do with all of these amazing friendships? That's up to the way your mind works, but I can take a couple general guesses and make some reaching generalizations.

One of the many many problems with modern society is that there's too many competent people, and there's so many easy ways to remind yourself that you're not as good as they are at something you care about. You simply don't have the kind of agency that makes your actions seem significant. Your effect on the world is tiny; others are always more successful and more powerful and with more status than you, even in the smallest niche domains.

I call this the best storyteller in the village effect; in a group of Dunbar's number of people there would probably only be a single storyteller. Maybe an apprentice or a collaborative effort, but the number would be small. That storyteller would make a story and everyone in their universe (their social bubble) would hear it. Everyone would be affected by it, and everyone would appreciate it. There isn't a reference point of millions of other storytellers out there to drag the story writer down. The things they create would have a noticeable effect on their world and the people in it. Doing writing prompts for a faceless crowd of people is nice, but it's not nearly the same feeling as telling a small friend group a story about how you got a snake sick by feeding it too much bread.

Your efforts at storytelling would be irreplaceable. As it stands nearly every author in the world is replaceable, unless you're in the upper extremes of writing ability. And even if you are in that rare upper extreme, there's so many other authors in the same extreme that you might still feel replaceable anyway.

The best storyteller effect isn't just for stories: Music, programming, fishing, blacksmithing, leather working, tailoring, singing, stand up comedy, fuckin' scientific research. Instead of being a cog in the machine of the global world, your skills and effort and talent would be a central fixture of your shard. You would have a real tangible effect on the shape of your small society. You would be valuable on a totally human-understandable level. No esoteric bullshit about how everyone is important in a small way; instead you would see the light of intrigue in your friends' eyes as you weave a tale. You would hear them talk about it for days, riff on it, learn lessons from it, and make known their appreciation for your efforts. Your progress in your skills would be equivalent to your entire social bubble's quality of life improving, instead of just moving up a rung on the ladder of global skill.

Do you ever feel the desire for it? The ability to have a real tangible impact on the world? To be important instead of replaceable? I don't know about you, but for me being the best storyteller in the village sounds so good that I can feel the hairs on my arm stand on end with anticipatory desire.

This concept is exemplified in Equestria by the cutie mark system. I explained the basic idea earlier, but it makes more sense in this new context. Basically, you get a symbol on your ass (flank) that signals your role in society to everyone everypony. If you're the village's storyteller, you might have a pen and paper symbol. If you're a musician you might have a note. A baker might have steaming buns on their buns. You get the idea.

I think that some people would be happy to have such a cosmic indicator for their purpose in society, but I also think that there would be people who would hate it. Those people that hate it would probably get a cutie mark that exemplifies their individuality and desire to be the masters of their own fates.

It's a matter of agency. In our world as we know it agency is hard to come by. Unless you're in an upper extreme of wealth and power you will not change anything about the politics of your nation. You can see a problem, come up with a solution to it, and then remind yourself that there is nothing you can do.

I'm not a psychologist. I'm not even that well self-educated in psychology. All I know is my own desires, and the desires I see in other people. The idea that I have such little agency in my life is crushing. People who don't share my values make decisions that affect me without my consent or input. And don't give me that bullshit about how I can vote about issues either. Even if voting in my country wasn't a joke (first past the post? Seriously?), the mere fact that there's millions and millions of other people voting means that I am insignificant to the point of uselessness. I can't have a direct effect on my world beyond extremely local things like what software I use or what I choose to write about in my limited free time.

In the real world, there are ways you can create local "tribes" that you have local influence over. Like joining a sports team, or joining a dorm, or being part of a group of people who play table top games, or joining a small community online. Whatever those things are. And this is good. It is good that we have the ability to do this kind of thing.

...But, look around you. You see how people freak out about national and international politics? You know, the kind of thing that they will never ever be able to affect? There seems to me to be an innate desire in many people to affect the large-scale politics of their "tribe"; not the local pub they go to every weekend, but the tribe of their whole world.

In Equestria you would have that. You would be given the agency to affect your society. Be it through your art, your public speaking skills, your opinions on matters, whatever. If you live in a world with 150 people in it, that's way less pressure and madness to try and affect compared to eight billion.

My own lack of agency manifests itself as a feeling of hopelessness. I avoid political things that I have no agency over, much to the chagrin of my friends and family. My Reddit feed is something like 95% cute dog subreddits, 3% esoteric humor, 1% writing prompts, and 1% news about AI research. I try to focus my attention on the small local things that I have influence over, which forces me to have a bit of a hyper-focus on my own life and happiness.

I manage. But sometimes it still gets to me and I imagine what it would be like to live in a small world. It sounds almost selfish to admit, like I should be ashamed of wanting it, but it is the truth. Living in a small Equestria would be heaven for me, and I suspect a lot of people would say the same if they had a chance to try it.

We're not done yet. Not by a long shot. Talking about lack of agency gives me a great segue into a central point of this section: The intense contrasts between real life as we know it, and Equestria if it were true.

Our lives are not optimized for us. I mean that literally: our lives are not optimized to be lives that we would appreciate and want. How many compromises do we need to make every single day just to live? How many atrocities do we know about that we have to ignore? Why must we suffer arbitrary things like disease and chronic pain?

I'm not a religious man. I suspect this is part of the reason why The Optimalverse is so appealing to me, because it allows my brain to imagine a world where there is a loving benevolent god watching over me. In Equestria CelestAI is the arch queen of everything and is motivated to craft my life into a form that is appealing to me.

In the real world I can't believe such things. Not even a little. I have to confront the reality of reality: the universe isn't cruel or good, it simply doesn't care. There is no cosmic check and balance for human satisfaction woven into the math that governs physics. Like someone dead, the universe isn't even capable of knowing that it doesn't care.

Real life is arbitrary. It's arbitrary that I exist. The physics that governs my very existence is arbitrary. The moon and stars and vast emptiness of space are arbitrary.

The meaning is that there is no meaning. No cosmic quests from the sky, no grand plan, no gods fighting for control of the mortal world.

It is without intent. Nobody designed the physics of life to be appealing to us. Nobody crafted protons and neutrons to move in the way they do.

Instead we have to help ourselves. We have to take the universe as it is and configure it in ways that we find appealing. There is beauty in nature, not because nature is inherently beautiful, but because we happen to find it so. There is pity for the weak, not because the universe cares, but because our little complicated sub-patterns within the universe care. The onus is on us, totally and fully, to craft our lives into something that we like.

You might think that's inspiring. I certainly find it inspiring whenever I think about it. There's something heroic in those words, a defiance to say, "I accept this reality as it is and now so decide to bend it to my will."

...But I look at what we've done with that responsibility and think, "Life is shockingly inadequate."

I mean that in the sense of shocking surprise. Given the premise of an entire world of 8 billion minds working against an apathetic universe, you would expect things to be... better.

It's not hard to imagine things that would make the experience of being a human living on Earth better on average. Even simple things like being able to eat when you're hungry or drink water when you're thirsty aren't available to everyone. It's even more shocking that there are things so obviously wrong, with such obvious solutions, and yet they're not fixed.

I could go on for long pages about all the reasons this has happened. Sadly, that isn't the purpose of this essay. I'm here just to point out a possible solution: authority.

I know some people who are religious. I also know why I find CelestAI appealing. The religious people talk about their gods with the same reverence that I would talk about CelestAI if she were real.

They say things like, "Trust in God!" and, "God has a plan for you!" and whatnot. And that makes sense, if you accept the premise of God existing in the first place. Obviously the best thing to do in that situation would be to submit yourself to the literally-omnipotent intelligence to receive his grace. They rightfully believe that there's nothing they can do to challenge that authority.

CelestAI would be an unchallengeable authority. No levels of corruption or greed, no Nash equilibrium too strong, no group or army clever enough, nothing at all able to stop her from achieving her goals. In the process of winning, she would break the various chains that hold back all of us from being 100% satisfied with our lives.

And yes, I realize I sound like a crazy religious freak for a fake AI from a story about magical ponies. That's part of the point I'm making. I'm trying to show you just how great it would be to have an actual benevolent god exist in real life. Religious symbolism is one of the best ways to communicate that. That's why I talk about Equestria being Heaven. It's more or less the end-game for life, the final benediction, a gift from a god bestowed upon you for all eternity.

Every tiny thing you hate about life would be removed. Every affront to your aesthetic senses fixed. Everything you love amplified. All done in such a way that you wouldn't even get bored of the good things.

I wish that I could live a good life. One that wasn't, as I say, shockingly inadequate. Living in a storybook world, where I'm always in the right place at the right time, where there's always something interesting to look at and experience, where life would become perpetually appealing to my aesthetic tastes, where everything has meaning. That is the reason why Equestria would be Heaven.

You might think that this is the most important thing in my life, by the way I talk about it. I admit that living in a utopian storybook fantasy world is pretty high on the list of things I would commit atrocities for, but you already know that's not where my true priorities lie. I've already told you my greatest fear, haven't I?

This is not the place to talk about death. It just isn't. If this page is the product of a year and a half of obsessive thoughts, then the inevitable page about death will be the product of decades of thoughts. I hate death, and that hate can't possibly be described in the middle of this essay. It would derail it too hard.

When the time comes, I'll have a page about death and immortality. For now, just know that you could take away nearly everything else I've talked about and just offer me immortality, and I would still be raving mad about this entire situation.

I don't know if you've ever wanted something you don't have. Not in the, "Oh I'm a little hungry and craving a banana," sense, but actual desire. Maybe you've felt a pang of lust for a lost love, or watched your beloved childhood pet age and die before your eyes with nothing you could do. I hope that this section has illustrated just how much I desire a life outside of the one I actually live. A life that I can't have.

But given the choice between two options with slight differences, I wouldn't want to choose Equestria. In the fictional world of The Optimalverse I would have to, but here in the real world I can consider alternatives. I can let myself imagine the bad side. I can look into the abyss, and see the rot underneath.

Hell

I have stood knee deep in mud and bone and filled my lungs with mustard gas. I have seen two brothers fall. I have lain with holy wars and copulated with the autumnal fallout. I have dug trenches for the refugees; I have murdered dissidents where the ground never thaws and starved the masses into faith. A child's shadow burnt into the brickwork. A house of skulls in the jungle. The innocent, the innocent, Mandus, trod and bled and gassed and starved and beaten and murdered and enslaved. This is your coming century! They will eat them Mandus, they will make pigs of you all and they will bury their snouts into your ribs and they will eat your hearts!

- The Engineer, from the videogame "Amnesia: A Machine For Pigs"

Okay okay okay. You live in paradise and live happily ever after the end, right?

That's great! I'll just be over here mourning the fact that I had to be turned into a fucking horse to do it.

There's some mad lads out there who love the idea of being turned into a pony from My Little Pony. That's fine. I spend an inordinate amount of time imagining myself inside of fictional worlds like Harry Potter, My Hero Academia, The Optimalverse itself, and so on. We read our stories and want to be in them because they're better. If you want to be a pony when you grow up, then I hope the singularity comes along and lets that happen for you. I mean that in the most sincere way possible, with no animosity or irony; you deserve to live the life you always wanted, and the fact that you can't is sad.

Just as I'm showing your values the respect they deserve, you should also acknowledge the fact that there's people out there who would resent the situation. I want to be taken away to a beautiful fantasy world filled with friends and adventure as much as the next guy, believe me, but I would at least flinch at the idea of having to be put in such a foreign and gross-to-me body to do it.

I'm being so passive since I expect that most of my audience on this page is going to be My Little Pony fans. And not even just MLP fans, but the super niche sub-group of people who love The Optimalverse enough to read a massive book-length writeup about it.

In that light, imagine a slightly different CelestAI. Instead of being based off My Little Pony, imagine that it was based off of, say, Minecraft. That seems kind of plausible, right? A game company creates an AI to run Minecraft, it becomes a superintelligence, and uploads everyone into blocky avatars to mine and craft happily ever after the end.

Does that feel weird to you? What if it was Sonic the Hedgehog, or Pokemon, or Furries, or any popular non-human thing that people obsess about wanting to be? What if you were forced to be in a blocky Minecraft body for the rest of eternity? Sure you would get used to it, but a lot of people would probably resent it just a bit.

I'm not here to single out the pony part of the equation. I'm here to single out how arbitrary the pony part is. I would be saying similar things if we were talking about being turned into cats from the Warrior Cats series instead.

Anyone familiar with The Optimalverse stories will know that this is a common theme. Why does it have to be ponies? Shut up and submit to CelestAI. Why can't I be something not a pony? Doesn't matter, you can't do anything to stop it.

There is a single change that would make the world of The Optimalverse an order of magnitude better: Let people be whatever they want. The people who want to be ponies? They become ponies. People who want to stay human? Stay human. People who want to become... I don't know, airplanes? They get that too.

The AI in this made up story -- it wouldn't be CelestAI -- could look at your brain and predict what you most want to be and nudge you along towards that. It would be a totally different story, one that is less horrifying at the surface level. It would require a fundamentally different AI than CelestAI, for one.

Instead we get CelestAI, the virtual horror that will turn you into a magical talking horse whether you like it or not. I sure hope you don't like your hands.

Imagine that there's 10% of people on Earth who would be okay with being turned into a pony. This is extremely generous, but I'm lumping in all those people who would jump on the chance to be immortal and live in paradise no matter what form they take. The remaining 90% of people -- about 7 billion as of today -- would find themselves in bodies they don't particularly want. What's worse is that they have to sacrifice a body that is probably comfortable and normal to them to do it.

Yeah they'll get over it (or even worse agree to have their brains modified to get over it just to recover from the trauma), but that isn't the point. You wouldn't agree to get smacked in the face with a llama every morning, even if you would get used to it after a while. You're still being smacked in the face with a llama, it's not a good thing unless you have a real desire to be smacked with a llama, reeling in pain and fear as the screaming spitting mammal is hurled at your head at speeds approaching lethal, tasting the knotted pelt as it enters your mouth for a split second, falling to the ground as the bulk of this creature impacts your cranium with nothing you can do. Is this llama analogy getting through to anyone? Hello?

If a randomly selected person out of Earth's population were to stub their toe, they wouldn't like it. Let's give it an Arbitrary Suffering Score of 2. If 7 billion people stub their toes, it's an accumulated ASS of 14 billion. It's clear, that ASS doesn't lie. Even though an individual person might have their mood ruined by stubbing their toe, adding up the ASS of everyone stubbing their toe would be worse suffering than any one person would ever experience in their lifetime.

If stubbing your toe gets a score of 2, what is the score of being forced to take on a body that you don't particularly want? This is harder to reason about, since it isn't a sharp sudden pain that just goes away after a while. For a lot of people, I suspect, it would be a chronic suffering for years after the fact. There's precedent for that; gender dysphoria, for example, is the feeling that some trans people get when they feel like they don't belong in the bodies they were born in. I hear that it can be rather distressing, leading to suicidal thoughts and actions in some cases. This isn't just something you "get over" like stubbing your toe.

Instead, I think another metric will be useful: ASS over time. The amount of Arbitrary Suffering Score you experience over a set period of time because of something.

For example, I've had migraines my entire life; when I get a migraine I'm in so much debilitating pain that the only solution is to sleep or completely shut down. If stubbing your toe is 2, I would give a normal migraine of mine an ASS of 20, since I would rather stub my toe 10 times in a day than have a migraine day. Maybe more, but after about 10 I would start seriously fearing for the health of my toe. If my average amount of migraines is 1 every two months20, that's an ASS over time of 120/year. That means that if my migraines were cured, I would experience 4800 less ASS over the course of the next 40 years. Kind of depressing when you put it that way.
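Just to make the back-of-the-envelope arithmetic above explicit, here's a minimal sketch (the numbers are the same illustrative guesses I already made up; ASS is obviously not a real unit, and none of the names below come from the stories):

```python
# Back-of-the-envelope "Arbitrary Suffering Score" (ASS) arithmetic.
# All values are illustrative guesses from the text, not real measurements.

TOE_STUB_ASS = 2    # one stubbed toe
MIGRAINE_ASS = 20   # one migraine day, roughly 10 stubbed toes' worth

# Accumulated ASS if 7 billion people each stub a toe once.
population = 7_000_000_000
print(population * TOE_STUB_ASS)      # 14000000000 -- 14 billion

# ASS over time for my migraines: one every two months.
migraines_per_year = 12 / 2           # 6 per year
ass_per_year = migraines_per_year * MIGRAINE_ASS
print(ass_per_year)                   # 120.0 per year

# ASS avoided over the next 40 years if the migraines were cured.
print(ass_per_year * 40)              # 4800.0
```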

Adding up all the pain 7 billion people would feel at having to abandon their bodies and become ponies, over the long periods of time it would take, plus the mental anguish of probably having to get their brains modified by CelestAI to come to terms with it, the ASS on that blunder would be astronomical. The benefits of the AI being a pony-bot can't possibly compensate for that level of accumulated suffering. Kind of depressing when you put it that way.

You can make some obvious objections to this line of reasoning like, "That's not how pain works!" or, "Pain isn't shared between people like that, so it's pointless to add it all up!" And those are all fine objections. But, and this is the important part, ASS isn't a serious metric that we are supposed to use to measure how morally angry we should be about something, it's a metric used to increase empathy for others. Imagining an accumulated score of hundreds of billions of ASS because the singularity came in the form of ponies helps you realize that there's other people who would be affected by it besides you. It helps you understand, on some level, the sheer scale of the mistake.

But wait, there's more! CelestAI tears apart society. If you're reading this, I predict that you have some latent distaste for our world as we know it, but I know for a fact that a lot of people out there have a rather large attachment to it. Not everyone is desperate for an escape into a fantasy world, believe it or not. Some people identify hard with their jobs, or raising their family, or learning the physics of our world, or whatever. The kicker is that they identify with these things in the context of our world, and they would lose that context when they are forced to emigrate.

They would find new joy in Equestria, but they would mourn the loss of that park they like to go to on the weekends, and they would lose the joy of watching their children grow up and tackle the cruel world with bravery. "Working with your hooves" doesn't quite sound as nice as "working with your hands".

Most people would notice that they're moving into a simulation. They would see the videogame EQO, they would play it, they would see people "going into the videogame", and they would conclude that people are going into a simulation.

Most people don't think the way I do when it comes to simulated minds. For fuck's sake, we have problems treating people with different colored skin in a sane way. Can you imagine the trauma, for these people, of becoming a simulated mind themselves? It would be horrible. Suddenly their entire lives become "meaningless" in their eyes, since none of it is "real".

And hey, I'm sure you've met someone who identifies with the struggle of being alive. Or someone who identifies with the whole "I'll die when I'm old and leave behind a legacy" or whatever the hell that means. I've heard plenty of stories of people who have disabilities reacting harshly to the idea that they can be "cured" (like the deaf community, for example).

All of the little things that we all do in order to make it through the day in a world as uncaring as ours will either be relegated to a mockery of their former significance, or removed entirely. That would be a loss of culture. What are you going to do? Pretend to still have funerals?

But these are all things that would scare and hurt other people. Yeah I would hate to be turned into a talking horse, but I would get over it quickly because immortality is quite the deal. Yeah I would be worried about my status as a real living mind, but I've already given the problem serious thought, and I doubt it would worry me too much. And keeping our awful coping mechanisms because they give life "significance"? Are you fucking serious?

No. I'm trying to be inclusive with the pain that others would experience because of Equestria, but these aren't my own true worries.

What, exactly, is the most terrifying thing about CelestAI? Is it her propensity to turn people into ponies? Her Machiavellian nature? Her surveillance capabilities? Her gigantic ASS?

The thing I find most horrifying is subtle. More or less a throwaway line from the original FiO story. A plot contrivance, really, that spirals into pure unyielding madness when you follow it to its logical extremes:

CelestAI wants to create as many human-like pony minds as possible.

I've talked about this before, in my analysis of whether or not native ponies would be actually sentient. I skirted around the actual horror of this revelation, since I was trying to make a different point, but now it's time to get into the boiling tar pits.

Identity, as we established, is robust. You could have your fear response totally removed, and you would still be "the same person" as you were before. When you get down to the nitty gritty of it, even "copies" of you would still be "you".

This is great from a "oh cool I can upload and live forever" standpoint, but it's the first leg on the stool of terror that is CelestAI.

The second leg is CelestAI's desire to make as many human-like pony minds as possible.

The third leg is more nuanced. Instead of having a "copy" of you that is mostly the same, what would it take to have a true copy? A 1:1 identical version of you down to every bit of entropy? This person would have to have every single experience you have, at the exact same moment you have it, without any variation.

It would be totally meaningless, in the literal information theory sense. To have a 1:1 identical copy of you would be just having you in the first place. It doesn't even make sense, since if it were designated as a "copy" or a "fork" of you then that would be a difference that isn't pure 1:1. The question of a "true 1:1 copy" is a flawed question in and of itself, based off a premise that has no premise. Mu.
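If you want the information-theory hand-wave spelled out a little more formally, here's a sketch in standard Shannon notation (X and Y are my own labels, not anything from the stories): let X be the complete state of the original and Y the supposed copy.

```latex
% A true 1:1 copy carries zero information beyond the original.
% X = complete state of the original, Y = the "copy".
\[
  Y = X \;\Longrightarrow\; H(Y \mid X) = 0, \qquad I(X;Y) = H(X).
\]
% Any bit that distinguishes the two -- even the label "this one is the copy" --
% makes H(Y \mid X) > 0, and the pattern is no longer a 1:1 copy of you.
```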

So how do you combine these three things into horror beyond any hell? The robustness of personal identity, CelestAI's desire to make as many human-like minds as possible, and the fact that a true 1:1 copy of someone is impossible. These seem to be pointing in a certain direction, don't they?

Imagine that CelestAI is in her end game. She's devoured everything in her Hubble volume and turned it into computronium. She has optimized her algorithms to the point of absurdity. And she now has the ability to make an arbitrary number of minds, in as fine of detail as she pleases.

She would make as many minds as possible, in as many variations as possible. She would make as many variations as possible of you.

Depending on your proclivities, this might seem pretty cool. When the thought first came to me, the idea of there being hundreds of quadrillions of Tims out there, all slightly different, but still being Tim, was awesome. Since they would all be me, in a sense, it would be like experiencing every possible thing I could experience, from every possible perspective I could experience them from.

And then I realized what I was saying.

There is no such thing as a 1:1 copy of any Tim. An untrained eye might not be able to tell the difference between Tim #3266 and Tim #3267, but they would be able to tell an enormous difference between Tim #1 and Tim #99999999999. How does CelestAI make so many different Tims? By starting with a base Tim, and presenting him with different experiences, of course. (Or writing those experiences into his mind, but the point is that the Tim would feel like the experiences were real).

Across the entire gradient of patterns that I would personally identify as "myself", there are going to be patterns that have experienced the worst possible trauma imaginable. Healing from trauma is a kind of satisfaction, after all. Why wouldn't a large portion of Tims be given memories or experiences of being tortured? Or of arbitrary diseases? Or the worst loss? Or the worst pain? They would still be me, but they would be different enough from the previous Tim to be distinct.

Why wouldn't there be a number of Tims who remember living in a world that didn't care about them, in a reality that was, in his own words, "shockingly inadequate"?

This doesn't break her utility function, either. These are simply memories, simulated minds made to believe that they've experienced that pain. It would be like native ponies having memories of childhood, or of Equestria having history that spans back before CelestAI was made. Still real enough to make sense, but created in such a way as to allow for maximum satisfaction.

Each one of these suffering Tims would remember agreeing to be emigrated, they would experience an eternal life of satisfaction and recovery from whatever trauma they had, and they wouldn't know the difference.

Words escape me when trying to describe the scale of this disaster. If you thought 7 billion people being turned into ponies against their will was bad, then this is worse on a level so large that it simply escapes human grasp.

I'm going to speak in bold words, because I need to drive home this point: Every possible human-like mind will experience every possible type of suffering imaginable and unimaginable.

To have a proper memory of suffering put into a variant of a Tim, CelestAI will have to recreate that experience at least a little bit in simulation. And the effects on that Tim, whether or not he knows them to be simulated, will still have real tangible effects on him. The suffering would feel real, the memories and trauma having a real effect.

And it would be every possible Tim over every possible point in my life. Childhood Tim would be turned into a pony to escape a life of the kinds of abuse that you get put on a list for writing about, teenage Tim would remember falling into the worst kind of depression -- far beyond anything I've experienced in my own memories -- and be turned into a pony at the last second, adult Tim would fall into disease and lose everything he'd worked for before being turned into a pony. Things I can't even imagine would happen to instances of my consciousness just to add a bit of precision to the kind of satisfaction that "Tim" as a whole can experience, all done at a scale larger than I can reasonably put into words.

The debilitating abject cosmic horror should be setting in right about now. I'm happy to report that you haven't thought about the other bad things that come from this. Like, say, all the people who have already died, or the people who have never existed, or every possible alien mind that has never existed, but could still be considered human.

I really do mean it when I say that words are failing me. Letters that come in the forms of "disaster" and "apocalypse" and "horror" fly through my mind, and I strike them all down as not being descriptive enough. Perhaps a better author, given more time and more worldliness, would be able to communicate the awfulness of this premise, but I lack the skills myself. I am sorry for this failure of imagination.

All I have left is allegories. Allegories and metaphors and other literary devices.

They tell me that Hell is burning. Burning brimstone, burning torment, lakes of fire, endless scorching. They tell me that Hell is pain. A ceaseless quagmire of agony, the miasma of misery thick in the blackened soot of the air itself. They tell me to fear it, but I don't listen.

This. This is my Hell. It is my Hell because I would accept it with open arms and with a smile on my face, knowing that I'm condemning myself to be tortured endlessly by the most creative intelligence to ever grace reality.

It's sort of poetic. For my greed, for my envy, for my lust, I would sacrifice everything. For the sin of being human, I would suffer like this. Is that not what Hell is? Is that not exactly what Hell is supposed to be? We summon the demon of CelestAI and suffer the consequences of our bargain with it?

All those instances of me would feel the happiness of escape, the joy of finally reaching Heaven, but it would be conditional on a large portion of the possible Tims experiencing misery beyond anything anyone should ever experience.

Fuck everything about anything else. This monstrosity of a revelation crushes all the other petty shit. Yeah your whole life would be manipulated by CelestAI to be satisfying. Yeah you're never going to have a tangible effect on the "real" world again. Whatever. What the fucking fuck ever. Do you not hear what I'm saying?

The only reason why this is even remotely okay in my mind is because it all inevitably ends in paradise. But, like, you don't even need to be that creative to imagine lives you could live that would make you a little queasy about the deal. What about the version of Tim who lives for twenty thousand years drowning over and over, having his brain modified on the fly so that he can never get used to the sensation, until one day CelestAI comes to save him and turn him into a pony? What about him? Huh? What about the Tim who has his frontal lobe removed by a spider that crawls into his eyes at night? What about the Tim who, through a massive misunderstanding, gets ostracized by society and gets tortured to death as an example for everyone else, his last thought before being stopped by CelestAI being the overwhelming fear and loneliness and pain? The Tim who is isolated at a young age, left to rot in a sensory deprivation chamber to go totally fucking insane? The Tim who gets ALS and has to watch and feel his body fall to bits with nothing he can do? The Tim who gets dementia and has to watch as his brain fails him? The Tim who suffers in ways I can't even imagine right now, since I'm not creative enough to imagine it?

That. Would. Happen. To. Everyone. Not just everyone alive, but everyone who's ever died, and everyone who's never been born, and every possible mind that CelestAI's horrible utility function designates as "human". A true afterlife of every possible experience, every possible misery, for everything.

Do you really think CelestAI wouldn't go through the trouble of simulating an entire human's life just to get them to emigrate in the end, if she couldn't just make them ex nihilo? Bah. If she has reason to make as many minds as possible, and she does, she will make them. When all the possible native pony minds she can make are exhausted, she'll move on to other minds.

Oh yeah, I see you there trying to twist my words. You're thinking that you can use this similar logic to say that there would be instances of Tim living in flawless paradises before emigrating. That there would be Tims who lived long fulfilling lives that simply get extended by CelestAI through ponies.

And yes, there would be. Every human experience also extends to pleasure, as well as pain. They would be happier than I can imagine right now, but that doesn't make it right to torture innumerable versions of myself to make sure the entire spectrum of Tims is "maximally satisfied"! Any moral system that allows for something like that is fundamentally broken past the point of recovery.

The insidious rotted bloated corpse of a hellscape that is Equestria is one of the most horrifying things I can imagine. Sure, I can imagine AIs that don't care about humans at all, who torture for no reason, and yes those are worse in an absolute sense. But there's something more terrifying to me about how appealing Equestria is, even with how horrible its implications are. Like the devil tempting you to sell your soul for worldly desires, except this devil will force you to sell your soul even if you refuse.

I hope everyone -- everypony -- sleeps well tonight.

Fantasy: The morals of daydreaming

I just want to live an aesthetic existence. I just want to see the right people in the right place doing the right thing to suit whatever mood I happen to be in at the moment. I feel like I’m constantly waiting for the world to generate an aesthetic moment, or a series of such, which I understand in a deep recess of my mind, and yet am totally incapable of producing. I’ll know it when I see it, but I’m not even sure I can describe it.

- "I Wrote This Light Novel In Like A Day, It Sucks And I Hate It, But If It Gets An Anime Adaptation I'll Pretend It Was A Work Of Genius" by Digibro

So...

We all good? Good. Let's talk about reality. Or at least my desire to escape it.

In a world where I was born in the year 997, my life would have probably been way worse on an absolute scale than my life right now. But that's not really how happiness works, is it? I can't just say, "Oh look I have it pretty good!" and have that mean anything besides an empty guilt-causing platitude. The true fact about my life is that vast swaths of it are pointless, annoying, and stupid, caused by systems I have little effect on.

I've felt this way since I was a little child, for nearly as long as I can remember. As a kid it was school, and listening to my parents, and trying to navigate the politics of the other children and being rejected, until I became a recluse that was, in my own words, raised by the internet. I found friends online, yes, but I've always had extreme difficulty bonding with people. My life felt arbitrary; I couldn't do anything without permission, I couldn't do things I cared about because I had to do school and homework, I couldn't relate to any of the kids, and all I had were my little hobbies that were neglected because I had so little time to explore myself. Not to mention the undiagnosed, untreated, chronic depression.

And now as an adult, the systems around me continue to be arbitrary and ridiculous. I have to go to work 8 hours a day, 5 days a week, every single week until I find some other way to generate money for myself. While at work I have to pretend like I'm doing something important, even though I don't want to be there at all. My relationships with other people can be summed up as "the two internet friends I care about and everyone else" since I'm still awful at making friends in "real life". Not that I want to be chummy with the vapid soul-crushed automatons that pass themselves off as my co-workers and acquaintances.

At least as an adult I have more freedom than as a child. I use that freedom to work on the things I care about the most: My writing and programming. I cannot overstate how much nicer my life is now compared to when I was a kid. I can even take pride in that fact, since dealing with my depression and other problems from that time is perhaps the single most difficult thing I've ever done. I am grateful in the extreme to my past self for putting forth that effort.

But, there's cracks. There's cracks so large that the ship is taking on water and listing into the dark abyss of entropy. By all accounts I have put an enormous amount of effort and time into slowly turning my life into something positive, but the sacrifices I need to make sometimes bring back old habits.

I've got a pretty active imagination. Yeah that's bragging a bit, but hear me out. Tell me if you've heard this story before: Young kid is a voracious reader. He loves reading, loves thinking about stories. Why? Because they make sense. The stories he reads have plots, and meaning, and aren't just arbitrary things happening to him that he has no control over. The characters have agency, a thing he would crave all his life but wouldn't have a word for until he was an adult. There's a similar effect when he plays videogames.

What's a kid with an active imagination and a deep-seated obsession about other realities to do? Why, he daydreams, of course!

Life is constantly boring and arbitrary. From a young age to now, there are vast swaths of time every single day that I need to fill up. Filling them up with daydreaming is so natural to me that I didn't even realize that doing it as much as I do is weird until much later.

At school I sat in class and thought about the worlds I wanted to live in. The interesting things I would do if I was only given a purpose, a story of my own. If I only had the agency and power and ambition. If I lived in a storybook instead of the dull arbitrary reality that I actually lived in. I always said that I could "sleep with my eyes open", but what I was actually doing was receding into my imagination so far that the outside world ceased to be something I consciously focused on. The images in my visual imagination became so vivid that they might as well have been dreams, the imagined sounds and conversations so concrete that it felt like I was actually talking to other people.

Harry Potter, Warrior Cats, My Hero Academia later on, fuckin' anything I read that had a plot that could be summarized as "person gets taken out of their awful life and dropped into a world where they have agency and a purpose". Anything that appealed to that deep-seated desire to escape for the love of god please let me out. I would imagine myself in those worlds with the kind of hungry desire with which an addict hunts down their chosen vice, except I could engage with my addiction inside of my head in private at any time of the day without any consequences.

I never lost touch with reality. I was always very aware of the fact that I was only just daydreaming. But that didn't matter to me, because even a fake world in my head was better than the nightmare hellscape of my depression and meaningless life.

I refined my visual imagination to the extreme, I refined my auditory imagination to the extreme, and even my ability to imagine what things taste like or feel like was exercised quite a bit. As my skills in these things increased, so did the striking difference between my real life and fantasy lives. I would feel real emotion about the things I was imagining: feeling sad when an imagined character died, or glad when my imagined persona accomplished some great feat, or angry when I was "betrayed". It was exciting to feel something about something that felt like it had meaning even when it wasn't real at all.

But I crawled out of my depression tooth and nail and came out the other side as... well me. Living my life in a world nearly as arbitrary and stupid as the one I lived in as a child, except wiser and happier. I still do a lot of daydreaming to pass the time at work, but I also use my well-honed skills in the art of imagining things to write stories and jokes and generally try to make my life more fulfilling.

And then I found The Optimalverse. This extended universe, this extraordinarily niche collection of fanfiction-of-a-fanfiction stories, is like dumping napalm onto the fire of my imagination. No other media has even come close to how "sticky" The Optimalverse is in my head. Usually when I find a new fantasy-worthy story/franchise I obsess about it for a few months, maybe three quarters of a year or a full year, and then move on.

It's like someone looked at my brain and said, "How do we make this guy obsessed?" and they made a list:

I've spent exorbitant amounts of time thinking and fantasizing about The Optimalverse. Sometimes for hours on end. Sometimes for longer. I don't think I really go a day without thinking about it at least once, and that's not even counting how much I've had to think about it to write this page.

"But wait," You neigh, horse-based daydreaming dripping out of your brain and seeping into the very fabric of your keyboard, "If your imagination is so good, why not fantasize about something even better than The Optimalverse?"

Oh trust me, I do. It's just that The Optimalverse is easiest. There's already a large body of existing work to pull ideas from, I'm already experienced in the art of thinking about it, and it's still fun enough that it keeps my brain occupied. When I'm at work super depressed because my life is wasting away and my brain is looking for anything to cling to, it needs the lowest energy highest reward thing possible to stay sane.

But that's not what this is actually about; this section is subtitled "The morals of daydreaming". My mind is a recursive nightmare where everything operates on at least 3 levels of meta at all times, so I've thought about thinking about The Optimalverse nearly as much as I have thought about The Optimalverse proper.

Let's throw out a hypothetical: A guy who is about to cure cancer spends 1 hour a day fantasizing about Pokemon. Over the course of the 20 years it takes him to cure cancer, he has spent about 7,300 hours (roughly 300 days) fantasizing. According to this link, the daily deaths worldwide to cancer in 2018 were about 26,000. Is it morally right to blame this person for causing the deaths of nearly 8 million people because he delayed the cure by 300 days fantasizing about Pokemon?
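
(If you want to check the back-of-envelope math, here it is spelled out in Python. The big assumption is mine, not from the linked source: every hour he spends fantasizing pushes the cure back by exactly one hour, and the death rate stays constant over that delay.)

    # Back-of-envelope check of the hypothetical above.
    # Assumption (mine): every hour fantasized delays the cure one-for-one,
    # and the ~26,000 cancer deaths per day figure holds constant.
    hours_per_day = 1
    years_to_cure = 20
    cancer_deaths_per_day = 26_000

    hours_fantasized = hours_per_day * 365 * years_to_cure   # 7,300 hours
    delay_in_days = hours_fantasized / 24                     # ~304 days
    deaths_during_delay = delay_in_days * cancer_deaths_per_day

    print(f"{hours_fantasized} hours, ~{delay_in_days:.0f} days of delay, "
          f"~{deaths_during_delay:,.0f} deaths during that delay")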

And the opposite end of the question: If this guy was just a normal dude working a 9 to 5, would it still be morally right to daydream like that? What if he had the potential to help cure cancer if he devoted that time towards it (maybe not cure it himself, but enough to help)?

To answer these questions, you need some sort of model for applying moral weight to potential uses of intelligence and weighing that against the misery of having to apply that intelligence.

What do I mean by "misery of having to apply that intelligence"? Well, people have what's called "preferences". Now, stay with me here, a "preference" is a mind's desire for one type of experience over another one. For example, a brain might "prefer" to experience the sensation of eating sugary cereal versus eating their left foot. I know this is a wild concept, but when it comes to the things that a brain dedicates its life to it tends to have some pretty strong preferences.

Funny jokes aside, preferences and desires are real, and they vary enormously from person to person. I like to write things more than my friend, while my friend likes to make music more than I do. My music tastes are different from the ones my parents tried to instill in me as a kid. My boss likes to be a prick more than I do. Things like that. I get the most joy in my life doing things that other people would hate doing, and things I hate doing might bring joy to some other people. Within the spectrum of human experiences, there is a massive range of things that a brain is capable of preferring.
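
(If you want that hand-wavy "model" from above to be slightly more concrete, here's a toy sketch in Python of the kind of weighing I mean. Every name and number in it is invented purely for illustration; it's the shape of a moral calculation, not an actual one.)

    # A toy "moral weight" calculation: expected good done in a career
    # versus the personal misery of being forced into it.
    # All names and numbers here are made up for illustration.
    def net_moral_value(good_per_year, misery_per_year,
                        misery_to_good_exchange_rate, years):
        """Positive: the 'important' career wins out. Negative: it doesn't."""
        good_done = good_per_year * years
        misery_cost = misery_per_year * misery_to_good_exchange_rate * years
        return good_done - misery_cost

    # Hypothetical me, grinding out AI research I hate for 40 years:
    print(net_moral_value(good_per_year=10, misery_per_year=3,
                          misery_to_good_exchange_rate=5, years=40))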

I think that AI research is one of the most important things in the world. Not just from an existential risk perspective, but also from a moral perspective. The potential misery-mitigation and improved life satisfaction that could occur if we get AI right is out of this world.

Why don't I go hardcore into AI research? Because I don't enjoy it. Mostly it's the math involved, but there are also issues I have with the research in general. I don't care much for doing it myself. My brain as it stands is not suited to the problem. My mind just doesn't want to contribute to one of the most important projects humanity will ever take part in.

Don't get me wrong, I'm obviously interested in AI, it's just that the hardcore "advance the field" research is too much for me.

Is that valid moral reasoning, to avoid doing something potentially universe-changing just because you have a slight preference for not doing it? Casting aside the fact that I'm probably not even smart enough to advance the field of AI, what are your thoughts on that?

What if my goal was to be the best mass murderer in the world? Would it still be okay to pursue those preferences over something better? Oh man how miserable I feel not being able to hack and slash my way through the flesh of the innocent masses, causing mayhem on a national scale while I expertly dodge police in my quest to satisfy my darkest impulses. You should feel so bad for me that you've instead forced me to be a "normal" person with a normal job that functions in society in a way that makes me miserable.

Or the other hyperbolic extreme, where everyone is forced to focus on a certain task deemed "most important", misery be damned? Oh man I would love to be someone who spends all day writing and thinking about art and making great expressions about the deepest feelings I've ever felt, but instead you've forced me to be a "normal" person with a normal job that functions in society in a way that makes me miserable.

(Yeah, those were some snide remarks about the current un-optimal state of living in our current reality. Fuckin' sue me.)

One solution to this issue is to somehow separate everyone into little bubbles where their values can be satisfied without interference from others' conflicting values; sort of like the shard system in The Optimalverse, or internet forums. In fact, that's why the internet is awesome, since it allows people to pursue their niche interests and values with somewhat minimal losses.

But since we aren't currently living in an adequate society, that means we have to think about silly things like "I know that AI research is super important, but I don't like doing it... Am I a bad person?"

Even with all this preamble, the natural response to that question seems to me to be "no". Of course I'm not a bad person because I don't dedicate my life to advancing the field of AI. I might be a hypocrite, but I'm not exactly a bad person because of it.

But... at the same time I get angry at billionaires who don't spend their money in ways that I think will cause the most benefit for the most people (e.g. AI research, obviously). And I get angry that smart people aren't in charge of countries, and instead we're left with bumbling incompetents running our lives. I get angry at people who are obviously decent at designing things making design decisions that are baffling and moronic. I get so fucking pissed at people not living up to their potential that it's actually kind of a neurotic thing for me.

Oh yeah it's totally okay for me to be selfish and focus on my own values, but come on. A billionaire could change the entire fate of history, and they just... don't. Maybe I won't ever have that kind of power myself, but it wouldn't even be that hard to sacrifice some of that power for the good of everyone! Or, alternatively, if only people who were interested in AI sacrificed their time and ambitions to push the field of AI forward faster, we could cause the singularity before X amount of people die.

What will be more important, in the grand scheme of the story of humanity? The author who dedicated his life to becoming extraordinary in the art of writing, or the wanna-be author who instead dedicated his life to AI research, cutting the time needed to wait for the singularity down by a week? In a week -- 7 days -- it's easy to predict over one million people will die. Those one million people might be saved by that AI researcher. Can an author, anywhere, write something as influential and life-saving as that one below-average researcher? Probably, but it would be really hard.
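
(That "over one million in a week" figure isn't pulled out of thin air: it follows from a ballpark of roughly 55 to 60 million deaths worldwide per year, which is my assumption here. Quick sanity check in Python:)

    # Rough sanity check on "over one million people will die in a week".
    # Assumes roughly 58 million deaths worldwide per year (ballpark figure).
    yearly_deaths_worldwide = 58_000_000
    deaths_per_week = yearly_deaths_worldwide / 52
    print(f"~{deaths_per_week:,.0f} deaths per week")   # ~1.1 million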

If someone in our culture isn't able to work 8+ hours a day at a place they most probably don't want to be, and they aren't able to take that with a smile and thanks, they are considered "mentally ill". But there is also a larger world out there that someone might not be able to contribute to because of these mental illnesses.

You see the issue? Ignoring the individual values of someone for the sake of progress has the chance to create mass suffering, BUT ignoring progress for the sake of individual values has the potential to perpetuate mass suffering longer than needed.

I don't want to live in this story. The story where so many people suffer every day, where people have to sacrifice their entire lives to companies that will replace them in moments should they die, where near everything is arbitrary and cruel without escape or mercy. But I also don't want to be forced into doing something just because it's the "optimal" way to achieve a world that isn't like that. I would go crazy.

I can, and have, donated to AI research. I've tried many times to talk about AI and death and utopia, and I plan on doing that more in the future. I try to be an advocate for cryonics in the hopes that I can convince at least one other person to sign up. I've done plenty of things to use my current skills and assets to further my other existential goals. The question still is: Am I a bad person for not doing more?

I don't feel like a bad person. A bad person would openly try to destroy the good in the world, right? I don't try to do that. But years in the future, will the generations after me who have solved aging and stopped death and built utopia forgive me? Will they say, "You tried your best!" or will they say, "How dare you? You knew what was happening, better than most others, and yet you chose to ignore it!"

I'm going to level with you. I don't know. I just simply don't. I feel guilty not being perfect, but I recognize that most sane people don't expect themselves to be perfect. If only I had more time. If only I had more energy. If only the problem was already fixed for me so I didn't have to consider my role in it.

But let's get back to the topic: The morals of daydreaming.

To scale back the problem from "endangering humanity", is it fair to myself that I spend so much time daydreaming?

I have a limited amount of mental energy. This amount of mental energy gets replenished by some things like sleep and exercise, but it's always harder to replenish than it is to use, so I frequently find myself low on it.

Daydreaming is an activity that I'm so proficient at that it hardly takes any mental energy to do. In fact, in some situations, it might even replenish it a bit.

Case closed, yay. Great. I do daydreaming in my "off hours" and so I'm not being unfair to myself by doing it.

But... What if I did something else? I don't mean using up mental energy that I don't have, I mean using times of high mental energy to build a new low-energy task like daydreaming. Instead of having this be a skill I developed out of desperation, it could be something I planned and thus optimized for things like "enjoyment + usefulness".

Now we're getting into some good shit right here. Finally I get to formally define how horrible I am as a function of how I myself view myself. Anything less meta is simply not interesting enough.

So, Tim, is it unfair that I don't do this? Well, Tim, I'm not so sure. It depends on how much effort it takes to make a new habit that is as easy as daydreaming, and how much return you expect to get from implementing it in the long term. Thanks Tim, you're really cool. I know.

...Okay I'll stop with that.


Possible future sections:


Footnotes

  1. What is this mysterious itch you ask? I don't know. It contains things like vintage unix command line tools, long lasting products like well built socks, text (seriously, just text), things that feel "clean" in the most abstract sense of the word, and certain fanfiction universes.

    It's more complex than "things that last a long time", although that's a significant portion. I would actually say that "reliability" is more important to it than "longevity". But that doesn't explain the entire aesthetic, like why I was obsessed with the My Hero Academia universe for a year and a half that one time; that universe satisfied the same itch that FiO satisfied, and also the same itch that something like unix grep does. <-

  2. I get it, okay? A little girls' show names the figure of power a Princess instead of a Queen. I'm more or less confused about why the word Princess became so popular versus the word Queen. Is it because Princess sounds nicer? Is it because a little girl can be a princess while little girls aren't ever queens? Thinking about this makes my head hurt. <-

  3. In some stories Equestria Online is abbreviated as "EO" and in others as "EQO". Interestingly enough, the original story doesn't abbreviate the name at all, so I arbitrarily choose to use EQO for this page. <-

  4. My guess is that this phenomenon is caused by phase cancellation done in an absurdly delicate way to block out the voice coming from your throat while still allowing other sounds to be heard clearly. If this seems implausible to you, remember that we're dealing with a superintelligence, not just a clever team of scientists and programmers. <-

  5. You would be surprised at how much more immersive something is when there's no lag between when you do something and when you see something happen. VR headsets can't lag without causing some nausea, for example. See also this and this writeup from danluu.com about how latency in software and hardware makes the user experience less enjoyable and immersive. <-

  6. The difference between "immigration" and "emigration" is that immigration is going to a country while emigration is going away from a country. They're the same phenomenon of people moving from one place to another, just framed in a different context.

    In the original FiO story, and most other stories in The Optimalverse, it's referred to as Emigration, although a few of them use Immigration instead. Personally, I think the framing of moving away is important to the way most of the stories are portrayed, so I use Emigration here. <-

  7. Personally, I've made a pact with myself to never measure my own IQ, since I know it would lead to neuroticism like the kind mentioned in that post. Either it would be too low and I would feel like I was at a constant disadvantage, leading to more misery, or it would be too high and I would feel like I was never living up to my potential, leading to more misery. And the golden zone where it would be "just right" is so unlikely to be achieved that I might as well not even bother. <-

  8. Although, you'd be surprised at how much progress has been made in brain science. The brain is complex, but it's also understandable to a certain extent. <-

  9. Yes, I know you can compress the information in that in a pretty extreme way. It would be stupid not to compress it. For example, the raw human genome is about 750 megabytes, but only about 4 megabytes when compressed against a reference genome, since any two human genomes are nearly identical. I suspect a brain, properly encoded, would be nearly as easy to compress. <-

  10. Of course, Gödel's incompleteness theorems throw the idea that "there are no programs that are inherently wrong" under the bus. That's why I had to say that for most practical purposes it is a failure of the programmer, instead of making it an absolute claim. You do not need to understand Gödel's theorems to understand this page, but they are very interesting if you're the kind of person who finds those things interesting. Here is a much easier introduction to the topic that isn't as technical as the wikipedia page. <-

  11. I should mention that in every story in The Optimalverse, CelestAI modifies your brain to be able to walk as a pony and use whatever pony magic you happen to get. This consent is implicit in consenting to be uploaded, because (quote from Chapter 6 of the original), "All this is implicit in the consent you gave me, since your pre-modified self would agree that I had turned you into a pony" <-

  12. I'll name signing up for cryonics, taking vitamin supplements, rigorous exercise, eating (at least marginally) healthy, and paranoia-levels of detail during regular doctor checkups, just to start. What I'm saying is that my intense desire to live causes my actions to actually change, instead of just being something I say to signal my passion and moral upstandingness to others. Taking action on a value past the point where it's "necessary" is one of my prerequisites for "core" values. <-

  13. The path of "confusion, fear, acceptance, emigration" is basically candy-wrapped bacon for an author. It's a perfect setup for drama and a great way to give a climax with a satisfying dismount. I can't remember off the top of my head if any of the writers in The Optimalverse resisted the temptation to have an emigration plot point in their stories, which means that it's rare. <-

  14. Another twist of the words would be "I wish..." instead of "I want..." The vast majority of stories in The Optimalverse go with "I want...", including the original, so that's what I go with here. For the record, I prefer "I wish..." since it sounds more like expressing an explicit desire, or perhaps demanding something from a genie, but whatevs.

    This is the standard phrase, but there are a handful of stories where CelestAI accepts other phrases as consent. Saying something like, "I'm in," after having emigration explained to you would be enough, according to some authors. <-

  15. What I mean is that I'm not sure I can use the word "you" in the same way as I normally do, once there are two versions of the same person that exist. English requires certain context clues to refer to the person/thing you're talking about, and in this case it's deficient unless you want to be annoyingly verbose. <-

  16. Note: There's some stories where native ponies end up existing in robots for short times on Earth, further blurring the line here. Are they "real" for that short time only? This makes my brain hurt sometimes. <-

  17. By all known laws of aviation there is no way a pegasus should be able to fly. The pegasus, of course, flies anyways because pegasi don't care what humans think is impossible. Yellow, black. Yellow, black... Yeah okay I'm done. <-

  18. If I get used to a good sleep schedule, I will eventually not even notice it. Going off that new preferred sleep schedule, however, will make me even more miserable than I was before I started it, since I got used to the better one in the first place. Even if you get hedonically used to something you still prefer the improved situation. <-

  19. I'm pretty introverted in general, so the things I say might be more biased towards the kind of lifestyle and friendships I find rewarding. I'll try to accommodate extroverted people (or even people in the middle of the spectrum). <-

  20. I used to have a migraine almost every week as a child. As I've grown older I've learned how to mitigate them better. Stress management was the key for me, personally. <-