On the carpet of their living room, Elle slouched between Betty’s legs, head leaning on her stomach. Since it was a holiday, she was stuck in her house with nothing to do.
She lazily flipped through the shows on the living room TV using her smartphone, moving on to the live programs after finding nothing interesting in the video-on-demand service.
Elle slowed her channel surfing when she reached the group of channels related to science. She had always been interested in the scientific and technological advancement of humanity, ever since she awakened in this world.
*Bang* *kachak* [Look at that.] [Ahaha, the bullet completely obliterated the ballistics gel! Let’s watch the slowmo.]
*Swipe*
[Antique? This is pretty much a relic. How much would you pay for this?]
[I’d say a hundred grand, that’s if our appraiser confirms it’s a genuine brand smart watch from the year twenty-]
*Swipe*
[Quantum data and power transmission with zero latency across the globe was not always practically usable and commercially available. Like most inventions, it started as a ridiculous concept that nobody-]
*Swipe*
[How did Fudai Corp come on top against countless competitors when it comes to full dive VR and AI?]
[Let’s start with the AI part.]
[Hey, that’s the Fudai Corp guy.] Betty
[Hmm?] Elle
[Fudai is the company behind Quaniwaz.] Betty
[And that is?] Elle
[You know, the dev and publisher of DWO.] Betty
Elle stopped changing the channel, which now showed an interview with Terry Smith, the CEO of the company behind DWO.
[During the boom of AI, most companies focused on upgrading their AI. Processing speed, responsiveness, capabilities. They tried to make it as close to human intelligence as possible. Well, that didn’t end well, with a lot of them going rogue, getting hacked, or even psychologically manipulated. It didn’t help that there were even “AI rights” activists.] Terry
[I was still a kid back then. I remember there was news about some people getting locked into their own houses by their AI controlled home security.] Interviewer
[“For their own safety”, according to the AI. It was a real *bleep*show. I thought the classic movie “I am Robot” by Willie Smithens¹ was gonna come true.] Terry
¹ I, Robot, starring Will Smith
[My grandpa showed me that movie at that time. He said it was a classic even in their time.] Interviewer
[So what did Fudai do differently at that time?] Interviewer
[We focused on setting rules for AI, and on how to enforce them. “Precepts” is what we call them. We made sure they were as complete as they could be before we released our own AI. We received quite a lot of backlash from those silly AI rights activists, boycotting our products and such, but we persevered. In the end, our AI became what people preferred. Slower, less human-like at the time, but reliable.] Terry
[There was also a government intervention, right?] Interviewer
[Yup. The government accepted our precepts as the default for all AI, and forced all AI companies to use it. Of course, they have to pay us to use the precepts we patented, haha!] Terry
[Tell me more about these “precepts”. Surely, those other companies applied rules to their AI, like “don’t harm humans”. What makes your precepts better?] Interviewer
[Well, I’m not gonna get all technical with you here, there’s just too much to explain. To put it simply, we polished the rules to the best of our ability. No flaws, no loopholes, whatsoever. The rules, when interpreted into human words, would fill an A4 book five feet thick with letters barely a centimeter tall. Then we made enforcer AIs to make sure these rules are being followed. The combination of these rules and the enforcer AIs is what we now call the precepts.] Terry
[Five feet thick.. What a nightmare it must be to update.] Interviewer
[We only ever updated it twice since the release. It wasn’t updated because of flaws, mind you. There were just new technological advancements that couldn’t be accommodated by the old precepts.] Terry
[How about the “enforcer AI”, then? How does it stick to the precepts itself, and how exactly does it enforce them?] Interviewer
[Let me answer with a little trivia. Did you know that what people see as a single entity of AI is actually a group of AIs working together?] Terry
[For real?? Let me guess, many of those are enforcers?] Interviewer
[You catch on quick. And those enforcers also enforce the precepts on each other.] Terry
[This brings us to another thing about enforcers. You know, enforcer AI also acts as an antivirus and hacking protection.] Terry
[I was just about to ask about hacking. There were a lot of hacking incidents before. How do enforcer AIs prevent that?] Interviewer
[Hackers and malware infections cause the AI to take certain actions, whether it’s sending money or information to the hacker or whatever. Such actions are violations of the precepts in themselves. The enforcers will then kill the infected or hacked AI, and update themselves to prevent reinfection or hacking by the same method. Same goes for regular precept violations unrelated to malware and hacking. Kill the violator, and patch the cause.] Terry
[Kill?] Interviewer
[Yeah. A new one will replace it, so it’s fine. This all happens under the hood and in split seconds, so no one will even notice.] Terry
[I see, I see. Then, what’s stopping hackers from hacking all the AI at once?] Interviewer
[Simply not possible. Not in the near future, anyway. Maybe it will be my great grandkids who deal with attacks like those. We simply do not have the technology yet. A group of AIs in a single AI entity has internal communication speeds of terabytes per second. A hacker would have to overcome that, multiplied by the number of AIs, to hack them all simultaneously.] Terry
[I think our current highest transmission speed is through quantum data transmission at 64 terabytes per second. Not even with that?] Interviewer
[That would be enough for three AIs simultaneously. A single AI entity has much much more than that, though.] Terry
[So all in all, the precepts you patented are what gave Fudai an edge over the rest when it came to AI.] Interviewer
[Pretty much. The gap was already too big when the patent protection expired.] Terry
[Then what about full dive tech?] Interviewer
[Simple, really. It was all with the help of our highly advanced AI. We also ran a lot of projects that drew heavy investment and public interest, which led to further development of our full dive tech. The investors had high expectations after the whole thing with AI.] Terry
[Is Project Isekai one of those? I’m a big fan of that project!] Interviewer
[Indeed. It’s the most successful one so far. It’s also my favourite one.] Terry
[Is Fudai still running this project?] Interviewer
[Fortunately, or perhaps unfortunately, yes. There’s never a shortage of terminally ill kids. We’re also still getting positive results from the project, whether it’s technological advancement or public opinion. Not to say we’re only doing it for the results or profit. It’s a bittersweet thing to see a child pass away with the expression of an old person satisfied with a fulfilled life. If this project ever becomes a loss to the company, I’ll try my best to keep it going.] Terry
[A noble thing. Can you give us an example of how a candidate family fared in the project?] Interviewer
[The one I liked the most was a certain family of four. The kid wanted to live a life in the usual medieval fantasy world of swords and magic that you can commonly find in books. The boy got to live to the age of sixty in that virtual world, with visits from his parents and little sister for one virtual day every virtual year. The family showed me an album of family photos taken on each yearly visit. While the parents and the little sister stayed the same, the boy grew up in every photo. From a child to a teen, a young adult, an adult with a wife and kid, a middle-aged man with three grown-up kids, and finally an old man with grandkids.] Terry
[Amazing what full dive VR can do. All that happened for how long in the real world?] Interviewer
[A single night of sleep.] Terry
[Amazing.. You stretched, what, eight hours of real world time into sixty years of a whole virtual world!] Interviewer
[Actually, four. We predicted that the system wouldn't overheat even if we tried to stretch eight hours into a hundred years, but the boy “died of old age” at sixty, after fifty years in the virtual world.] Terry
[I’m at a loss for words..] Interviewer
[Well, that was from the early days of the project. Right now, we can even offer candidates a life as a long lived elf in a single night, if we’re still within a fantasy world setting. Although, it’s still not close to our current ultimate goal.] Terry
[The thought of stretching that further terrifies me, but for the sake of the audience, what is your, or Fudai’s ultimate goal, exactly?] Interviewer
[I’m glad you asked! Right now, full dive VR is already becoming commonplace. Even lower middle class families have them, though not one for each member of the family, but we’re getting there. Slowly, but surely, the virtual world is becoming a part of humanity. Our goal is just that, to create a complete and stable virtual world that can act as humanity’s main life during their sleep in the real world. Project Isekai also acts as a stress test on our systems. The more the system can stretch time in the VR world, the more people it can accommodate simultaneously in a regular timeflow.] Terry
[Count me in on your second world!] Interviewer
[Sure, just sign up and log in to Dream World Online.] Terry
[Isn’t that the popular VR game by Quaniwaz? Wait, you don’t mean..] Interviewer
[That’s right. DWO. It’s more than just a game, it’s actually a beta of humanity’s second world I was talking about, specifically the world outside the first continent of that game.] Terry
[D-darn, you’re actually already integrating humanity into your world.. Coincidentally, my son and I are already in the game. We’re still pretty far from leaving the first continent, though.] Interviewer
[Haha, well good luck.] Terry
[Going back to the boy who was a candidate of Project Isekai, what happened to his world and his virtual family after his death?] Interviewer
[Time has stopped in that world, and we archived it. As for the virtual family, they’re living happily as NPCs in DWO. They get frequent visits from the boy’s sister, who’s now an adult herself.] Terry
[That’s wonderful! It really feels like the boy lived a complete life, not just to himself but to his family as well.] Interviewer
[So you copied or moved the boy’s virtual family to DWO. Has Fudai ever considered installing them to real life androids?] Interviewer
[I had a feeling you’d ask that next. The boy’s real life family certainly did as well. The answer is no. Putting them in the real world would put them under heavy restrictions from the precepts. They’re still AI, after all. It’s best to keep them within a virtual world, where they can be as human as possible. Some candidate families strongly requested it despite that, but we refused. We at Fudai Corporation wish to consider the virtual children of humanity as members of humanity as well, but with how advanced AI has become today, the removal of precepts is a risk to humanity we can’t ignore. At worst, we might get something akin to “The Matrices” by Kenny Reece² or “Termination” by Harold Unterlangenegger³.] Terry
² The Matrix, starring Keanu Reeves
³ Terminator, starring Arnold Schwarzenegger
[Well, that sucks, but the risks are quite terrifying. I’d refuse as well.] Interviewer
[What other projects did Fudai have, aside from Isekai and second world?] Interviewer
[There’s an interesting one that failed, but had an unexpected positive result. Project Eternal.] Terry
[I can’t say I’ve heard of it, but that’s quite a name. What’s it about?] Interviewer
[Well, we tried to make a digital copy of a person’s memory, and put that memory into an AI. We wanted to clone a person into a virtual world, and make them live forever there. Immortality, basically.] Terry
[That’s..quite controversial, don’t you think?] Interviewer
[Yeah, well, the investors, particularly the older ones, were quite forceful in pushing through with it. They were quite upset when it failed, haha.] Terry
[But how did it fail?] Interviewer
[So we made a copy of a volunteer’s memories, then had him act out a set of simulated scenarios. The scenarios were as simple as walking through a park, commuting, having a meal. Normal stuff. We then had an AI loaded with his memories go through the exact same scenarios. The AI only managed to match everything the real person did at the beginning, up to about twenty percent of the simulation. After that, subtle differences appeared.] Terry
[Maybe the precepts influenced the AI?] Interviewer
[We considered that, so we set up a completely isolated system where we could safely run the simulation with an AI that had no precepts. With the government’s approval, of course. It still wasn’t successful, no matter what version of AI we used. What’s worse, the most successful result we got was still easily distinguishable to the volunteer’s wife and parents when we had them watch a side-by-side.] Terry
[Too bad for the investors, no immortality for them. I guess the best they can do is to live a few hundred years in their own virtual world.] Interviewer
[You mentioned an unexpected positive result for Project Eternal, what was it?] Interviewer
[When we were just about to pack up and archive Project Eternal, one of our employees, chatting with a colleague, said, “Hey, what do you think would happen if we put clones of a genius in one room?”] Terry
[That does sound interesting. So what did happen?] Interviewer
[The employee got promoted, is what happened, haha. So we looked for genius mathematicians, doctors, inventors, authors, artists, and such, and told them that any creations or discoveries made by AI versions of themselves would be ninety percent theirs, and ten percent Fudai’s. The result was a great success. The minor differences between the clones let them think slightly differently from one another and hold varying thoughts and opinions. It sometimes led to heated debates, even fights, but in the end, they made in a week the kinds of discoveries or creations that the real person would’ve taken months to produce alone.] Terry
[Since we’re having all this talk about clones, how about copying the memories into a biological clone, then? Has Fudai considered that?] Interviewer
[The brain of the target must perfectly match the source of the memory for a complete biological memory upload to be successful, and we’re still far from cloning the highly complex human brain, so no, there won’t be a “Sixth Week”⁴ incident anytime soon.] Terry
⁴ The 6th Day, also starring Arnold Schwarzenegger
[I noticed you’ve been referencing a lot of classic films here, ones much older than my grandparents themselves.] Interviewer
[Well, like you, my grandparents also made me watch classic movies when I was a kid. They must’ve been big classic movie fans, haha.] Terry
[Ahaha, you and I both!] Interviewer
[Let’s talk about you. You know, people call you “The modern Billy Grates”⁵. How do you feel about that?] Interviewer
⁵ Bill Gates
[To be honest, it’s a lot of pressure for-] Terry
.
.
.
When the topic moved on to the personal life of the CEO, Betty noticed Elle was already dozing off. She gently ran her fingers through Elle’s hair, then used her own smartphone to bring the volume of the TV down to inaudible levels, slowly, so as not to wake Elle with a sudden silence.
Thanks for the chapter!
It's honestly sort of wild how many different scenarios regarding AI humanity has thought up and put to film. We really are obsessed with it.
Tftc! That was a pretty fun and interesting read
[We focused on setting rules on AI, and how to enforce it. “Precepts” is what we call it. We made sure it was as complete as it could be before we released our own AI. We received quite a lot of backlash from those silly AI rights activists, boycotting our products and such, but we persevered. In the end, our AI became what people preferred. Slower, less human-like at that time, but reliable.] Terry
I think Elle could mess up the Precepts in interesting ways.
[Let me answer with a little trivia. Did you know that what people see as a single entity of AI is actually a group of AIs working together?] Terry
System entropy!
[I’m glad you asked! Right now, full dive VR is already becoming commonplace. Even lower middle class families have them, though not one for each member of the family, but we’re getting there. Slowly, but surely, the virtual world is becoming a part of humanity. Our goal is just that, to create a complete and stable virtual world that can act as humanity’s main life during their sleep in the real world. Project Isekai also acts as a stress test on our systems. The more the system can stretch time in the VR world, the more people it can accommodate simultaneously in a regular timeflow.] Terry
I'm getting Lain vibes.
[That’s right. DWO. It’s more than just a game, it’s actually a beta of humanity’s second world I was talking about, specifically the world outside the first continent of that game.] Terry
That's going to be interesting.
I never understood the "upload your brain to live forever" trope. That can never work. Even if the upload is perfect, you still failed to achieve the end goal since YOU will still die. You just made a very convincing fake to take your place. Creating an immortal copy of yourself is the LAST thing a narcissistic billionaire would want to do. Nobody would fund it.
It's extremely rare to see a story where the "use technology to gain immortality" trope actually proposes a method that could theoretically achieve the end goal.
The way I see it, the only method that would work uses both AI and cybernetics. Basically, you need to have a full bidirectional connection with an artificial brain that will learn to "be you" through literally "being part of you" for the rest of your human life. It would mostly start off as completely uninitialized space for your human brain to use to create new neural pathways for memory storage, skills, etc. as if it was just much more brain to work with.
As your meat brain offloads more and more work to the mechanical components over several years' time, the line between "you" and "not you" would blur until eventually the 1% of "you" still housed in your skull can be cut off without significant drawbacks, since what's left there is no longer independently functional anyway. Everything was either seamlessly transferred or replaced.
Of course that introduces a whole "Ship of Theseus" problem that someone can use to argue that it's not really you anymore, but at least there's no copy and there's a clear continuation of a single consciousness throughout the whole process. It's just one person gradually becoming something inhuman.
The secret to this, like most failed rich person schemes, is that the actual result of this sort of thing doesn't matter. Like, do you really think those people being cryogenically frozen will have their funds reserved in the truly long term? No, at some point a company will go under, the laws will change, or someone will commit some fraud and since the target is a "dead" person things won't go anywhere.
No, what is important is the appearance. The idea and the marketing. You don't sell this sort of "immortality" by pointing out the fact that the flesh-you still dies. You use grandstanding words about how their mind will live on. Though it certainly helps that, in general, stories like to attach some sort of process to the "transfer" that kills the original body. This is even reasonable on some level, as reading the data you need either requires godly scanners or literally taking the brain apart atom by atom, or at least neuron by neuron. And of course you advertise that as proof that it really is transferring you, and that a person is never in two places at once. Hell, a lot of full dive VR stories that include stuff like this even include some fantasy elements, or at least the concept of a soul. That might even end up being the difference in this story: the reason the copies didn't act the same is because they didn't have the original's soul.
@Akhier I guess I just can't understand the thought processes of those theoretical billionaires. If you're savvy enough to become a billionaire, then you should be smart enough to not gamble with your own life based on nothing but hype.
Even with the "transfer" processes you described, without there being any definitive proof of a soul being transferred, I'd still chalk them up to being very expensive ways of committing suicide.
Being destroyed in the process doesn't make it better. It makes it so much worse since if you're wrong there's no way to undo it. You're just dead. The only consolation is that if/when your copy figures out the truth, it might feel pity for you.
This is the entire plot of the game 'Soma'.
You make a copy and your copy lives on blissfully unaware of what happened to the original. The player is eventually shown, but the player character never figures it out. For that story at least, it wasn't an "immortality" thing (aside from the micro-brained cultists), but was a "preserve humanity after the end of the world" sort of thing, so the "what idiot fell for this?" question wasn't really an issue.
While the readers know that souls are a thing in this story's universe, most characters don't, so advertising what is obviously just the creation of AI copies as a method of gaining immortality is somewhat baffling. Maybe some backers would fall for the marketing, but as soon as they see what the money is being spent on they would immediately back out.
It would be like "Oh cool. It acts just like me... Hold on a second! This isn't what I was promised at all!"
@Aetherial_Wanderer Ah! That's another secret. Billionaires aren't magically omnipotent, they're just people. You don't just know more or work harder to get a billion dollars. You get lucky, you are born into a good situation, or you cheat. Whatever genius someone like Elon Musk might have, the truth is that he got a head start off of his father's emerald mine (which the father happily admits to having) and then bought his way into being a founder. Since I wasn't there I can't judge how much he actually did for places like Tesla, but he wouldn't have ever been there in the first place if he hadn't started out rich.
Or look at the lady who did the Theranos scam. She got her start by talking to family friends and getting loaned more money than many people will make in a decade and definitely later on had amounts invested that were greater than many make in their life.
Even one of the "cleaner" billionaires, Bill Gates, had a significant leg up. His father was a powerful lawyer and his mother was on the board of directors of a big company. Early on, his partnership with IBM happened because his mother mentioned his company to the CEO of IBM. I'm not saying Bill Gates wasn't really good with tech and a genius. But he wasn't the only person who could have done the technical part of things. His fortune makes him a giant in the world, but he came from giants as well, and it is a lot easier to grow that big when it runs in the family.
And as a final example, I'm going to tiptoe around it a little. But Apple's deceased founder? Well, he ended up trying to fix his health problem "naturally" instead of through modern medicine. And while it doesn't prove anything, when a method actor later tried his special diet, he developed problems in the same organ.
No proof one way or another, but money doesn't mean competence. Money just means you have money, and there is a correlation between friends and family having money and having more money than average. So, for a billionaire to believe a hype man claiming the digital mind is them? Not a problem. You'll be able to find at least a few billionaires who will go for it, if only because their area of expertise doesn't involve that sort of thing. Plus, people like that get desperate with age. Just look at the alchemy craze that went on in the past. They were drinking things like mercury in hopes of eternal life. Sure, they didn't know it was so bad for you, but the same could be said for these digital mind type things, at least early on.