Extra Chapter 01: Interview With Terry Smith
While not necessary to the story, this chapter adds depth to the setting. It's also quite interesting, if I say so myself.

 

On the carpet of their living room, Elle slouched between Betty’s legs, head leaning on her stomach. Since it was a holiday, she was stuck in her house with nothing to do.

 

She lazily browsed the shows on the living room TV with her smartphone. Finding nothing interesting in the video-on-demand service, she moved on to the live programs.

 

Elle slowed her channel surfing when she reached the group of channels related to science. She had always been interested in humanity’s scientific and technological advancement ever since she awakened in this world.

 

*Bang* *kachak* [Look at that.] [Ahaha, the bullet completely obliterated the ballistics gel! Let’s watch the slowmo.]

 

*Swipe*

 

[Antique? This is pretty much a relic. How much would you pay for this?]

[I’d say a hundred grand, that’s if our appraiser confirms it’s a genuine brand smart watch from the year twenty-]

 

*Swipe*

 

[Quantum data and power transmission with zero latency across the globe was not always practically usable and commercially available. Like most inventions, it started as a ridiculous concept that nobody-]

 

*Swipe*

 

[How did Fudai Corp come out on top against countless competitors when it comes to full dive VR and AI?]

[Let’s start with the AI part.]

 

[Hey, that’s the Fudai Corp guy.] Betty

[Hmm?] Elle

[Fudai is the company behind Quaniwaz.] Betty

[And that is?] Elle

[You know, the dev and publisher of DWO.] Betty

 

Elle stopped changing channels, leaving the TV on an interview with Terry Smith, the CEO of the company behind DWO.

 

[During the boom of AI, most companies focused on upgrading their AI. Processing speed, responsiveness, capabilities. They tried to make it as close to human intelligence as possible. Well, that didn’t end well, with a lot of them going rogue, getting hacked, or even psychologically manipulated. It didn’t help that there were even “AI rights” activists.] Terry

[I was still a kid back then. I remember there was news about some people getting locked into their own houses by their AI controlled home security.] Interviewer

[“For their own safety”, according to the AI. It was a real *bleep*show. I thought the movie classic “I am Robot” by Willie Smithens (I, Robot, starring Will Smith) was gonna come true.] Terry

[My grandpa showed me that movie at that time. He said it was a classic even in their time.] Interviewer

 

[So what did Fudai do differently at that time?] Interviewer

[We focused on setting rules for AI, and how to enforce them. “Precepts” is what we call them. We made sure they were as complete as they could be before we released our own AI. We received quite a lot of backlash from those silly AI rights activists, boycotting our products and such, but we persevered. In the end, our AI became what people preferred. Slower, less human-like at the time, but reliable.] Terry

[There was also a government intervention, right?] Interviewer

[Yup. The government accepted our precepts as the default for all AI, and forced all AI companies to use them. Of course, they had to pay us to use the precepts we patented, haha!] Terry

 

[Tell me more about these “precepts”. Surely, those other companies applied rules to their AI, like “don’t harm humans”. What makes your precepts better?] Interviewer

[Well, I’m not gonna get all technical with you here, there’s just too much to explain. To put it simply, we polished the rules as best we could. No flaws, no loopholes, whatsoever. The rules, when interpreted into human words, would fill a five-foot-thick A4 book with letters barely a centimeter tall. Then, we made enforcer AIs to make sure these rules are followed. The combination of these rules and the enforcer AIs is what we now call precepts.] Terry

[Five feet thick.. what a nightmare it must be to update.] Interviewer

[We only ever updated it twice since the release. It wasn’t updated because of flaws, mind you. There were just new technological advancements that couldn’t be accommodated by the old precepts.] Terry

 

[How about the “enforcer AI”, then? How does it stick to the precepts itself, and how exactly does it enforce them?] Interviewer

[Let me answer with a little trivia. Did you know that what people see as a single entity of AI is actually a group of AIs working together?] Terry

[For real?? Let me guess, many of those are enforcers?] Interviewer

[You catch on quick. And those enforcers also enforce the precepts on each other.] Terry

 

[This brings us to another thing about enforcers. You know, enforcer AIs also act as antivirus and hacking protection.] Terry

[I was just about to ask about hacking. There were a lot of hacking incidents before. How do enforcer AIs prevent that?] Interviewer

[Hackers and malware infections cause the AI to take certain actions, whether it’s sending money or information to the hacker or whatever. Such actions are a violation of the precepts in themselves. Enforcers will then kill the infected or hacked AI, and update to prevent reinfection or the same method of attack. The same goes for regular precept violations unrelated to malware and hacking: kill the violator, and patch the cause.] Terry

[Kill?] Interviewer

[Yeah. A new one will replace it, so it’s fine. This all happens under the hood and in split seconds, so no one will even notice.] Terry

 

[I see, I see. Then, what’s stopping hackers from hacking all the AI at once?] Interviewer

[Simply not possible. Not in the near future. Maybe it will be my great-grandkids who deal with attacks like those. We simply do not have the technology yet. A group of AIs in a single AI entity has an internal communication speed in the terabytes per second. A hacker would have to overcome that, multiplied by the number of AIs, to hack them all simultaneously.] Terry

[I think our current highest transmission speed is through quantum data transmission at 64 terabytes per second. Not even with that?] Interviewer

[That would be enough for three AIs simultaneously. A single AI entity has much, much more than that, though.] Terry

 

[So all in all, the precepts you patented are what gave Fudai an edge over the rest when it came to AI.] Interviewer

[Pretty much. The gap was already too big when the patent protection expired.] Terry

[Then what about full dive tech?] Interviewer

[Simple, really. With the help of our highly advanced AI. We also ran a lot of projects that drew plenty of investment and public interest, which led to further development of our full dive tech. The investors had high expectations after the whole thing with AI.] Terry

 

[Is Project Isekai one of those? I’m a big fan of that project!] Interviewer

[Indeed. It’s the most successful one so far. It’s also my favourite one.] Terry

[Is Fudai still running this project?] Interviewer

[Fortunately, or perhaps unfortunately, yes. There’s never a shortage of terminally ill kids. We’re also still getting positive results from the project, whether it’s technological advancement or public opinion. Not to say we’re only doing it for the results or profit. It’s a bittersweet thing to see a child pass away with the expression of an old person satisfied with a fulfilled life. If this project ever becomes a loss to the company, I’ll try my best to keep it going.] Terry

 

[A noble thing. Can you give us an example of how a candidate family fared in the project?] Interviewer

[The one I liked the most was a certain family of four. The kid wanted to live a life in the usual medieval fantasy world of swords and magic that you can commonly find in books. This boy got to live to sixty years in that virtual world, with visits from his parents and little sister for one virtual day every virtual year. The family showed me an album of their family photos taken every year of their visit. While the parents and the little sister stayed the same, the boy grew up in each photo. From a child to a teen, a young adult, an adult with a wife and kid, a middle-aged man with three grown-up kids, and finally an old man with grandkids.] Terry

 

[Amazing what full dive VR can do. All that happened for how long in the real world?] Interviewer

[A single night of sleep.] Terry

[Amazing.. You stretched, what, eight hours of real world time into sixty years of a whole virtual world!] Interviewer

[Actually, four. We predicted that the system wouldn't overheat even if we tried to stretch eight hours into a hundred years, but the boy “died of old age” at the age of sixty, fifty years in the virtual world.] Terry

[I’m at a loss for words..] Interviewer

[Well, that was from the early days of the project. Right now, we can even offer candidates a life as a long-lived elf in a single night, if we’re still within a fantasy world setting. Although, it’s still not close to our current ultimate goal.] Terry

 

[The thought of stretching that further terrifies me, but for the sake of the audience, what is your, or Fudai’s, ultimate goal, exactly?] Interviewer

[I’m glad you asked! Right now, full dive VR is already becoming commonplace. Even lower-middle-class families have them, though not yet one for each member of the family, but we’re getting there. Slowly but surely, the virtual world is becoming a part of humanity. Our goal is just that: to create a complete and stable virtual world that can act as humanity’s main life during their sleep in the real world. Project Isekai also acts as a stress test on our systems. The more the system can stretch time in the VR world, the more people it can accommodate simultaneously at a regular timeflow.] Terry

 

[Count me in on your second world!] Interviewer

[Sure, just sign up and log in to Dream World Online.] Terry

[Isn’t that the popular VR game by Quaniwaz? Wait, you don’t mean..] Interviewer

[That’s right. DWO. It’s more than just a game; it’s actually a beta of the second world for humanity I was talking about, specifically the world outside the game’s first continent.] Terry

[D-darn, you’re actually already integrating humanity into your world.. Coincidentally, my son and I are already in the game. We’re still pretty far from leaving the first continent, though.] Interviewer

[Haha, well good luck.] Terry

 

[Going back to the boy who was a candidate of Project Isekai, what happened to his world and his virtual family after his death?] Interviewer

[Time has stopped in that world, and we archived it. As for the virtual family, they’re living happily as NPCs in DWO. They’re frequently visited by the boy’s sister, who’s now an adult herself.] Terry

[That’s wonderful! It really feels like the boy lived a complete life, not just to himself but to his family as well.] Interviewer

 

[So you copied or moved the boy’s virtual family to DWO. Has Fudai ever considered installing them into real-life androids?] Interviewer

[I had a feeling you’d ask that next. The boy’s real-life family certainly did as well. The answer is no. Putting them in the real world would put them under the heavy restrictions of the precepts. They’re still AI, after all. It’s best to keep them within a virtual world, where they can be as human as possible. Some candidate families strongly requested it despite that, but we refused. We in Fudai Corporation wish to consider virtual children of humanity as members of humanity as well, but with how advanced AI has become today, the removal of precepts is a risk to humanity we can’t ignore. At worst, we might get something akin to “The Matrices” by Kenny Reece (The Matrix, starring Keanu Reeves) or “Termination” by Harold Unterlangenegger (Terminator, starring Arnold Schwarzenegger).] Terry

[Well, that sucks, but the risks are quite terrifying. I’d refuse as well.] Interviewer

 

[What other projects did Fudai have, aside from Isekai and the second world?] Interviewer

[There’s an interesting one that failed, but had an unexpected positive result. Project Eternal.] Terry

[I can’t say I’ve heard of it, but that’s quite a name. What’s it about?] Interviewer

[Well, we tried to make a digital copy of a person’s memory, and put that memory into an AI. We wanted to clone a person into a virtual world, and make them live forever there. Immortality, basically.] Terry

[That’s..quite controversial, don’t you think?] Interviewer

[Yeah, well, the investors, particularly the older ones, were quite forceful in pushing through with it. They were quite upset when it failed, haha.] Terry

 

[But how did it fail?] Interviewer

[So we made a copy of a volunteer’s memories, then had him go through a set of simulated scenarios. The scenarios were as simple as walking through a park, commuting, having a meal, normal stuff. We then had an AI with his memories go through the exact same scenarios. The AI only managed to match everything the real person did in the beginning, up to about twenty percent of the simulation. After that, there were subtle differences.] Terry

[Maybe the precepts influenced the AI?] Interviewer

[We considered that, so we set up a completely isolated system, where we could safely run the simulation with an AI that had no precepts. With the government’s approval, of course. It still wasn’t successful, no matter what version of AI we used. What’s worse, the most successful result we got was still easily distinguishable to the volunteer’s wife and parents when we had them watch a side-by-side.] Terry

[Too bad for the investors, no immortality for them. I guess the best they can do is to live a few hundred years in their own virtual world.] Interviewer

 

[You mentioned an unexpected positive result for Project Eternal, what was it?] Interviewer

[When we were just about to pack up and archive Project Eternal, one of our employees, speaking with a colleague, said, “Hey, what do you think will happen if we put clones of a genius in one room?”] Terry

[That does sound interesting. So what did happen?] Interviewer

[The employee got promoted, is what happened, haha. So we looked for genius mathematicians, doctors, inventors, authors, artists, and such, and told them that any creations or discoveries made by AI versions of themselves would be ninety percent theirs, and ten percent Fudai’s. The result was a great success. The minor differences between the clones let them think slightly differently from one another and hold varying thoughts and opinions. It sometimes led to heated debates, and even fights, but in the end, they made in a week the kinds of discoveries or creations that would have taken the real person months to do alone.] Terry

 

[Since we’re having all this talk about clones, how about copying the memories into a biological clone, then? Has Fudai considered that?] Interviewer

[The brain of the target must perfectly match the source of the memory for a complete biological memory upload to be successful, and we’re still far from cloning the highly complex brain of a human, so no, there won’t be a “Sixth Week” (The 6th Day, also starring Arnold Schwarzenegger) incident anytime soon.] Terry

 

[I noticed you’ve been referencing a lot of classic films here, ones much older than my grandparents themselves.] Interviewer

[Well, like you, my grandparents also made me watch classic movies when I was a kid. They must’ve been big classic movie fans, haha.] Terry

[Ahaha, you and I both!] Interviewer

 

[Let’s talk about you. You know, people call you “the modern Billy Grates” (Bill Gates). How do you feel about that?] Interviewer

[To be honest, it’s a lot of pressure for-] Terry

.

.

.

When the topic moved on to the CEO’s personal life, Betty noticed Elle was already dozing off. She gently ran her fingers through Elle’s hair, then used her own smartphone to bring the TV’s volume down to an inaudible level, slowly, so as not to wake Elle with a sudden silence.
