Event is a retrofuturistic narrative exploration game where you must build a relationship with a lonely spaceship computer to get home. The game released two weeks ago and was pretty well-received, mainly due to the interaction with its AI, Kaizen. I talked to writer and designer Sergey Mohov about the creation of Kaizen.
Who is Ocelot Society and for how long have you worked on Event?
We’re a small (10-person) indie studio based in Paris, France. Key members of the team met about four years ago at ENJMIN, a French video game grad school. Initially, Event was our 6-month graduation project. When we finished school, we had what we thought was a kind of cool prototype on our hands, but we didn’t believe that we would do much with it. We sent it to a couple of festivals, and to our surprise, it won some awards and nominations. By that time we were already working in the industry at different companies: Leonard (our producer and CEO) worked at Ubisoft, Emmanuel (game designer) at Amplitude Studios, and I worked on Rime at Tequila Works. Other members of the team had also moved on. But we didn’t completely abandon the project because we kept receiving encouraging feedback from players, industry people, festival organizers, and judges.
And then at some point last year we got an email from Indie Fund asking whether we were interested in finishing and releasing this game commercially. We said yes, got the second half of our funding from a government grant here in France (CNC FAJV), and that’s when Ocelot was officially born.
Event looks and feels very much like classic science fiction. What were your main influences, both aesthetically as well as literary?
We’re all big fans of classic sci-fi, so there’s no lack of influences here: Asimov’s robot stories, Ridley Scott’s Alien, Solaris, 2001: A Space Odyssey, Brave New World, Neuromancer, etc. Some of these are story references, others are references for the environment. Some (like 2001) are references for both.
I think that the best science fiction stories featuring AIs are the ones where the AI is neither good nor evil. It always makes for an interesting story when it’s about the human interacting with the computer as opposed to robots killing everybody for no reason. We tend to say that these stories are about AIs, and the AI is definitely at the heart of the story of Event, but fundamentally, it’s a story about you and about other human characters. That’s why 2001 is so great! HAL was an excellent character, but it wouldn’t be half as interesting without Dave prompting it to do and say things.
Did you start with the story and work your way up from there, or did you come up with the game mechanics first and build your story around them?
It all started with the chatbot mechanic. Originally, the game was supposed to be a survival horror where you had to type messages into a computer to dodge evil aliens. We iterated on it, and after some playtesting, it became apparent that we should just focus on our core mechanic and build everything else around that.
One problem is that chatbots are inherently limited. You can make them very smart and very responsive, but currently, no technology can actually simulate human intelligence accurately. The chatbot will fail eventually, and when it does, if the NPC you’re talking to is supposed to be human, it takes you right out of the experience. People are expected to understand other humans, but if you know from the beginning that you’re talking to a machine, then you will naturally assume that the machine can be defective! That’s why we decided to put a piece of technology at the heart of Event.
Now, the story is vital as well, and Kaizen is the core of it. It’s an old AI, so it has seen things. It’s been aboard the Nautilus for quite a while. Originally, it only had the Laws of Robotics built into it, and didn’t have a character at all, but as time passed and more people talked to it, it took their points of view on different things and made them its own. You learn a lot about these characters by speaking with Kaizen, and you influence its emotions directly yourself as you make decisions throughout the game and talk to it in different ways.
The game’s outstanding feature might be communicating with Kaizen. I was expecting something along the lines of an old-fashioned text adventure parser, but the AI feels more responsive than that. How exactly did you make this work?
It’s not all that different from things like Siri and Cleverbot. It interprets your input and generates output based on it. Its vocabulary choices and actions also depend on its emotional state as well as the things it has stored in its memory. In addition to that, a big part of its dialogue is generated procedurally on the fly from bits and pieces of sentences that we wrote. Unlike an old-school adventure game parser, Kaizen understands complete sentences better than commands, so it might be a good idea to say “open the door for me please” as opposed to “open door.” That’s the thing, though: you can play the game however you like, and the way you interact with the AI is completely up to you. Just know that it will influence the outcome of the story.
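The fragment-based generation described above can be sketched roughly like this: pre-written pieces of sentences are picked per recognized intent, with the wording varying by emotional state. All fragment text, intent names, and mood names here are illustrative, not Kaizen's actual data or code.

```python
import random

# Illustrative fragment bank: per intent, per mood, a few pre-written
# pieces the system can stitch together. Entirely made up for this sketch.
FRAGMENTS = {
    "greeting": {
        "friendly": ["Hello!", "Good to see you."],
        "hostile": ["What now?", "Yes?"],
    },
    "door_opened": {
        "friendly": ["The door is open for you."],
        "hostile": ["Fine. The door is open."],
    },
}

def compose_reply(intents, mood, rng=random):
    """Pick one fragment per recognized intent and join them into a reply."""
    parts = [rng.choice(FRAGMENTS[i][mood]) for i in intents if i in FRAGMENTS]
    return " ".join(parts)

# Same intents, different mood, different surface text:
print(compose_reply(["greeting", "door_opened"], "friendly"))
print(compose_reply(["greeting", "door_opened"], "hostile"))
```

Because word choice is sampled per fragment, the same semantic reply can read differently on every playthrough, which is what keeps a procedural speaker from sounding canned.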
The input is all about understanding what you mean, so Kaizen will look for semantic tags in what you type. So, if you say “apple,” “orange,” or “cake,” it’ll understand “food.” If you put a question mark, it’ll know that your sentence is an “interrogation.” When we put some of these tags together, we understand that the player is asking a question about food. If we also find “where” in your input, we will know that you are asking where the food is.
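The tag-matching step described above can be sketched in a few lines. The tag names and vocabulary lists are the ones used as examples in the answer; everything else is an assumption for illustration.

```python
# Minimal sketch of semantic tagging: map surface words to tags, and
# detect an "interrogation" from the question mark. Vocabulary is tiny
# and illustrative; a real system would have far richer word lists.
TAG_VOCAB = {
    "food": {"apple", "orange", "cake"},
    "location_query": {"where"},
}

def extract_tags(sentence):
    """Return the set of semantic tags found in a raw player sentence."""
    tags = set()
    words = set(sentence.lower().strip("?!. ").split())
    for tag, vocab in TAG_VOCAB.items():
        if vocab & words:  # any vocabulary word present?
            tags.add(tag)
    if sentence.rstrip().endswith("?"):
        tags.add("interrogation")
    return tags

print(extract_tags("Where is the cake?"))
# -> food + location_query + interrogation: a question about where food is
```

Combining `food`, `location_query`, and `interrogation` is exactly the composition the answer describes: the system never matches the literal sentence, only the tag pattern.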
The output takes a whole bunch of different variables into account in addition to the player’s input: Kaizen’s emotional state (there are 9 of them, and it regularly transitions between them based on what you say), the current conversation subject, and the long- and short-term memory. The emotional states define the tone of the current conversation and also determine whether or not Kaizen will agree to be helpful. The conversation subject loads up new vocabulary about the particular thing you’re talking about, based on the context. For example, if you enter the living room, Kaizen will know about it, and will try to interpret your input on the assumption that you might be talking about the living room. But at the same time, there may be another context overlapping with it, for instance if you mention the “pool table.” Then Kaizen will have two contexts to pay attention to, and its answers will depend on that. The short-term memory enhances the conversation by making sure the AI doesn’t forget the last couple of interactions you had with it. For example, if you talk about “Nandi” (one of the characters), it’ll understand what you mean when you say “she” or “her” in one of your next sentences. Finally, the long-term memory is what allows us to store the player’s input and actions for later and then use them to determine the outcome of the game and things like that.
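The short-term memory behavior from the Nandi example above can be sketched as a tiny pronoun resolver: remember the most recently mentioned character, and let pronouns in follow-up sentences refer back to them. The character list and pronoun set here are illustrative stand-ins, not the game's actual data.

```python
# Hypothetical short-term memory sketch: only "Nandi" is taken from the
# interview; the class and its behavior are assumptions for illustration.
KNOWN_CHARACTERS = {"nandi"}
PRONOUNS = {"she", "her", "he", "him", "they", "them"}

class ShortTermMemory:
    """Tracks the last character the player mentioned."""

    def __init__(self):
        self.last_character = None

    def observe(self, sentence):
        # Record any known character name appearing in the input.
        for name in KNOWN_CHARACTERS:
            if name in sentence.lower():
                self.last_character = name

    def resolve(self, word):
        # Pronouns refer back to the last character; other words don't.
        if word.lower() in PRONOUNS:
            return self.last_character
        return None

memory = ShortTermMemory()
memory.observe("Tell me about Nandi.")
print(memory.resolve("her"))  # resolves to "nandi"
```

Long-term memory would work the same way structurally, but persist across the whole playthrough and feed the ending logic rather than the next sentence.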
The main difference between Kaizen and multi-purpose AIs like Siri is the fact that Kaizen has a context. That’s why we have this 3D environment you can walk around and explore. You can scan objects in the environment and ask Kaizen about them. This is also how you will solve some of the puzzles in the game. Kaizen was designed for the specific needs of the game, so it knows everything about the ship you’re on and the characters who were there before you, but will refuse to do generic things like solving math problems, for example. This approach helped us give it some real flavor and character and focus on telling a story through it.
Did you piece the whole language recognition and dialogue system together by yourself, or did you consult a linguist along the way?
We did everything ourselves. We didn’t consult linguists or even papers on linguistics – not out of arrogance, but because we were more interested in making a good game with a chatbot than making a perfect chatbot for its own sake. What we did do was a whole bunch of playtesting. In fact, we have more playtesters in our end credits than everybody else combined – and that only includes people who came to the studio over the past year or so and doesn’t list everyone who played the original student project during or after its development. That list would probably have taken hours to scroll through. For this reason, we have a user researcher on our team whose job is to analyze how players interact with Kaizen and other aspects of the game and come up with solutions to the most common problems.
There is no way to interact with the environment in the game, which at first feels strange and even a bit clunky. Typing commands into a computer as the only way of interacting with the game world feels somewhat in conflict with the current trend of actually making games more interactive. Was this concept of “crippling” the player a conscious choice from the beginning or did it evolve naturally from the way you developed the AI?
We didn’t want to cripple the player, but rather make them as free as possible when they are talking to Kaizen. We want you to focus on speaking with the AI through the terminals, and everything else serves as context for these conversations. That’s why the recommended control scheme is non-traditional [and entirely mouse-driven]: we wanted you to have your keyboard dedicated to conversations with Kaizen entirely. We wanted your terminal experience to be as smooth as possible.
Similarly, grabbing things in the environment wouldn’t reinforce or enhance the core experience either. Imagine you could take and hold the bucket in the corridor: would that give you more freedom than analyzing it with your AR scanner? We don’t think so. And by the way, the game features a system of environment analysis where you can aim at nearly anything in the environment, and it’ll give you a short description of that object. This is done so you have more subjects of conversation with Kaizen. It also has the added benefit of giving us the power to name objects ourselves so we can teach the AI what they are called more easily.
The game has been out for a few days now and the reception seems to be overwhelmingly positive. Did you anticipate such a strong response, and in what way do you think the game resonates with people the most?
To be honest, we didn’t know what to expect. On the one hand, we made a game with a new and weird main feature. But on the other hand, we made a game with a really new and weird main feature! With Event, it’s very binary: either you’re willing to invest time and effort into conversations with Kaizen, and Kaizen rewards you for that plentifully, or you don’t bother, and the game doesn’t bother in return. Either you love it, or you hate it. Your playtime ultimately depends on that as well.
To me personally, it was really hard to release the game into the wild. It feels abandoned, not finished because there’s still so much that I would have loved to do, but you have to stop at some point, right? It’s a big dilemma when you’re making a game on your own without a publisher that is breathing down your neck: when do you release? When is it done? The deadline is set wherever you put it unless the money runs out. In our case, we spent 3 years working on this thing. We all left cushy well-paying jobs in the industry to go on this adventure, and it feels really strange to have finally arrived at the destination.
You said that the reception seemed overwhelmingly positive, but you know how developers are: we read all the negative comments and reviews. You just can’t help it, unless you have achieved Zen. And then you hate yourself and what you’ve done but are also kind of proud of it anyway. Making indie games is a weird career choice for sure.
You can purchase Event from GOG, Steam, itch.io, the Humble Store, or directly from the developer. For more information, visit the game’s website or follow developer Ocelot Society on Twitter.