This content was created in association with Improbable. To find out more visit https://www.improbable.io/ims.
How much is a minute worth? What about thirty? For free-to-play titles that rely on microtransactions and other in-game revenue streams to stay afloat, taking a game offline even briefly can mean a huge amount of lost revenue. There’s a reason studios have mitigation strategies in place for unexpected crashes: downtime is a last resort.
Regular game updates are crucial in a competitive market
But those same studios – especially those operating games as a service – rely on frequently updating live games to improve gameplay and make sure they run smoothly. As early access becomes a key part of the development process, there’s an increasing focus on iterating quickly in real time based on player feedback. Plus, as competition grows, it’s becoming more important than ever to release regular new content throughout the lifetime of the game to encourage gamers to keep playing.
The question is, how do you update your game without impacting player experience? Once upon a time, it meant tearing your game down and booting it up again. Aside from the dent in revenue caused by the store going offline, you had to hope gamers wouldn’t lose interest and go elsewhere – either temporarily, or for good. (Of course, updating thousands of servers takes time, so there was never any guarantee.)
Zero downtime patching protects player experience and revenue
Then along came zero downtime patching. The basic premise of this relatively new technology is the ability to update your game servers quickly and easily, without impacting player experience or revenue. The added speed comes from uploading only the difference (that is, the changed components) rather than replacing the whole image, which shortens your iteration time. There are various implementation options for zero downtime patching, depending on what you’re trying to achieve.
- Do a rolling update when only the server or client changes
Say you’re only updating the game server binary with a change that’s transparent to the client – for example, a hotfix after launch. Zero downtime patching allows you to do a rolling update of your game servers: newly spun-up servers automatically default to the newest version, and idle servers can be updated in place.
To avoid impacting players, your game server operations solution needs rolling update capability – specifically, it needs to be able to identify and only update game servers not in use.
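As a rough illustration, here is a minimal Python sketch of that idea: idle, out-of-date servers are recycled onto the latest build, while servers hosting live sessions are left alone until they empty out. The Fleet class, its methods and the build names are illustrative assumptions, not part of any specific IMS API.

```python
# Self-contained sketch of a rolling game-server update. All names are hypothetical.
from dataclasses import dataclass
from itertools import count

LATEST_BUILD = "server-1.0.4"

@dataclass
class GameServer:
    server_id: int
    build: str
    active_players: int = 0

class Fleet:
    """Toy in-memory stand-in for a game server orchestration backend."""
    def __init__(self, servers):
        self.servers = list(servers)
        self._ids = count(max((s.server_id for s in self.servers), default=0) + 1)

    def spin_up(self, build):
        # New servers always default to the newest build.
        server = GameServer(next(self._ids), build)
        self.servers.append(server)
        return server

    def drain(self, server):
        # Only ever called on idle servers, so no live session is interrupted.
        self.servers.remove(server)

def rolling_update(fleet, latest_build=LATEST_BUILD):
    """Recycle idle, out-of-date servers; leave servers with live sessions alone."""
    for server in list(fleet.servers):
        if server.build != latest_build and server.active_players == 0:
            fleet.drain(server)
            fleet.spin_up(latest_build)

fleet = Fleet([
    GameServer(1, "server-1.0.3", active_players=12),  # busy: untouched this pass
    GameServer(2, "server-1.0.3", active_players=0),   # idle: recycled onto 1.0.4
])
rolling_update(fleet)
print([(s.server_id, s.build) for s in fleet.servers])
# [(1, 'server-1.0.3'), (3, 'server-1.0.4')]
```

Busy servers simply pick up the new build on a later pass, once their sessions end and fresh instances are spun up in their place.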
- Use the red green strategy for updating both server and client
If you’re rolling out an update that impacts both the server and the client, your best option is a ‘red green’ strategy to coordinate the update of both. This allows you to run two versions of a game in parallel, keeping players on the version they’re currently playing. Once their session is over, or they update their client, you can funnel them to the new version. Gradually, you drain the old version of the game and spin up more instances of the new one. (Obviously, both versions of your game need to be able to run side by side without impacting each other.)
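To make that concrete, below is a hedged sketch of version-aware routing: two server pools run in parallel, each player is matched to the version their client speaks, and old-version servers are torn down only once their last session ends. The pool names, version numbers and helper functions are assumptions for illustration, not a real matchmaking API.

```python
# Minimal sketch of 'red green' routing across two parallel server pools.
SERVER_POOLS = {
    "1.0": ["red-server-a", "red-server-b"],      # old ("red") version, being drained
    "1.1": ["green-server-a", "green-server-b"],  # new ("green") version, scaling up
}

def route_player(client_version: str) -> str:
    """Send a player to a server that matches their client version."""
    pool = SERVER_POOLS.get(client_version)
    if not pool:
        raise ValueError(f"No live server pool for client version {client_version}")
    return pool[0]  # a real matchmaker would also balance load across the pool

def drain_old_version(old_version: str, live_sessions: dict) -> None:
    """Tear down old-version servers once their last session has ended."""
    for server in list(SERVER_POOLS.get(old_version, [])):
        if live_sessions.get(server, 0) == 0:
            SERVER_POOLS[old_version].remove(server)

# Players on the old client stay on red; updated clients are funnelled to green.
print(route_player("1.0"))  # red-server-a
print(route_player("1.1"))  # green-server-a
drain_old_version("1.0", live_sessions={"red-server-a": 0, "red-server-b": 4})
print(SERVER_POOLS["1.0"])  # ['red-server-b'] - still hosting a session, so kept
```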
- Run different builds in parallel for testing
Build management that allows you to run different builds in parallel is also useful when you’re A/B testing with a live audience, simultaneously testing different changes to the game, or running custom versions of your game in different countries to meet regional regulations. As you observe player behaviour or analyse feedback, you can quickly and easily iterate and update your game, then monitor the impact of your changes – all without players noticing. This makes for a smoother experience and more reliable results.
Take an example: parallel testing during the development of Scavengers meant Midwinter Entertainment could deploy builds in clusters without impacting other builds. By maximising the number of iterations in development, they were able to test multiple scenarios simultaneously and fix things faster. Not only that, it was also more cost-efficient because the same machine was used for various tests.
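As a simple illustration of what build selection might look like, the sketch below assigns each player to one of several parallel builds using a deterministic A/B bucket or a regional override. The build names, hashing scheme and ten-percent experiment share are purely hypothetical.

```python
# Illustrative build selection across parallel builds: A/B cohorts plus regional builds.
import hashlib

BUILDS = {
    "baseline":   "game-build-2.3.0",
    "experiment": "game-build-2.3.0-newweapons",  # variant under A/B test
    "region-de":  "game-build-2.3.0-de",          # regional compliance build
}

def ab_bucket(player_id: str, experiment_share: float = 0.1) -> str:
    """Deterministically place a player in the experiment or baseline cohort."""
    digest = hashlib.sha256(player_id.encode()).digest()
    return "experiment" if digest[0] / 255 < experiment_share else "baseline"

def pick_build(player_id: str, region: str) -> str:
    """Choose which parallel build this player should connect to."""
    regional = BUILDS.get(f"region-{region}")
    if regional:
        return regional               # regional regulations trump experiments
    return BUILDS[ab_bucket(player_id)]

print(pick_build("player-42", region="us"))  # baseline or experiment build
print(pick_build("player-7", region="de"))   # game-build-2.3.0-de
```

Because the bucketing is deterministic, a player lands on the same build every session, which keeps test results consistent while the parallel builds are iterated on.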
How RETO MOTO avoided launch chaos with parallel testing
RETO MOTO needed to be sure they could accommodate a surge in demand come launch day for Heroes & Generals, serving matches at expected rates and avoiding queue build-ups. So they partnered with Improbable Multiplayer Services (IMS) to build a test environment that duplicated the live Heroes & Generals game stack.
This mirror environment of the existing backend allowed them to conduct scale tests and playtests without risking the live player experience, pushing the game’s scale threshold to twice the predicted capacity to test the resilience of the new backend prior to launch. The test environment proved so useful that the RETO MOTO dev team asked to keep it running so they could continue experimenting with new gameplay features and scenarios in the future.
Game server orchestration technology from Improbable Multiplayer Services (IMS) helps you maximise revenue and maintain player experience with zero downtime patching. We understand iteration speed is a key factor for studios. Having recently increased our patching speed by 17% (a 2.4x upload and 30x post-upload improvement), our team of backend experts will constantly review and improve your technology – so you can focus on what you do best.
To find out more visit https://www.improbable.io/ims.