How we ran Unity servers on AWS EKS. Part 2 — Implementation
In Part 1, we discussed how we designed the multiplayer for our first game — Foxy Arena.
Now that you know how we arrived at the eventual platform design, let’s talk about implementation details. Let’s take a look at the final diagram:
As soon as two players are in the matchmaking queue, they receive information about an available pod with the Unity server inside (e.g. its IP address and port).
Once the Unity clients have this information, they configure their NetworkManager to connect directly to that pod.
Docker image
First of all, we need to pack our Unity server as a Docker image, which means we need to build a headless version of our game. The easiest way to do this in Unity 2021.2 is the “Dedicated Server” build platform. Don’t forget to include this component during Unity installation.
For the sake of simplicity, I will include a working Dockerfile example:
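A minimal sketch of such a Dockerfile, assuming a Linux Dedicated Server build output placed in `Build/` with a `FoxyArenaServer.x86_64` binary — both names are illustrative, substitute your own:

```dockerfile
FROM ubuntu:22.04

# Copy the Linux Dedicated Server build produced by Unity
# (folder and binary names are examples — adjust to your build output)
COPY Build/ /app/
WORKDIR /app
RUN chmod +x ./FoxyArenaServer.x86_64

# Port the Unity server listens on (the controller assigns the real one)
EXPOSE 7777

ENTRYPOINT ["./FoxyArenaServer.x86_64", "-batchmode", "-nographics"]
```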
That’s it. Now you can run your Unity application as a container in any container-based system (with a Docker runtime, of course).
Kubernetes Pod
Now we come to the interesting part. As you may remember from the previous part, we have a Python-based controller that deploys and maintains the desired number of Ready pods.
Another important point is that clients connect directly to pods once they have the connection info. This means that pods must listen on a dedicated port on their node. Thus, we need to deploy pods with hostNetwork: true and a different port for each pod.
To address these requirements, we packed the pod definition into a ConfigMap:
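A sketch of what such a ConfigMap could look like — the image name and labels are placeholders, and `DedicatedPort` is a placeholder token substituted before the manifest is applied (the real port values are numbers, not strings):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: unity-server-pod-template
data:
  pod.yaml: |
    apiVersion: v1
    kind: Pod
    metadata:
      name: unity-server
      labels:
        app: unity-server
    spec:
      hostNetwork: true          # clients connect to the node IP directly
      restartPolicy: Never
      containers:
        - name: unity-server
          image: registry.example.com/foxy-arena-server:latest
          ports:
            - containerPort: DedicatedPort
              hostPort: DedicatedPort
```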
The DedicatedPort values will be replaced by the Python controller during deployment.
At this point, we can deploy a Unity server to any k8s cluster, and it will be reachable from outside.
Python Deployer
Now, let’s talk about the Python controller. This controller does a lot of things:
- Checks for available nodes
- Stores available ports on these nodes
- Tracks pods’ state
- Deploys new pods
- Updates the Redis database with the current info
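The node/port bookkeeping above can be sketched roughly like this — the class, the port range, and the allocation policy are illustrative assumptions, not the real script:

```python
# Hypothetical port bookkeeping for the controller: each node gets a fixed
# range of host ports; allocate() hands out a free one, release() returns it.
PORT_RANGE = range(7000, 7100)

class NodePorts:
    def __init__(self, nodes):
        # node name -> set of host ports not currently used by a pod
        self.free = {node: set(PORT_RANGE) for node in nodes}

    def allocate(self):
        """Pick any node that still has a free port; return (node, port)."""
        for node, ports in self.free.items():
            if ports:
                return node, ports.pop()
        raise RuntimeError("no free ports on any node")

    def release(self, node, port):
        """Return a port to the pool when its pod terminates."""
        self.free[node].add(port)
```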
I am not going to reveal the entire Python script; however, I would like to show the part that deploys the actual Unity server:
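As a stand-in for that part, here is a hedged sketch of what the step could look like — the template structure, the image name, and `render_pod_spec` are my assumptions, not the real script. With the official `kubernetes` Python client, the rendered dict would then be passed to `CoreV1Api().create_namespaced_pod(namespace=..., body=pod)`:

```python
import copy

# Hypothetical pod template mirroring the ConfigMap idea; "DedicatedPort"
# is the placeholder the controller fills in.
POD_TEMPLATE = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "unity-server", "labels": {"app": "unity-server"}},
    "spec": {
        "hostNetwork": True,
        "restartPolicy": "Never",
        "containers": [{
            "name": "unity-server",
            "image": "registry.example.com/foxy-arena-server:latest",
            "ports": [{"containerPort": "DedicatedPort",
                       "hostPort": "DedicatedPort"}],
        }],
    },
}

def render_pod_spec(node_name, port):
    """Fill in the port placeholder and pin the pod to a specific node."""
    pod = copy.deepcopy(POD_TEMPLATE)
    pod["metadata"]["name"] = f"unity-server-{port}"
    pod["spec"]["nodeName"] = node_name  # bind the pod to this node
    container = pod["spec"]["containers"][0]
    container["ports"][0] = {"containerPort": port, "hostPort": port}
    return pod
```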
Above you can see that the controller assigns a port to the pod and binds the pod to a specific node.
Thus, we know where this pod is running and which port it listens on. As soon as we have two players in our matchmaking queue, we can extract this info and return it to the Unity clients.
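That matchmaking step can be sketched like this, with plain lists standing in for the Redis structures (the field names `node_ip` and `port` are assumptions about the record shape):

```python
def match_players(ready_pods, queue):
    """When two players are queued, assign them the next Ready pod.

    `ready_pods` stands in for the Redis collection the controller
    maintains; each entry is {"node_ip": ..., "port": ...}. Returns the
    connection info both clients receive, or None if no match yet.
    """
    if len(queue) < 2 or not ready_pods:
        return None
    pod = ready_pods.pop(0)          # this pod is no longer available
    players = [queue.pop(0), queue.pop(0)]
    return {"players": players,
            "host": pod["node_ip"],
            "port": int(pod["port"])}
```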
These Unity clients can set up their NetworkManager to connect directly to this pod.
Unity Client Connection
In this part, I would like to talk a little bit about how Unity clients actually initialize connections to these pods.
Below you will find a piece of code responsible for querying for connection info:
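The client code itself is C# inside Unity; as a language-neutral illustration of the same request/response flow, here is a Python sketch (the endpoint path and the JSON field names are assumptions, not the real API):

```python
import json
from urllib.request import urlopen

def parse_connection_info(payload):
    """Extract the assigned pod's address from the matchmaker response."""
    data = json.loads(payload)
    return data["host"], int(data["port"])

def fetch_connection_info(matchmaker_url, player_id):
    """Ask the matchmaker which pod this player should connect to."""
    # hypothetical endpoint; the real client issues this request from C#
    with urlopen(f"{matchmaker_url}/match?player={player_id}") as resp:
        return parse_connection_info(resp.read())
```

On the Unity side, the returned host and port then go into the NetworkManager — for example, with Mirror the address goes into `networkManager.networkAddress` and the port onto the transport before `StartClient()` is called.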
As you can see, we extract the connection info and initialize the actual connection to the above-mentioned pod.
Conclusion
I am pretty sure that the above-mentioned information is enough to run a Unity server inside a remote k8s cluster and initialize a connection there from a Unity client.
The infrastructure implementation completely depends on your requirements. However, if you are interested in more details regarding our infrastructure, please let me know.