
Hello

This document is a log of how I proceeded with the test task - more a log of how I arrived at the decisions than anything else. You'll find a detailed time log at the bottom. Since I would only dedicate a limited amount of my lifetime to this, I'd like to have it tracked.

Sources of information

  1. PDF-file with the task itself.
  2. Job-description page at LinkedIn
  3. Pre-interview call

I. Task PDF

It states that a backend service should be created, with three parts to implement:

  1. Registration of user with unique ID and name
  2. Leaderboards
  3. Battle engine

II. Job at LinkedIn

The following technologies are mentioned: RabbitMQ, AWS S3, load balancing, Kibana.

III. Call

  1. Target CCU: millions of players
  2. Mentioned technologies: gRPC, HTTP versions
  3. AI is prohibited. Googling, luckily, is not.

First thoughts

The main takeaway for player registration: this is not the standard registration page I built back in 2010. With millions of CCU, there could be up to hundreds of millions of registered users.

google: biggest MMO amount of registered users - RuneScape, 254 million

A standard one-table DB query might be slow under such conditions. However, the data structure mentioned is really small, up to ~10 KB per user. So another solution from 2010 could help: use an in-memory caching DB like Memcached (Redis is recommended in the task) and dump a backup to disk in case of a power outage or reboot. That is up to ~1 TB of RAM, which is doable nowadays on a single server machine. In addition, some scalable database could run somewhere for slow operations, like showing a full profile and so on. However, I have no experience with Redis, while MSSQL Server has an in-memory mode for caching tables. Is it good enough?

google: MS SQL in-memory cache vs redis speed comparison example

It seems that Redis has much better read performance - it should be used.

The battle engine mentioned is very simple. However, my recent Vigor experience shows that this is rarely the case. It highly depends on the game, but game servers can require physics, simulation, prediction, in-memory rendering, etc. It definitely should not be part of the data-backend infrastructure. The simplest case would be a single-player, no-server game: zero server-support costs, no online cheaters. However, that is not the case by design. The other extreme is a game running fully on the server, where the client is just a view that sends inputs to the server. Again, almost no cheaters, but the server costs and latency are extreme. So the truth is somewhere in the middle.

Also, even if each server does just "X+Y", we have millions of active players. A single server OS will run out of available sockets at the kernel level, so the game servers have to be distributed. Distribution means orchestration (to which server should I, as a player, connect?) and communication between the game servers and the data backend. Communication between the game client and the server is always required as well. The task requires showing a battle log to the player, and Kibana is mentioned in the job description - could those be related?

Vigor did not have advanced game-server-to-DB communication. Basically, a match lasted 30 minutes for players, then a bucket of changes to apply to player profiles was sent from the server to the DB. That fits this task as well. However, the game sessions here could be shorter and there are way more players - so way more data, and the infrastructure has to be prepared for more communication. Cases of player disconnects, bad networks, or network cheats should also be handled. That means sensitive data cannot be computed only on the client - it has to be at least verified by the server.

Leaderboards are a completely different beast. On AAA platforms, the platform holders usually provide their own solutions (PlayStation Leaderboards, Xbox, etc.).

google: multiplay game hosting leaderboards

Good options for this task would be Microsoft PlayFab or Unity Leaderboards. Vigor used platform leaderboards for a long time. Upsides: the best integration with the console and a competitive experience for players - it was easy to compete with friends. The downside: it was too complicated to support multiple implementations on the game side. Microsoft PlayFab could have solved that issue since it is cross-platform; however, it did not exist when Vigor was created.
Unity Leaderboards seems to be a good option for a game made in Unity. It will have the least possible integration pain, and code support would be cheap. It also offloads responsibility: Unity has to support their product and keep an eye on server stability and overall system health. The downside: if Unity goes down, our product goes down with it. I will probably not go deeper into the leaderboards topic for this task - a proper implementation is too time-consuming, and the existing solutions on the market are pretty good and not that expensive.

Iteration 2

Additional info/approval via e-mail:

  1. The player cannot be in two battles at the same time.
  2. Skipping a battle and approaching the next one in the queue should be possible.

The technology

C#, .NET Core - the code is compiled into an executable, which makes the app more secure than scripted languages, and also faster for the same reason, while still being cross-platform. So `dotnet new webapi` could be used; it is a modern C# industry standard. Swagger can be used for endpoint documentation and testing, which is the point of the current assessment. Logically, MSSQL would be a good relational DB to pair with C#, but for this assignment let's stick with just Redis. I used this Docker Compose file to spin up Redis:

services:
  redis-dev:
    image: redis:8-alpine
    volumes:
      - redis-data:/data
    network_mode: host
    restart: unless-stopped
volumes:
  redis-data:

No googling was needed for this Compose file, since I already had a project running with Redis bundled in - I just adapted it.

Now I can access my DB via Redis Insight at redis://default@10.0.0.20:6379 - Redis runs on a weaker machine in the same local network as the main development station, so network communication actually gets exercised during development, and it is easier to track performance as well.

The DB will store data in Protobuf format - not as readable to human eyes as JSON, but it is binary, so the DB takes less space on disk.
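A minimal sketch of the idea, assuming a Protobuf-generated `Player` message (Google.Protobuf) and the StackExchange.Redis client; the key names and fields are illustrative, not the actual ones from the project:

```csharp
// Sketch: storing a Protobuf-serialized player in Redis.
// Assumes a generated `Player` message and StackExchange.Redis.
using Google.Protobuf;
using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync("10.0.0.20:6379");
var db = redis.GetDatabase();

var player = new Player { Id = "p-123", Name = "Alice" }; // hypothetical fields

// Binary Protobuf payload instead of a JSON string.
await db.StringSetAsync($"player:{player.Id}", player.ToByteArray());

// Reading it back.
byte[]? raw = await db.StringGetAsync($"player:{player.Id}");
var restored = raw is null ? null : Player.Parser.ParseFrom(raw);
```

The trade-off mentioned above is visible here: the stored value is opaque bytes, so Redis Insight shows gibberish unless you parse it with the same `.proto` schema.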

The main point of the application is the player. The player participates in battles; the player gets onto leaderboards. Which is logical - we're "selling" the game to the player. So I started writing code for storing the player in the database. Please read the comments in the code - from now on, most of the details will go there.

The next important entity is the battle. Each game server (Worker) is a standalone entity and can process any battle. The initial idea was to register each server with the Backend API; however, that would create additional load on the backend. Instead, Workers are allowed direct Redis access: they read the battle queue stream and register with a consumer group.
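A sketch of that consumer-group flow with StackExchange.Redis; the stream and group names here are illustrative, not the project's actual ones:

```csharp
// Sketch: a Worker consuming battles from a Redis Stream via a consumer group.
using StackExchange.Redis;

var db = (await ConnectionMultiplexer.ConnectAsync("10.0.0.20:6379")).GetDatabase();

// Create the consumer group once (Redis returns BUSYGROUP if it already exists).
try { await db.StreamCreateConsumerGroupAsync("battle:queue", "workers", "$"); }
catch (RedisServerException) { /* group already exists - fine */ }

while (true)
{
    // Each worker reads its own share of pending battles.
    var entries = await db.StreamReadGroupAsync(
        "battle:queue", "workers", consumerName: Environment.MachineName, count: 10);

    foreach (var entry in entries)
    {
        // ... process the battle here ...
        await db.StreamAcknowledgeAsync("battle:queue", "workers", entry.Id);
    }

    if (entries.Length == 0) await Task.Delay(200); // simple idle backoff
}
```

This is why no Worker registration endpoint is needed: the consumer group tracks which Worker has claimed which battle, and unacknowledged entries can be re-delivered after a disconnect.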

Some implementation details

Most of the details are covered in code, comments. The key architecture and technology decisions:

  1. Splitting backend and game servers makes the code more complex. However, it is a key decision for supporting a big playerbase.
  2. Worker communication goes through Redis Streams, not through an API endpoint. This speeds things up, but workers need direct database access - a potential security risk.
  3. Battle processing is done asynchronously, based on a queue in Redis. This adds reconnection support and makes for a good architecture.
  4. State is stored in Redis. That is fast for battle processing, but data persistence is limited by TTL - not suitable for long-term analytics.
  5. Protobuf is used instead of JSON for storage and communication. It gives more performance, since the format is binary. The downside is that it is much harder to read by eye (in Redis Insight, for example).
  6. The .NET stack provides a lot of built-in solutions: Swagger and XML comments for documentation, MSTest for unit testing, Kestrel as a web server with HTTP/3 support, data validation via DataAnnotations, and a built-in authentication mechanism.
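As an illustration of the DataAnnotations point, a request DTO gets declarative validation that ASP.NET Core checks automatically on model binding (the type and constraints below are hypothetical, not the project's actual ones):

```csharp
// Sketch: declarative request validation via DataAnnotations.
using System.ComponentModel.DataAnnotations;

public class RegisterPlayerRequest
{
    // Rejected with a 400 response before the handler runs if missing
    // or outside the length bounds.
    [Required, StringLength(32, MinimumLength = 3)]
    public string Name { get; set; } = "";
}
```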

Most of the Redis-related decisions were googled on the fly. For example, I was surprised that there is no built-in way to enforce two unique fields (not a problem in MSSQL) - it needed a custom solution. I did find that a DB-level lock could be implemented via Redis's server-side scripting. However, in this assessment I want to show off my fast-learner skills, and I decided not to dive into the depths of scripting a DB that was previously unknown to me.
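One possible shape for such a custom solution (a sketch, not the project's actual code; key names are illustrative): reserve both unique fields with `NX` writes and roll back the first if the second is taken. Note this still leaves a tiny race window between the two writes, which is exactly what a Lua script would close.

```csharp
// Sketch: enforcing two unique fields (id and name) with SETNX-style writes.
using StackExchange.Redis;

async Task<bool> TryRegisterAsync(IDatabase db, string id, string name)
{
    // NX write fails if the key already exists -> id is taken.
    if (!await db.StringSetAsync($"player:id:{id}", name, when: When.NotExists))
        return false;

    // Second unique field: the name index.
    if (!await db.StringSetAsync($"player:name:{name}", id, when: When.NotExists))
    {
        // Name is taken - roll back the id reservation.
        await db.KeyDeleteAsync($"player:id:{id}");
        return false;
    }
    return true;
}
```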

The battle engine is a concept. The calculations are very simplified. BattleEngine is an example of integration, not of a battle engine itself.

I haven't had time to implement proper security, so instead of JWT the API is secured by a constant security token.

Also, the infrastructure is not prepared for real-time battle updates on the clients - they need to poll. Properly, this should be solved with the SignalR ASP.NET library, which uses WebSockets internally.
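A sketch of how that would look with SignalR; the hub, group, and method names are hypothetical:

```csharp
// Sketch: pushing battle updates over SignalR instead of client polling.
using Microsoft.AspNetCore.SignalR;

public class BattleHub : Hub
{
    // A client subscribes to updates for one battle.
    public Task JoinBattle(string battleId) =>
        Groups.AddToGroupAsync(Context.ConnectionId, $"battle:{battleId}");
}

// Somewhere in the battle processing pipeline, push instead of waiting for a poll:
public class BattleNotifier(IHubContext<BattleHub> hub)
{
    public Task PushRoundAsync(string battleId, string roundLogJson) =>
        hub.Clients.Group($"battle:{battleId}").SendAsync("roundUpdate", roundLogJson);
}
```

SignalR also falls back from WebSockets to server-sent events or long polling automatically, so the worst case degrades to roughly what the current polling does anyway.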

Testing and running

See Testing.md

Time table

18.03
9:00-9:20 - starting to write a mind log; first thoughts about registration.
14:15-14:55 - adding in-memory DB to registration; starting to investigate the game server (battle engine) and leaderboards.

21.03
13:50-16:50 - choosing C#, installing Redis, setting up the main C# project and dependencies, choosing the correct version of Swagger.
18:10-20:10 - choosing where to use Protobuf and where JSON, splitting Update/Create player methods, adding the unique-name check.

22.03
15:15-21:45 - checking HTTP/3, checking how to run the app locally, implementing the battle engine, writing documentation.

Total hours: 12.5