
Paxterya is a close-knit and dedicated Minecraft community playing on a Survival server. What we most enjoy is the freedom Minecraft has to offer. You can do anything on Paxterya: start big projects with others, construct a town, or go far away to found your very own empire. Let's be creative together!

Features

  • Minecraft 1.15.2 (Java Edition)
  • Vanilla Gameplay
  • Whitelist (Apply here!)
  • YouTube Channel (visit)
  • Current season since Dec 2019

Game tweaks

  • Dynmap
  • CoreProtect
  • One Player Sleep
  • No Phantoms
  • Coordinates HUD

How do I join?

As Paxterya has a whitelist, every player has to apply before being able to join our server. The process is quick though! To join the Paxterya community, follow these steps:

  • Fill out our application form

You will be asked to authenticate with your Discord account. No worries, we don’t get access to your account. This is just to match our members with their Discord accounts.

  • Join our Discord server

If you’re already on our Discord when your application gets accepted, you will be pinged by our bot. Otherwise you’ll be notified via email.

  • Wait for the answer

    If your application is successful, you will be whitelisted instantly. See you soon!

Our new Hardware (and software)

2020-07-09. Author: TxT

While I tried to write everything in a reasonably understandable way, I have probably failed in places. If you want to know more or didn't understand something, please ask in #technology on Discord or DM me (TxT#0001).

Our old server

Before writing about the new, I should probably provide some context by talking about the old.
For about ten months now we have been using a dedicated server with moderately powerful internals. The processor is an Intel Core i7-6700, which is now four generations old but was pretty top of the line in terms of gaming performance when it launched back in 2015. For memory we have 64GB of DDR4 RAM, which is more than plentiful for running "only" a Minecraft server.
Storage is also relatively important, and our two 512GB NVMe SSDs are plenty fast on paper, although they seemed to have some problems and showed signs of aging. Most servers use multiple storage devices together in a RAID. That just means you pool storage devices together; in our case, with RAID 1, one drive could have died and you wouldn't even have noticed it. The downside is that this costs one drive's worth of storage that you cannot use.
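To make that a bit more concrete: on Linux, a software RAID 1 mirror can be built with mdadm. This is only an illustration (device and array names are examples), not necessarily how our hosting company set things up:

    # Mirror two NVMe drives into a single RAID 1 array (device names are examples)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

    # Check the array's health — a dead drive shows up as [U_] instead of [UU]
    cat /proc/mdstat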
In summary, this was a fine server for its price, but big farms, tons of villagers and a lot of sheep in the spawn chunks demanded more potent hardware. I also had to get creative to reduce the storage needs of our Dynmap (at some point it used 200GB of storage).
We also had two more so-called "cloud" servers. These are small virtualized servers running on a bigger one. This allows a hosting company to split a single powerful server into many smaller slices that they can sell at a much lower price.
Our website and Discord bot ran on one of these, and all the performance monitoring ran on the other.

Lessons learned

It is always important to reflect on what you have done and look at the past to find things you did wrong or not optimally. I don't think anything about the hardware choices was wrong; we simply outgrew the server, which is natural and a good thing.
Well, that leaves the software side. We had one fairly powerful server, yet we also rented two more cloud servers for seemingly no real reason. Was the main dedicated server too weak? No. Would anyone have noticed any slowdown if everything had been centralized? No.
Why did you do it that way then? Well, running a lot of different pieces of software together like that poses a handful of challenges:
First of all, there is the security and stability angle. You don't want software A to be able to access the files of software B. Say software A has a newly discovered security vulnerability that allows random hackers to access any files software A can reach. If everything is shielded off, that's unfortunate but not a problem of "nuclear" scale. The same issue can also be caused by bugs and so on.
The second thing to consider is performance. Databases in particular are known for using a lot of RAM for things like caching. That means the database doesn't need to read everything from the slower disks all the time; instead it can get its data from the much faster memory. This is all well and good until you need to run multiple different databases that take all of the system's memory hostage for their caching needs.
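As a hypothetical illustration with MySQL/MariaDB: the main cache is the InnoDB buffer pool, and you can cap it in the database's config so several databases can share one machine. The value here is made up, not our actual setting:

    # my.cnf — cap the database's main cache so several databases can coexist
    # (2G is an illustrative value)
    [mysqld]
    innodb_buffer_pool_size = 2G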
The third reason also has its place, but hasn't been an issue for us yet: dependencies. Let's say software A needs software C installed to function. Software B also needs software C. All fine and good, until A and B need different versions of C. But C doesn't like to be installed twice...
Good solutions to these problems (and I probably forgot a few more) exist, and I will discuss our new approach in the "New software" section. Limited knowledge on my part meant that the best solution at the time was simply running three servers.
You might ask what the problem with running three servers instead of one is. The answer is simple: work. Say there is a new vulnerability in sshd (the piece of software that lets me connect to and manage the servers remotely) and I want to apply the security update. With three servers I have to go through the process three times; with one server it only needs to be done once. That is obviously one example of many. Another would be the number of passwords to "remember".
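To give you an idea of the "work" part, here is what a single sshd update looks like when it has to be repeated per server (Debian-style commands, made-up hostnames):

    # One server: one command
    ssh server1 'sudo apt update && sudo apt install --only-upgrade openssh-server'

    # Three servers: the same thing three times (and three chances to forget one)
    for host in server1 server2 server3; do
        ssh "$host" 'sudo apt update && sudo apt install --only-upgrade openssh-server'
    done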

New hardware

Okay, now let's finally leave the boring past behind us and talk about the new hardware. For the processor we went with a system equipped with an AMD Ryzen 9 3900. It not only has three times as many cores as our old i7, it also has more performance per core. The single-core performance bump is what will hopefully help Minecraft run faster. The tripling of cores is what is going to allow us to run multiple servers simultaneously (like our new creative server).
Memory was plentiful even on our old server, but our new one comes with a sweet doubling to 128GB. That much memory has a nice advantage though: caching. I talked about database caching earlier; it turns out Linux (the only real server operating system ;) ) also caches everything read from disk. So having enough memory will help with things like restart and save times of game servers.
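You can actually watch Linux do this yourself: the free command shows how much memory is currently being used as disk cache.

    # The "buff/cache" column is memory Linux currently uses to cache disk data;
    # it is handed back to programs automatically when they need it
    free -h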
Our storage is probably the biggest upgrade. It is almost four times as large, with two 1.92TB (again blazing fast) NVMe SSDs. We can only use 1.92TB of that because of RAID 1, but it means that one SSD could die and the only one who would notice is me.
We also still have one cloud server because that was just the easiest route to get our new gameserver hosting software going. More on that in the following section.
All in all we made a pretty solid deal, paying about twice as much for 3x the CPU cores, 2x the memory and 4x the storage capacity.

New software

The "Lesson learned" section goes into a lot of detail about separating software for various benefits. Now let's talk about the solution.
Well, it's not that easy. There are actually two solutions: an older one and one that's fairly new. The old one is called the virtual machine. Virtual machines date back to the late 1960s and are described by Wikipedia as follows: "In computing, a virtual machine (VM) is an emulation of a computer system. Virtual machines are based on computer architectures and provide functionality of a physical computer."
Okay great, but what does that mean? A virtual machine is basically a program running on a physical computer or server, and inside that "program" one or more fully operational operating systems run. None of the hardware of a VM actually exists; every network adapter and every storage disk is emulated in software. Of course there are many optimizations: code running inside the VM can actually be executed directly by the processor without any kind of translation, and memory is also accessed directly, yet kept separated between VMs and the physical host system. You might see some virtualization-related options in your computer's BIOS settings; these enable things like direct hardware access while keeping everything nicely separated.
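Speaking of those BIOS options: on Linux you can check whether your CPU offers these virtualization extensions with a one-liner:

    # Prints one line per CPU thread if virtualization extensions are present
    # (vmx = Intel VT-x, svm = AMD-V)
    grep -E 'vmx|svm' /proc/cpuinfo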
That separation is why VMs are great and still widely used today. Every VM running on the physical host runs its own full operating system (including the kernel, the bit of code closest to the hardware in an operating system). This carries a huge benefit: you can use different operating systems on one physical computer/server. It allows you to try out Linux or even macOS on your Windows PC without needing to worry about losing any data. The strong separation also lets you try out viruses inside a VM without them affecting anything else (you still need to be careful that there is absolutely no connection like networking!!).
If it sounds to you like that's a massive amount of overhead when you don't actually need such strong separation, then you're absolutely right. The overhead isn't that huge, but managing multiple VMs can be a lot of work when you could have used something leaner instead.
That leaner technology has many names; "containers" is probably the best one. Many people also refer to it as "Docker" or "Docker containers" though, because Docker is the specific piece of extra software that made containers as popular as they are today.
Containers are enabled by a feature in the Linux kernel (only one of many reasons why Linux is awesome) called namespaces. This means that all running containers and the host share the same kernel. Software running in containers is separated off simply by being in a different namespace, with limited access to processes running in other namespaces. Containers also only have very limited access to the filesystem of the host.
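You don't even need Docker to play with namespaces. The unshare tool from util-linux starts a process in fresh namespaces; inside, the rest of the system's processes are invisible:

    # Start a shell in its own PID namespace (needs root)
    sudo unshare --pid --fork --mount-proc bash

    # Inside that shell the rest of the system is invisible:
    ps aux   # shows only this bash and ps itself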
Sharing the kernel like this eliminates the overhead caused by virtualization. The real advantage and power of containers lies in more modern principles. Containers should be stateless, meaning they should not contain any data. Data should be stored in special volumes that can outlive a container, in a database, or directly on the host filesystem. This allows you to delete and rebuild a container without losing any data.
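A quick sketch of that idea with Docker volumes (volume and image names are placeholders, not our setup):

    # Create a named volume and start a container that keeps its data there
    # ("some-minecraft-image" is a placeholder, not a real image name)
    docker volume create mc-world
    docker run -d --name mc -v mc-world:/data some-minecraft-image

    # Delete and rebuild the container — the world data in the volume survives
    docker rm -f mc
    docker run -d --name mc -v mc-world:/data some-minecraft-image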
Installing new software is as easy as copying the contents of a special docker-compose.yml file to your host, modifying a few bits to suit your needs and then running the command docker-compose up -d. After leaning back for a short while as Docker downloads everything, you are left with a new application running. This application can consist of multiple containers, volumes, shared network ports, private networks for the containers to communicate with each other, and so much more. In the old days you would have needed to install the operating system, the dependencies, the software itself, and then configure all the dependencies so they work with the software you actually want to use.
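For example, a minimal docker-compose.yml for a database could look like this (a generic sketch with made-up values, not our actual setup):

    version: "3"
    services:
      db:
        image: mariadb:10.4
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: change-me    # placeholder, obviously
        volumes:
          - db-data:/var/lib/mysql          # data lives in a volume, not the container
    volumes:
      db-data:

One docker-compose up -d later, the database is running.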
Okay, so containers are great, but do you know what's even greater? Images! Not the kind of images you take with your phone; I mean the images that represent the "hard disk" storing all the software components of a container. In fact, a container is just the running state of an image. Images in Docker are simply amazing because they have layers. Your first layer might be a Linux distro like Alpine, which is specifically made for running in containers. The second one could contain all the dependencies you need, and the third the actual application you want to run. Now, when updating your application, the layers of the OS and dependencies can stay the same. When you then want to deploy the new image on your server, you only need to download and store the new application layer. When you create a container based on an image, a new layer gets created that contains all the changes the container makes while running (like writing log files).
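In Dockerfile terms, such a layering looks roughly like this (a generic sketch, not our actual Dockerfile):

    # Layer 1: the base OS — shared by lots of images, rarely re-downloaded
    FROM alpine:3.12

    # Layer 2: the dependencies — only rebuilt when the dependencies change
    RUN apk add --no-cache nodejs npm

    # Layer 3: the application itself — usually the only layer that changes
    COPY . /app
    CMD ["node", "/app/index.js"]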
I could probably fill many more overly long blog posts with talk about containers and docker, but I will stop now.

We went down the Docker path for our server. As of right now we are running seven containers: databases, monitoring tools, the creative Minecraft server (and soon the SMP), and also our website.
For this to work I had to create my own Docker image for the website/Discord bot.
As a programmer you write code. Storing that code only on your computer in plain files is a horrible idea, because what do you do when you want to share your code with others? And, even worse: what happens when others have changes for you? How do you incorporate those into the codebase? What do you do when you both changed some of the same parts of the code? You cry. And then you start using git.
Git is a version control system that lets you store code on a central server; every developer can then check out the code, work on it locally and, when done, push the changes back to the central server. To make things easier there is a neat feature called branches. Different branches don't affect each other, so you can implement new features in one branch that might break stuff short-term, while fixing important bugs in your main branch. When you're done implementing the feature, you just merge the changes back into the master branch. While doing that you might have to resolve some merge conflicts, where different changes from both branches collide.
You could just run a git server yourself, but there are code-collaboration platforms built on top of git with more features like issue tracking. One of these is GitHub, and we used that for almost a year before switching to GitLab a few weeks ago. I really love GitLab so far, as it has some features GitHub lacks (like a dark mode!).
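The day-to-day branch workflow looks roughly like this:

    # Create a branch for the new feature; master stays untouched
    git checkout -b new-feature

    # ...edit code, then record and share the changes
    git add .
    git commit -m "Implement the new feature"
    git push origin new-feature

    # When the feature is done, bring it back into master
    git checkout master
    git merge new-feature   # resolve any merge conflicts here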
One of the things GitLab is pretty good at is doing things with your code automatically (called CI/CD). GitHub has now also expanded in that regard, but whatever. Why am I telling you all this? Well, I had the challenge of needing to build my own Docker image, and I just want to brag about how easily I did that with GitLab. Every time I push code to the main "master" branch, GitLab automagically builds a new image based on a Dockerfile that's also part of the codebase. The Dockerfile just gives build instructions: which files should be put where, which commands need to be run to install all the necessary stuff, etc. No more than five minutes after pushing code to master, I can pull the changes on the server to update the image, send another command to rebuild the running container, and everything is up to date. I can even see the status of the image build right inside my programming tool (IDE), Visual Studio Code.
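That automation lives in a .gitlab-ci.yml file in the repository. I won't paste ours, but the core of a "build an image on every push to master" job looks roughly like this, using GitLab's built-in CI variables and container registry:

    build-image:
      image: docker:19.03
      services:
        - docker:19.03-dind   # docker-in-docker, so the job can run docker commands
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        - docker build -t "$CI_REGISTRY_IMAGE:latest" .
        - docker push "$CI_REGISTRY_IMAGE:latest"
      only:
        - master              # only build images from the master branch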

Okay, I think this section is already waaaay too looooong, so I'mma make the last thing quick (I promise).
That would be the software to run the Minecraft server(s). Sure, I could just use some Minecraft server container and call it a day, but that's hard to manage. In the past we used Multicraft, which you might know from renting a Minecraft server at certain hosting companies. Multicraft is nice but basically impossible to run inside a container. Well, it is possible, but then all Minecraft servers would run inside a single container, which is not ideal. Instead I chose Pterodactyl, which not only looks more modern but also has a better feature set, letting you do things like limit memory, CPU usage and storage space, and even prioritize storage access for some containers. Pterodactyl also goes beyond just running Minecraft servers; the list of supported games is long and growing.
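Those limits work because Pterodactyl runs each game server in its own Docker container, and Docker exposes exactly such knobs. The raw command-line equivalent looks like this (values and image name are illustrative):

    # Cap a container at 4GB of RAM and 2 CPU cores, and lower its disk priority
    # ("some-gameserver-image" is a placeholder)
    docker run -d --memory=4g --cpus=2 --blkio-weight=300 some-gameserver-image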
That's it for this section. If you want to know more about what Pterodactyl can do, just hit me up on Discord and I will write a follow-up in the future, once I've gained more experience.

The future

An outlook into the future is also fun. It is also something that can age horribly, but hey, let me give it a try: "Our current hardware will be plenty for at least a year, probably two." -TxT, 2020


Well, I had a lot of fun setting everything up, installing and configuring everything, and writing this post. If you have any questions about literally anything related to the new server, just ask in our Discord in #technology or shoot me a DM (TxT#0001).
Also let me know if you want more behind-the-scenes posts like this one; I could imagine that a post about the inner workings of our website, Discord bot or Minecraft plugin would be interesting to some.
Cheers ~TxT