Minecraft Docker Container
Sat 03 August 2024

I started on this process about 5 months ago, got it barely working, then gave up because we stopped playing Minecraft. The biggest problem was that I didn't use a volume, so I couldn't save state. I recall thinking volumes would be easy to implement, but I never went through the effort. I also didn't have a docker-compose file. It was a prototype, so I just jotted down the docker run --flags I was using in case I wanted to use it more.
Fast forward to today, and I'm looking for excuses to use Docker again, and in more typical, reliable, and robust ways. Gonna implement volumes, gonna make a compose file. It's gonna be great, and it's not gonna suck at all.
First thing is to look at the images, and do a bit of pruning. I had some dangling images with no reference and no tag, so I ran docker image prune and then docker image rm 'list of image ids' to get rid of some. Next, the Dockerfile.
The Dockerfile
I double checked what my Dockerfile specified from last time I worked on this:
FROM openjdk:17-oracle
WORKDIR /minecraft_server
COPY . .
CMD java -Xmx2048M -Xms2048M -jar minecraft_server_1.20.4.jar nogui
Pretty simple. No volume, just a copy of the save file from the working directory into the container, then the java command to run the jar with 2 gigs of memory and no GUI. I didn't change anything here yet. I ran docker build to make an image, then looked at how to turn that image into a container.
How to turn an image into a container: run and compose
Then, the docker run invocation I'd use is docker run --name mc-server-container -p 25565:25565 minecraft-server. Updating it to run as a daemon and use a volume, it becomes docker run --name mc-server-container -p 25565:25565 -d -v /rhrgrtopia:/rhrgrtopia minecraft-server. Now that it works, let's turn it into a docker-compose.yaml!
services:
  app:
    build: .
    container_name: mc-server-container
    command: java -Xmx2048M -Xms2048M -jar minecraft_server_1.20.4.jar nogui
    ports:
      - 25565:25565
    volumes:
      - /rhrgrtopia:/rhrgrtopia
The flags map 1:1 with keys in the yaml file; we're building in pwd, naming the container, specifying ports and volumes, and running a command. The more experienced among you will notice a problem in my volumes declaration there that I spent several hours tracking down and fixing! But for now, it works!*
*it does not work.
The old approach, and adding additional needed functionality:
Compare that to the shell script I use to launch the server bare metal:
#!/bin/bash
OLDIP=$(</home/rhrgrt/minecraft_server/old_ip.txt)
MYIP=$(dig @resolver4.opendns.com myip.opendns.com +short)
if [ "$OLDIP" != "$MYIP" ]
then
    # -G sends the -d fields as query parameters; double quotes around the
    # password field so the $(<secrets.cfg) substitution actually expands
    # (single quotes would send it literally)
    curl -G -d 'host=minecraft' -d 'domain=blue-industries.net' -d "password=$(<secrets.cfg)" -d "ip=$MYIP" https://dynamicdns.park-your-domain.com/update
fi
echo "$MYIP"
echo "$OLDIP"
echo "$MYIP" > /home/rhrgrt/minecraft_server/old_ip.txt
java -Xmx2048M -Xms2048M -jar minecraft_server_1.20.4.jar nogui
You can see there's some more functionality I need to get working. I'm just some person, so I don't have a static IP. I use a bash script to check that my IP is still what the DNS record thinks it should be, update the record if it isn't, save the new IP for later, and print the relevant info to stdout. I'll have to find a way to get that functionality into the container, too.
Let me test removing the java execution at the end, so it's just an IP-updater script, and changing the compose command into a run-the-script-then-java line. I'll have to rebuild the image, and put the expected old_ip.txt file in the volume instead of the root directory.
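The state-handling part of that updater can be sketched as a plain shell function, decoupled from dig and curl so the logic is visible on its own. The function name and messages here are my own invention; the real script would feed it the dig result and do the curl where the comment sits:

```shell
#!/bin/sh
# Sketch of the IP-updater's state logic. update_ip_state is a hypothetical
# helper, not part of the actual script.
update_ip_state() {
    state_file=$1   # e.g. old_ip.txt inside the mounted volume
    current_ip=$2   # in production: $(dig @resolver4.opendns.com myip.opendns.com +short)
    old_ip=$(cat "$state_file" 2>/dev/null)
    if [ "$old_ip" != "$current_ip" ]; then
        # production: curl the dynamicdns.park-your-domain.com update URL here
        echo "IP changed: ${old_ip:-none} -> $current_ip"
    fi
    # remember the current IP for the next run
    echo "$current_ip" > "$state_file"
}
```

Called twice with the same IP, it only reports (and would only hit the DDNS endpoint) the first time; the state file living in the volume is what lets this survive container restarts.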
The container doesn't know about dig. I tried adding a COPY directive to put /bin/dig into a bin subdirectory, then specifying that path in the ip_updater.sh script. That seemed to work, i.e., running docker-compose up produced ip_updater output, but no server was running, and the container exited with code 0. Turns out, just setting sh ip_updater.sh && java -Xmx2048M -Xms2048M -jar minecraft_server_1.20.4.jar nogui as the command doesn't work: compose doesn't run the command string through a shell, so the && and everything after it get passed along as literal arguments to the first command instead of chaining. Wrapping the whole thing in sh -c "do this && that" was necessary.
sh -c "sh ip_updater.sh && java -Xmx2048M -Xms2048M -jar minecraft_server_1.20.4.jar nogui"
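In the compose file, that wrapped command slots in under the command key; a sketch, with the rest of the file unchanged from the earlier version:

```yaml
services:
  app:
    build: .
    container_name: mc-server-container
    command: sh -c "sh ip_updater.sh && java -Xmx2048M -Xms2048M -jar minecraft_server_1.20.4.jar nogui"
```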
Getting volumes working
Oh, whoops, this is a full-on nightmare I blazed past. Because my save data lived in the working directory that COPY . . bakes into the image, the volume wasn't doing what I expected, and I didn't find out until recreating the image for an update. Cool! How do volumes actually work, though? The situation:
- I have a save directory at ~/minecraft_server/rhrgrtopia
- I want a new docker image that loads that save file
- Containers which run should make additions to that save file
- Save file changes should be accessible through the host machine
I eventually figured out, through reading enough articles and docs, that the volume directive here has 3 parts, delimited by colons: /path/on/host:/path/in/container:access_mode (the mode being ro or rw). The paths are absolute, meaning '/rhrgrtopia:/rhrgrtopia' says "map the directory rhrgrtopia in the host's filesystem root to a directory called rhrgrtopia in the container's root". I did not want this. So I changed the first part to /home/rhrgrt/minecraft_server/rhrgrtopia.
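Since the format is just colon-delimited text, plain shell string-splitting can illustrate how the three parts come apart (the variable names are mine, not anything Docker defines):

```shell
# Split a volume spec the way Docker reads it:
# /path/on/host:/path/in/container:access_mode
spec="/home/rhrgrt/minecraft_server/rhrgrtopia:/minecraft_server/rhrgrtopia:rw"
host_path=${spec%%:*}       # everything before the first colon
mode=${spec##*:}            # everything after the last colon
rest=${spec#*:}             # strip the host part...
container_path=${rest%:*}   # ...then strip the mode
echo "host=$host_path"
echo "container=$container_path"
echo "mode=$mode"
```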
Copying some things over and running docker exec -ti mc-server-container sh, then checking the container's filesystem shows the save directory's absolute path ends up being /minecraft_server/rhrgrtopia. This means changing the volumes directive in the docker-compose.yaml to
volumes:
- /home/rhrgrt/minecraft_server/rhrgrtopia:/minecraft_server/rhrgrtopia:rw
You can see the entity on the left is an existing directory on the host, and the entity on the right is the mount point inside the container. Since the minecraft server jar loads the world from the directory named by level-name in server.properties, relative to its own working directory, that puts the save in the right spot relative to the jar.
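That level-name lookup can be simulated with a throwaway stand-in server.properties (the file contents here are a minimal fabricated example, not my real config):

```shell
# The jar reads level-name from server.properties and loads the world from
# <working dir>/<level-name>; mimic that lookup against a temp file.
dir=$(mktemp -d)
printf 'level-name=rhrgrtopia\n' > "$dir/server.properties"
level=$(sed -n 's/^level-name=//p' "$dir/server.properties")
echo "world dir: $dir/$level"
```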
So, cool, I finally got the docker container to load from and save to a host directory!
The next disaster: permissions
Once I got it running once (wait, what?) I ran into the Docker problem where, since the container runs everything as root internally, the save files and logs come out of the container owned by root. This means the next build/launch isn't allowed, since the stuff the container just changed isn't owned by my user. Thanks, Docker! Now I have to learn to use best practices instead of defaults.
Changing the docker-compose file to use my user (10 different attempts based on 10 different articles, blog posts, and stackexchange answers) resulted in lack of permissions to find or even run the java jar, along with an instant container exit preventing me from checking ownership of the container's contents.
I eventually found the officially recommended RUN groupadd && useradd pattern for Dockerfiles, but even with the USER directive at the top of the Dockerfile, some things still ended up owned by root. I had to add the group and user, chown the copied files, and only then set the user. I tested this by changing the docker-compose.yaml command to just ls -alR /, to see who owned which dirs and files, since the containers weren't staying alive long enough for a docker exec -ti mc-server-container sh.
Dockerfile contents:
RUN groupadd -g 1000 rhrgrt && \
useradd -m -u 1000 -g rhrgrt rhrgrt
[then some directory and file copying]
RUN chown -R rhrgrt:rhrgrt .
USER rhrgrt:rhrgrt
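Assembled, the whole Dockerfile presumably reads something like this; the FROM/WORKDIR/COPY/CMD lines are the ones from the original file, and the exact ordering of the middle section is my reconstruction:

```dockerfile
FROM openjdk:17-oracle

# mirror the host user/group so files written to the volume stay accessible
RUN groupadd -g 1000 rhrgrt && \
    useradd -m -u 1000 -g rhrgrt rhrgrt

WORKDIR /minecraft_server
COPY . .

# hand ownership of everything we copied to that user, then switch to it
RUN chown -R rhrgrt:rhrgrt .
USER rhrgrt:rhrgrt

CMD java -Xmx2048M -Xms2048M -jar minecraft_server_1.20.4.jar nogui
```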
This finally had the effect of bringing all the necessary files into the container and creating a user and group that mirror the host's (both IDs 1000, as reported by id -u and id -g in a host terminal as my usual user; I'm rhrgrt, yes). Testing yields a docker-compose up that doesn't fail for lack of access to the container's contents like earlier, and doesn't leave the save files owned by root like earlier-earlier.
And with that, holy shit, I'm done. I can run docker compose up and it'll run, while saving files to the correct directory on disk. Then, I can run the original shell script and that works to load the containerized save data and pick up where I left off. Then if I run docker compose up, it'll load the save from the shell script. Full circle!
Blu Blog