Intro
I got a Synology DS1621+. I had been thinking this over for the past 2-3 years (though probably longer if I'm being honest). I loved my previous frankenstein storage server. But I was unreliable. I would start some project, break the server for 3-6 months, and then end up abandoning the project for something else anyway. I'm finally trying a boring option for storage, and it seems to be working. Anything exciting, or that I'm not ready to call "production", is done in a VM running k8s (rke2 to be specific), so I get the best of both worlds. Also, with nightly VM snapshots and backups, if I do break something I can just revert and fix it at my convenience.
I am fully utilizing the features of the NAS as well as some additional things I don't think are super common, so I figured I'd write about them.
Options
I toyed around with a few different options.
- build a "production" grade storage server myself, using TrueNAS SCALE or a plain Linux distro
- build a dedicated storage server from the parts I have lying around, using TrueNAS SCALE or a plain Linux distro
- buy a QNAP/Synology NAS
As hinted at earlier, reliability was the deciding factor. Not necessarily hardware/software reliability, but my own. It will be interesting to see how the Synology holds up. Also, I realize part of the reason I kept breaking (and not immediately fixing) the previous storage server was boredom. I wasn't really doing anything new on the storage side; I was just changing things like distros or adding too many VMs while trying out new k8s stuff. So managing the hardware and config was holding me back from new, interesting challenges. I hope the Synology is set-and-forget outside of monthly or quarterly updates.
Perhaps I should also note before I ultimately decided to get the Synology, I tried and failed to migrate back to the old 12-bay Supermicro chassis. It just wasn't fun trying to troubleshoot issues while hunched over in the crawlspace.
The Build Spec
- DS1621+ - $900
- 2x 16GB RAM or similar, need to confirm it doesn't require ECC RAM - $105
- 4x 6TB drives, playing by the rules and chose WD Red Plus drives from the compatibility list - $135 each
- 2x 500GB WD SN700 M.2 cache drives - $150
With this the unit is pretty much maxed out as far as performance goes. Spoiler alert: a 10GbE card may be in my future, as I can easily saturate 1GbE doing large transfers. I chose a unit with the AMD Ryzen V1500B with the intent of making this THE home server, not just storage.
Setup
Install was simple. Not really unexpected, but DSM oversimplifies/dumbs down certain things while not really explaining others. My existing knowledge of Linux and storage probably puts me outside their target market. The average user probably doesn't even know what copy-on-write (COW) is, nor do they care; that's the whole reason they bought an appliance. Once I figured out the language, things went pretty smoothly. But even taking the appliance approach of not caring how things work underneath, Synology did not do a good job of recommending things like volume layouts. Luckily I figured out fairly quickly that the SSD cache acts at the volume level and that everything, including iSCSI LUNs, can be hosted on the same volume.
I also tried to go all in on Synology apps, but I really only ended up using basic ones: Samba Directory Server, Hyper Backup, Virtual Machine Manager, and Snapshot Replication Manager.
Notes From Initial Configuration
Here are some notes I made during the initial configuration. These are the kinds of things people who are already familiar with the device won't even think about not knowing.
I went to install Virtual Machine Manager and it wants to install to a volume, but doesn't really explain why or what for. Should I create a dedicated volume for packages? Is the volume for storing VM images? Went ahead and installed it; then it failed configuring the Open vSwitch but didn't actually stop the configuration, so it was just stuck on a spinning progress bar... closing it and re-opening seemed to fix it.
When making an iSCSI LUN or a volume it doesn't provide an option to disable COW; does it do this automatically? It seems COW and CRC32 checksums correspond to the "Enable data checksum for advanced data integrity" option on shares.
It looks like there is no benefit to multiple volumes, so deleting the 2nd test volume and making 1 giant volume.
Going to install the Samba directory server but not 100% sure if I'll use it. Should I join my desktop to the domain? Kerberized NFS? I still would kind of prefer something like Seafile if it didn't have to be blobs behind the scenes... (spoiler: I am using the domain for a couple things, but relying on Nebula for encrypting NFS traffic)
enabled ssh (and the home dir service) though hopefully I don't need it. Also there's no option to limit logins to keys only, so I may end up disabling it again. (other than curiosity, I fortunately have not had a good reason to do anything via ssh yet)
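For anyone else bothered by the missing key-only toggle: the underlying OpenSSH config can be edited by hand, though this is not a Synology-supported setting and a DSM update may revert it. A sketch of the relevant directives, written to a temp file here just for illustration (on the NAS you would put them in /etc/ssh/sshd_config and restart the ssh service):

```shell
# Hypothetical workaround, NOT a supported DSM setting: on the NAS these
# lines would go in /etc/ssh/sshd_config (as root), and DSM updates may
# revert the edit. Writing to /tmp here purely for illustration.
cat > /tmp/sshd_keyonly.conf <<'EOF'
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
EOF
cat /tmp/sshd_keyonly.conf
```

The same caveat applies to anything hand-edited on DSM: treat it as disposable and keep a note of the change somewhere else.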
should probably be doing more of this config as code, but hopefully most of it is set and forget (there are some REST API guides and a Python client, but the result is really disappointing)
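For reference, the DSM web API is reachable with plain HTTP calls even without a client library. A minimal login sketch using the documented SYNO.API.Auth endpoint; the host, account, and password below are placeholders, and the curl call itself is commented out since it needs a reachable NAS:

```shell
# DSM API login sketch. Host and credentials are placeholders; a real
# setup should use a dedicated API account, not admin, and avoid
# putting the password in a shell history / URL where possible.
NAS="nas.example.lan:5001"
URL="https://${NAS}/webapi/auth.cgi?api=SYNO.API.Auth&version=3&method=login&account=apiuser&passwd=secret&format=sid"
# curl -sk "$URL"   # on success returns JSON containing a session id ("sid")
echo "$URL"
```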
Synology Drive is a bit janky; I had to rebuild the deb as an rpm and connect using the IP, or else I'd get a really lame message about upgrading my server version. Actually, I guess that means the unofficial flatpak should work after all.
https://github.com/SynologyOpenSource/synology-csi - got it working, but it seems to require admin rights for the user, which is less than ideal. Oops, looks like I spoke too soon: there is a bug with the volume attacher (https://github.com/SynologyOpenSource/synology-csi/issues/2). Also, the documentation doesn't really have an upgrade procedure, so I guess I'll try deploying over the existing one and see what happens. It looks like simply re-running the deploy script works for updating.
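For anyone else trying this, the upgrade path I landed on is just the install path again. A sketch assuming the upstream repo layout at the time of writing (script and config paths may differ in newer versions):

```shell
# Sketch: installing/updating synology-csi by (re-)running its deploy
# script. Paths are from the upstream README at the time of writing and
# may have changed. client-info.yml holds the DSM address and account
# the driver uses to create LUNs.
git clone https://github.com/SynologyOpenSource/synology-csi.git
cd synology-csi
cp config/client-info-template.yml config/client-info.yml  # then fill in DSM details
./scripts/deploy.sh run   # installs on first run, re-applies manifests on later runs
```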
need to fix my packer template for ubuntu to work on synology (just need to fix networking really) and create an ansible playbook to deploy
The API clients look less complete than I initially expected. Looks like I might have a project there: I need to create an API client for at least Virtual Machine Manager, and hopefully also make an Ansible module for it.
Why are all snapshots managed in the snapshot replication manager? Just a little confusing.
Media Server and Video Station being separate doesn't really make sense; also, Video Station doesn't see any of the files, so I'll probably need to use Jellyfin anyway.
Hyper Backup can do client-side encryption, so it should hopefully be able to replace the borg/rclone combo. I must have copied the password wrong, but the encryption key file works to restore, at least.
Data Migration
I copied the data over NFS with rsync, primarily. I forgot about sparse files at first, so the space used on the Synology ended up a little higher. At some point I need to go through the old lab VM disks and truly delete the ones that don't matter. Nothing else really exciting here. I kept my directory layout the same as before. I did some tidying up, and there is always more of that to do, but for now I have plenty of storage so I don't really need to worry about how much I'm using.
- user directories
- media: including movies, pictures, music
- lab
- ISOs
- localbackups
I created a few AD groups for access to things like the media share, but with just 2 users that wasn't really necessary.
Conclusion
So far the Synology seems like a good buy. There were a few papercuts along the way, but that is to be expected with something like this. In the next part of this series I'll go through the k8s VM I'm using to provide the extra functionality I wanted to add without affecting the main unit.