My NAS was getting increasingly annoying.
It would give error messages about not being shut down properly after scheduled restarts.
Apps would sometimes work and sometimes not; I had to manually stop and restart my video library each time to make it work. It was slow, and it refused to do more than one thing at a time.
So, I finally started over: shunted all the data to external drives and set up the box from scratch.
Between the fresh install and me knowing from the get-go what I’m doing and how I want things, it’s running better than ever, better even than when I got it a few years back.
Interestingly, while it was offline and being set up, I found myself realising how integral it’s become to my day. So much stuff I went to do, only to discover I needed my box.
It was intended as a file backup and server, but so much has changed since then that I’ve grown used to having it here!
Still tempted to get an upgrade, maybe later this year if things work out well with the cash.
Wanted to share this with a community who can appreciate the feeling of having something working well!
It’s best if a NAS remains a dedicated NAS and nothing else; I would build a separate machine to tinker with, and share the appropriate folders on the NAS with whatever service(s) you’re running. That way, if you’re experimenting and fuck something up, it doesn’t take your data with it when it goes down.
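A minimal sketch of that split, assuming the NAS runs Linux and exports over NFS (the paths and 192.168.1.x addresses are made-up examples):

```
# On the NAS: export the media folder read-only to the tinker box,
# then reload the export table
echo '/srv/media 192.168.1.50(ro,all_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On the tinker box: mount the share where your services expect it
sudo mkdir -p /mnt/media
sudo mount -t nfs 192.168.1.10:/srv/media /mnt/media
```

Exporting read-only means even a misbehaving experiment on the tinker box can’t damage the originals.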
That’s what containers are for. Fucking up the container won’t fuck up the host. That was the best decision I’ve made in self-hosting. Even that one virtual machine feels weird and uncomfortably legacy now, but it needs to interact with hardware in a certain way that just won’t fully work with Docker.
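As an illustration of that isolation (Jellyfin is just an example service here, and the /mnt/media mount point is assumed from the NFS sketch above):

```
# Config lives in a named volume; media comes in read-only from the NAS mount.
# Deleting and recreating the container never touches the NAS data itself.
docker run -d --name jellyfin \
  --restart unless-stopped \
  -p 8096:8096 \
  -v jellyfin-config:/config \
  -v /mnt/media:/media:ro \
  jellyfin/jellyfin
```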
That’s what I’m doing. It’s incredibly unlikely that I’ll mess anything up on the host, and I can always reinstall if needed.
If your NAS has enough resources, the happy(ish) medium is to use your NAS as a hypervisor. The NAS can run on the bare hardware or in its own VM, and the containers can have their own VMs as needed.
Then you don’t have to take down your NAS when you need to reboot your containers’ VMs, and you get a little extra security separation between any externally facing services and any potentially sensitive data on the NAS.
There are performance trade-offs, but I tend to want to keep my NAS on more stable OS versions, while the other workloads can be more bleeding-edge/experimental as needed. It’s a good mix if you have the resources, and having a hypervisor to test VMs on is always useful.
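On Proxmox, for instance, that split might look something like this (the VM IDs, names, and sizes are placeholders, not a recommendation):

```
# One VM for the NAS OS, one for the Docker host
qm create 100 --name nas --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0
qm create 101 --name docker-host --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0

# Pass a physical data disk straight through to the NAS VM
# (the by-id path is a placeholder for the real disk serial)
qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```

Rebooting VM 101 then never interrupts the NAS, and vice versa.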
I’ve done this before. That’s why I have a Proxmox cluster separate from my NAS now.
On the other hand, having one home server that does it all has its advantages. I have a mini PC with an N100 processor and two HDD bays. It hosts my Docker containers and holds my data. As long as you install all the software on the internal drive and keep only the data on the HDDs in RAID, you should be pretty safe. I hope. So far I’ve managed not to fuck it up.
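For anyone curious what that split looks like, a rough sketch with mdadm (the device names and mount point are assumptions, and --create wipes the drives):

```
# Mirror the two data drives as RAID 1 (destructive: assumes blank disks)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /srv/data
sudo mount /dev/md0 /srv/data

# The OS and Docker stay on the internal drive; only /srv/data is on the
# mirror, so reinstalling the system never touches the mirrored data.
```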
I used to do that, but I have a bad habit of over-tinkering with the underlying system. Having Proxmox as a base where I can spin up VMs and LXC containers to fuck with to my heart’s content suits my situation far better. Plus, my entire cluster, NAS included, pulls 100-120 watts.
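For reference, spinning up one of those disposable LXC sandboxes on Proxmox is quick (the template name, container ID, and storage names are placeholders for whatever your node actually has):

```
# Throwaway Debian container to tinker in; if it breaks, destroy and recreate
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname sandbox --memory 2048 --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```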
This, plus it’s good to revisit your setups from time to time to audit and improve them.
Oh, I’m always doing something with it; it’s basically my winter hobby, haha. I’m currently building a “new” NAS out of an old HP ProLiant G2 case (from like 2002) and 7th-gen Intel hardware, to replace the current Mac mini/4-bay Sabrent DS-SC4B. Still gonna run OMV on the new NAS, because OMV is awesome; but the USB connection between the Mac and the drive station is cumbersome and risky.