Brother Printer using WLAN with Unifi APs

Having your printer continually fall off WiFi is the worst. Whenever you actually want to print something, lo and behold, you can’t, and you need to spend 2-20 minutes fiddling with it to get it back on the network. All mine ever needed was a restart to magically reconnect to WiFi, but this is always how I felt:

[Office Space printer-smashing GIF]

After enough frustration, I finally took some time to sit down and fix the problem. After a bit of searching, I stumbled upon this Brother article (granted, I’m printing from a Windows PC and my specific printer is a Brother MFC-L2750DW). That at least gave me some hope, as I was using a single SSID for both 5GHz and 2.4GHz – you know, like a sane person.

With the above article in hand, I created a new SSID that was 2.4GHz-only with the following settings (Unifi Controller 7.0.23):

New UI:

  • Broadcasting APs: I have it set to just the one AP closest to the printer
  • WiFi Band: 2.4GHz
  • WiFi Type: Standard
  • Multicast Management:
    • Multicast Enhancement: ▢
    • Multicast and Broadcast Control: ▢
  • Client Device Isolation: ▢
  • Proxy ARP: ▢
  • BSS Transition: ▢
  • UAPSD: ▢
  • Fast Roaming: ▢
  • 802.11 DTIM Period: Auto
  • Minimum Data Rate Control: Auto
  • Security Protocol: WPA2
  • PMF: Optional
  • Group Rekey Interval: ▣ 3600 seconds
  • Hide WiFi Name: ▣

Legacy UI:

  • Security: WPA Personal
  • WiFi Band: 2.4GHz
  • WPA3: ▢
  • Guest Policy: ▢
  • Broadcasting APs: I have it set to just the one AP closest to the printer
  • Multicast and Broadcast Filtering: ▢
  • Fast Roaming: ▢
  • Hide SSID: ▣
  • Group Rekey Interval: GTK rekeying every 3600 seconds
  • UAPSD: ▢
  • Multicast Enhancement: ▢
  • RADIUS DAS/DAC (CoA): ▢
  • Beacon Country: ▢
  • BSS Transition: ▢
  • TDLS Prohibit: ▢
  • Point to Point: ▢
  • P2P Cross Connect: ▢
  • Proxy ARP: ▢
  • L2 Isolation: ▢
  • Legacy Support: ▢
  • PMF: Optional
  • WPA Mode: WPA2 Only
  • DTIM Mode: Use Default Values
  • 2G Data Rate Control: ▣
    • 6Mbps
    • Disable CCK Rates: ▢
    • Also require clients to use rates at or above the specified value: ▢
    • Send beacons at 1Mbps: ▢

The printer has been online for over 20 days, whereas before it would fall off the network sometimes before it even fell asleep. 🎉🎉

Hopefully this helps someone else out there.


pfSense, FreeRADIUS and Unifi MAC-based VLAN tagging with a fallback VLAN

We may have had an issue with a young “midnight surfer” on the internet one night, and it has since taken me on a wild ride of VLANs, schedules, traffic shaping, RADIUS servers, and SSIDs. I’ll give an abbreviated version of the journey so you can relive the fun, but the important takeaway is how to do MAC-based port authentication on the switch while also doing it on the WLAN, with both using the same fallback VLAN.

TL;DR – A DEFAULT Accept auth-type that assigns a specific VLAN works for WLAN clients on Unifi APs, but does not work for MAC-based authentication on Unifi switches, regardless of whether you specify a fallback network in the switch configuration. Instead, you should use the fallback network in the switch config and scope the DEFAULT user to only authenticate devices on the APs via a huntgroup.

So, have the last user in your users config file (i.e. the fallback) look like the following:

DEFAULT Huntgroup-Name == "<huntgroupname>", Auth-Type := Accept
  Tunnel-Type = VLAN,
  Tunnel-Medium-Type = IEEE-802,
  Tunnel-Private-Group-ID = "<vlanID>"

Instead of:

DEFAULT Auth-Type := Accept
  Tunnel-Type = VLAN,
  Tunnel-Medium-Type = IEEE-802,
  Tunnel-Private-Group-ID = "<vlanID>"

Back to our “midnight surfer” – I woke up one night to some giggling and found that my son had decided to use an old phone we have to watch TikTok videos. I knew this day would come, but was surprised it had come so soon. Good thing I have all the technology required to lock this down!

My home networking consists of the following equipment:

Between the Qotom and the switch I have a 4-port link aggregation. Do I need 4Gbps between the router and the switch? Probably not, but I’m not using the ports anyway, so why not?! Additionally, all the APs have a wired uplink to the switch.

Iteration 1 of the setup was to create 4 VLANs (Trusted, Guest, IoT, and Kids), map them to different SSIDs, and manually specify the port VLAN on the switches – using a VLAN trunk for the wired APs and the link aggregation to the router. This setup was quick, easy, and worked! However, maintenance was a pain, as I now had 3 new SSIDs whose passwords I needed to track, and getting devices onto the new network(s) – including any future devices – was a pain. Additionally, I use a wired connection for my work machine, but I also plug my personal laptop into the same hub, which connects to the same port. Yeah, I could use one of the USW-Flex-Minis and swap the connection to the hub every time, but let’s be honest – that’s annoying. I knew there had to be a better way.

Lo and behold, there is – using a RADIUS server! Oh, and look at that, the incredibly powerful pfSense has a FreeRADIUS package!

The initial configuration was pretty simple for wireless:

  1. Add the network devices (switch & APs) as NAS clients with a shared secret (the same for all of them).
  2. Update the FreeRADIUS EAP-TTLS and EAP-PEAP configuration to use a tunneled reply, and do not disable weak EAP types, as that will cause switch-port MAC-based authentication to fail.
  3. Add a new RADIUS profile in the Unifi Controller that’s enabled for wired and wireless networks, and specify the pfSense server as the auth server.
  4. Edit the wireless network to use RADIUS MAC Authentication. P.S., I highly recommend using the aa:bb:cc:dd:ee:ff format, because you can easily copy/paste from the device info in the Unifi Controller. Note that in the new UI (as shown) the wireless network will still have a Network defined. However, if you revert to the old UI, it will show “RADIUS assigned VLAN”.
  5. Load up the list of users (i.e. the MAC addresses) in FreeRADIUS, putting them on whatever VLAN you want (which can also be blank!). The username and password are both the MAC address, in the format you specified in step 4 (see the sketch after this list).
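
For illustration, the resulting entries in the FreeRADIUS users file look something like the following – the MACs and VLAN IDs here are hypothetical, so substitute your own devices and networks:

aa:bb:cc:dd:ee:01 Cleartext-Password := "aa:bb:cc:dd:ee:01"
  Tunnel-Type = VLAN,
  Tunnel-Medium-Type = IEEE-802,
  Tunnel-Private-Group-ID = "20"

aa:bb:cc:dd:ee:02 Cleartext-Password := "aa:bb:cc:dd:ee:02"
  Tunnel-Type = VLAN,
  Tunnel-Medium-Type = IEEE-802,
  Tunnel-Private-Group-ID = "40"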

Unfortunately, there is no fallback network/VLAN that you can define in the Unifi Controller for wireless networks. That would have saved a lot of time later. However, you can define your own.

By default, if the user is not in the list, FreeRADIUS will send a REJECT answer. However, we can enable a fallback user by setting the username and password to blank, specifying the fallback VLAN ID, adding “DEFAULT Auth-Type := Accept” to the top of the entry, and ensuring this entry is always the last user in the list, as users are matched top-to-bottom.

After doing all that, I was able to move all my wireless clients back to the original SSID I had just moved them off of the previous weekend, and they still have the proper VLAN segregation. Woohoo!

Now, on to the switch ports – which was a multi-hour frustration. Granted, it was late, and there was beer involved.

  1. Assuming you enabled wired networks on the RADIUS profile, you should be able to visit the switch settings > Services, enable 802.1X Control, and select the previously created RADIUS profile and the Fallback VLAN (Network). If you’re using a default port profile (All), all ports will use the 802.1X Control of “force authorize” – i.e. it doesn’t really do anything with the auth, so there will be no impact. You’ll want to verify the port settings prior to enabling 802.1X Control to ensure you don’t lock yourself out before creating all the users in the RADIUS server.
  2. Load up the list of users (i.e. the MAC addresses) in FreeRADIUS, putting them on whatever VLAN you want (which can also be blank!). The username and password are both the MAC address, in the format AABBCCDDEEFF (see the sketch after this list).
  3. In the old Unifi Controller UI you can override profiles, so you need to change the individual port(s) to use “MAC-based” 802.1X control. Otherwise, you can create a new port profile and assign it to the port(s) in question.
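
The wired entries use the same users file syntax, just with the separator-free MAC format – again, the MAC and VLAN here are hypothetical:

AABBCCDDEEFF Cleartext-Password := "AABBCCDDEEFF"
  Tunnel-Type = VLAN,
  Tunnel-Medium-Type = IEEE-802,
  Tunnel-Private-Group-ID = "30"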

Assuming you’ve added users in the RADIUS server for every MAC address on the network, it’ll all just work! Unfortunately, any MAC addresses that are picked up by the DEFAULT rule added earlier will not authenticate on the Unifi switch. The RADIUS server correctly authenticates the unknown MAC address and responds with the correct VLAN (as seen in the FreeRADIUS logs), but the response message doesn’t contain all the same info, which is probably why the switch doesn’t accept it.

To fix the fallback, you need to scope the DEFAULT user config to only apply to your wireless APs. Once that is done, clients unknown to the RADIUS server coming from the switch will fail authentication, and the switch will use the Fallback VLAN you configured earlier in the switch config.

If you only have one AP, you can edit your DEFAULT user config directly, as seen in the code snippet below, replacing <IPAddress> with the IP address of your AP:

DEFAULT NAS-IP-Address == <IPAddress>, Auth-Type := Accept

For more than one AP, it’s easier to create a huntgroup so you can reference all APs at once.

  1. SSH into your pfSense box
  2. Edit the /usr/local/etc/raddb/huntgroups file and create a new huntgroup as in the example, but with the IP Address(es) of your APs.
# huntgroups    This file defines the `huntgroups' that you have. A
#               huntgroup is defined by specifying the IP address of
#               the NAS and possibly a port.
#
#               Matching is done while RADIUS scans the user file; if it
#               includes the selection criteria "Huntgroup-Name == XXX"
#               the huntgroup is looked up in this file to see if it
#               matches. There can be multiple definitions of the same
#               huntgroup; the first one that matches will be used.
#
#               This file can also be used to define restricted access
#               to certain huntgroups. The second and following lines
#               define the access restrictions (based on username and
#               UNIX usergroup) for the huntgroup.
#

#
# Our POP in Alphen a/d Rijn has 3 terminal servers. Create a Huntgroup-Name
# called Alphen that matches on all three terminal servers.
#
#alphen         NAS-IP-Address == 192.0.2.5
#alphen         NAS-IP-Address == 192.0.2.6
#alphen         NAS-IP-Address == 192.0.2.7
#
# My home configuration
<huntgroupName>             NAS-IP-Address == <IPAddress1>
<huntgroupName>             NAS-IP-Address == <IPAddress2>
<huntgroupName>             NAS-IP-Address == <IPAddress3>
  3. Update the DEFAULT user config by adding the <huntgroupName> to scope the DEFAULT rule, as shown in the code snippet below
DEFAULT Huntgroup-Name == "<huntgroupName>", Auth-Type := Accept

And…TADA! Now your wireless and wired devices all get tagged with an appropriate or fallback VLAN!

UPDATE: Grrr, after a FreeRADIUS update, it seems to have overwritten the huntgroups file. That made it super fun to have a fallback – it would be really nice if Unifi APs had a fallback VLAN by default.


Kubernetes ‘exec’ DNS failure – Updated

UPDATE: While the below definitely works, the correct way to do this is to properly add a DNS suffix. This should be set in your DHCP configuration if your nodes are getting their IP info from DHCP. If you’re using static IP addresses, you should run the following commands on each node. Replace <ifname> with the name of your network interface (e.g. eno1, eth0) and <domain.name> with the domain suffix you want appended.

# This change is immediate, but not persistent
sudo resolvectl domain <ifname> <domain.name>
# This makes it permanent
## Turns out, this sets the global search domain, but still fails
## echo "Domains=<domain.name>" | sudo cat /etc/systemd/resolved.conf -
## Netplan is what is setting the interface info, so be sure to edit its configuration
sudo sed -i 's|search: \[\]|search: \[ <domain.name> \]|' /etc/netplan/<netplan file>

From https://askubuntu.com/a/1211705
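
For reference, the relevant netplan section ends up looking roughly like this after the sed edit – a sketch using the same placeholders as above:

network:
  version: 2
  ethernets:
    <ifname>:
      nameservers:
        search: [ <domain.name> ]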


I have finally migrated all of my containers from my docker-ce server to Kubernetes (microk8s). The point was to be able to wipe the docker-ce server and turn it into a microk8s cluster – which has been done and was super easy!

However, after getting the cluster set up, I wasn’t able to exec into certain pods from a remote machine with kubectl. The error I was getting was:

Error from server: error dialing backend: dial tcp: lookup <node-name>: Temporary failure in name resolution

As I originally had only a single node, my kubectl config referenced the original node’s IP address directly. Additionally, I noticed that this error happened when the pod was located on a node other than the API server I was accessing. By changing my kube config API server to the node that hosted the pod, it then worked.

After a lot of playing with kube-dns and CoreDNS, it really came down to something easy/obvious: from one node, I couldn’t resolve the short name of the other node, and therefore node1 couldn’t proxy to node2 to run the exec.

While there are multiple ways I could have fixed this (and I did get the right DNS suffixes added to DHCP too), I ended up editing /etc/hosts on each node and ensuring there was an entry for the other node (sketch below). Tada, exec works across nodes now.
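
The /etc/hosts addition is as simple as it sounds – a sketch with a hypothetical node name and IP (use your actual node names as they appear in kubectl get nodes):

# /etc/hosts on node1
192.168.1.12    node2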

Using Kubernetes Ingress for non-K8 Backends

TL;DR – Make sure you name your ports when you create external endpoints.

In my home environment, I need a reverse proxy that serves all port 80 and 443 requests and can interface easily with Let’s Encrypt to ensure all those endpoints are secure. Originally I was using Docker and Jwilder’s nginx proxy to support all of this. As it’s just nginx, you can use it to send traffic to backends that aren’t in Docker pretty easily (like the few physical things that aren’t in Docker). However, I’ve been transitioning over to Kubernetes and need a similar way to have a single endpoint on those ports that all services can use.

Well, the good news is that the internet is awash with articles about this. However, after attempting to implement them, I was consistently getting 502 errors – no live upstreams. This was happening on an Ubuntu 20.04 LTS system running microk8s v1.19.5.

My original endpoint, service, and ingress configs were the following:

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
      - ip: <<IP>>
    ports:
      - port: <<PORT>>
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
    - name: https
      protocol: TCP
      port: <<PORT>>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"    
    cert-manager.io/cluster-issuer: letsencrypt-prod
    cert-manager.io/acme-challenge-type: http01
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - external.rebelpeon.com
    secretName: external-prod
  rules:                           
  - host: external.rebelpeon.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service
            port: 
              number: <<PORT>>

This yaml deployed successfully, but as mentioned did not work. With it deployed, when describing the Endpoint:

$ kubectl describe endpoints -n test
Name:         external-service
Namespace:    test
Labels:       <none>
Annotations:  <none>
Subsets:
  Addresses:          <<IP>>
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  443   TCP

Events:  <none>

When describing the service:

$ kubectl describe services -n test
Name:              external-service
Namespace:         test
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Families:       <none>
IP:                10.152.183.182
IPs:               <none>
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:
Session Affinity:  None
Events:            <none>

Wait a minute – the service lists the Endpoints as blank, neither <none> nor properly defined like the others. When I describe the endpoints of a working K8-managed service, I see that the port has a name, and that’s the only difference.

$ kubectl describe endpoints -n test
Name:         external-service
Namespace:    test
Labels:       <none>
Annotations:  <none>
Subsets:
  Addresses:          <<IP>>
  NotReadyAddresses:  <none>
  Ports:
    Name   Port  Protocol
    ----   ----  --------
    https  443   TCP

So, I changed my config to the following (a one-line change):

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service
subsets:
  - addresses:
      - ip: <<IP>>
    ports:
      - port: <<PORT>>
        protocol: TCP
        name: https
---
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  ports:
    - name: https
      protocol: TCP
      port: <<PORT>>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"    
    cert-manager.io/cluster-issuer: letsencrypt-prod
    cert-manager.io/acme-challenge-type: http01
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - external.rebelpeon.com
    secretName: external-prod
  rules:                           
  - host: external.rebelpeon.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service
            port: 
              number: <<PORT>>

And, tada, everything works! I can now access physical hosts outside of K8 via the K8 ingress! Sadly, it took about 4 hours of head-bashing to realize that…

Surface Keyboard going to Sleep

I’ve been fighting this for a while (as have a few others, based on some Google searches), and now that I have it resolved, I figured I’d post it here.

At a high level: I’ve had a Surface Ergonomic Keyboard for a while and absolutely love it. However, I recently upgraded from a Surface Pro 5 to a Surface Pro 7, and the keyboard keeps going to sleep – taking forever to wake back up. I’ve been on calls just hammering the Windows key to get it to wake up. Needless to say, it’s been super annoying; waiting 30 seconds or more for your keyboard to start responding again is not ideal for productivity (or sanity).

I’ve seen a few places say that I just need to turn off “Allow the computer to turn off this device to save power”. However, it took me a bit to figure out where. Turns out you can’t see the Power Management tab in the device hardware properties until you select Change settings. So, without further ado…

  1. Open Control Panel.
  2. Select View devices and printers (or, if your Control Panel lists all the icons, select Devices and Printers).
  3. Open the properties of the Ergonomic Keyboard and go to the Hardware tab.
  4. Select Bluetooth Low Energy GATT compliant HID device and select Properties.
  5. Click the Change settings button – tada, Power Management tab!
  6. On the Power Management tab, untick Allow the computer to turn off this device to save power and click the OK buttons until you are back at the Devices and Printers screen. Yay, now it doesn’t go to sleep!

If for some reason you still don’t see the Power Management tab, you can do the following actions:

  1. Launch the Registry Editor (press the Windows key and type “regedit”)
  2. Navigate to: Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power
  3. Select the entry called CsEnabled, or create it as a new DWORD (32-bit) Value (the command-line equivalent is shown after this list)
  4. Change the “Value data” to 0 (Base: Hexadecimal) and select OK
  5. Reboot your machine
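
If you’d rather skip the GUI, this should be the equivalent one-liner from an elevated Command Prompt (same key, value, and data as the steps above):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Power" /v CsEnabled /t REG_DWORD /d 0 /f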

WireGuard

I’ve been using OpenVPN for a few things, and I’ve been very interested in setting up WireGuard instead, as it has a lot less overhead and is less cumbersome than OpenVPN. Well, I finally took the plunge last night, and it was surprisingly easy after only a few missteps!

One of my use cases is to tunnel all traffic to the VPN server, so it appears as if my internet traffic originates from the VPN server. Here is how I set it up (with thanks to a few other articles).

On the Server (Ubuntu 18.04 LTS)

Install WireGuard on the server. I am running Ubuntu 18.04, so I had to add the repository first (see below).
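
For 18.04 that meant something like the following – a sketch assuming the wireguard PPA, which was the usual route at the time (newer Ubuntu releases ship WireGuard in the main repos):

sudo add-apt-repository ppa:wireguard/wireguard
sudo apt update
sudo apt install wireguard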

Move to the /etc/wireguard directory (you may need to sudo su)

Generate the public and private keys by running the following commands. This will create two files (privatekey and publickey) in /etc/wireguard so you can reference them while building out the config.

$ umask 077  # This makes sure credentials don't leak in a race condition.
$ wg genkey | tee privatekey | wg pubkey > publickey

Create the server config file (/etc/wireguard/wg0.conf). Things to note:

  1. The IP space used is specifically reserved as shared address space per RFC6598.
  2. I only care about IPv4. It is possible to add IPv6 addresses and routing capabilities to the configuration.
  3. For routing, my server’s local interface name is eth0.
  4. You can choose any port number for ListenPort, but note that it is UDP.
  5. Add as many peer sections as you have clients.
  6. Use the key in the privatekey file in place of <Server Private Key>. WireGuard doesn’t support file references at this time.
  7. We haven’t generated the client public keys yet, so those will be blank.
[Interface]
Address = 100.64.0.1/24
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = <Server Private Key>

[Peer]
PublicKey = <Client1 Public Key>
AllowedIPs = 100.64.0.2/32

[Peer]
PublicKey = <Client2 Public Key>
AllowedIPs = 100.64.0.3/32

Test the configuration with wg-quick

root@wg ~# wg-quick up wg0
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip address add 100.64.0.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0

Remove the interface with wg-quick

root@wg ~# wg-quick down wg0
[#] ip link delete dev wg0

Use systemd service to start the interface automatically at boot

systemctl start wg-quick@wg0
systemctl enable wg-quick@wg0

To forward traffic of the client through the server, we need to enable routing on the server

echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/wg.conf
sysctl --system

On the Client (Android)

  1. Install the WireGuard App from the Play store
  2. Open the app and create a new profile (click the +)
  3. Create from scratch (you could also import a pre-created config file) – see the equivalent config sketch after this list
    1. Give the interface a name
    2. Generate a private key
    3. Set the address to the address listed in the peer section of your server config – 100.64.0.2/32
    4. (Optionally) Set DNS servers, since your local DHCP-provided DNS will no longer work once all packets are encrypted and sent across the VPN
    5. Click Add Peer
      1. Enter the server’s public key
      2. Set Allowed IPs to 0.0.0.0/0 to send all traffic across the VPN
      3. Set the endpoint to the IP address you’ll access the server on, along with the port (i.e. <InternetIP/Name>:51820)
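
For reference, the config file equivalent of the settings above would look something like this sketch (the DNS server is a placeholder; the keys come from the steps above):

[Interface]
PrivateKey = <Client1 Private Key>
Address = 100.64.0.2/32
DNS = <DNS Server>

[Peer]
PublicKey = <Server Public Key>
AllowedIPs = 0.0.0.0/0
Endpoint = <InternetIP/Name>:51820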

Revisit the Server Config

Now that the client has a public key, you need to update /etc/wireguard/wg0.conf

[Peer]
PublicKey = <Client1 Public Key>
AllowedIPs = 100.64.0.2/32

Restart the WireGuard service

systemctl restart wg-quick@wg0 

Connect to the Server from the Client

Within the WireGuard app, enable the VPN.

You can validate by visiting ipleak.net to verify that traffic is going through the VPN.

Edge Beta to Stable

As you may know, the new Edge based on Chromium went stable last week. Unfortunately, there is no automated way to move your settings from the Beta channel to Stable. That means that those of us who were using the beta need to set everything up again in Stable.

However, as it is based on Chromium, all the information is stored in a profile (or multiple profiles). That means you can move all your profile data from the Beta folder to the Stable folder (paths below). The only issue I ran into: if you run multiple profiles with custom images, the taskbar profile icon will retain the “BETA” tag, as those icons are generated during profile creation and stored in the profile location. Unfortunately, deleting the icon in the profile folder does not seem to reset it.

Stable Microsoft Edge
%LocalAppData%\Microsoft\Edge\User Data

Microsoft Edge Beta
%LocalAppData%\Microsoft\Edge Beta\User Data
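
With both Edge channels closed, the copy itself can be a single command – a sketch using robocopy (adjust to taste; /E just copies all subdirectories, including empty ones):

robocopy "%LocalAppData%\Microsoft\Edge Beta\User Data" "%LocalAppData%\Microsoft\Edge\User Data" /E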

UPDATE – If you have edge profiles assigned to a Microsoft account where your image is from O365 or another account, I found a way where you can regen the taskbar icons after doing the above steps.

Just go to edge://settings/profiles, sign out of the account, and then sign back in, and it will recreate the profile icons. Make sure you do not check the box to clear all your settings, though! For profiles not linked to a Microsoft account, just change the profile image.

Tada!

Backup Decision

TL;DR – I’m using Duplicacy with the new Web UI. It’s hosted in a Docker image and currently pushes data to an Azure storage account.

Also, wow, I just had a slight heart attack while writing this, as I removed Docker from my NAS, which blew away a whole share of my Docker data (14 different containers, including all my NextCloud personal files!). They were all backed up with Duplicacy, and while I had tested it before with a few files, you never know. The restore wasn’t as painless as I’d like – partially my fault for mounting drives into the container read-only, partially because the GUI isn’t super great yet, and mostly because the Azure connections kept getting reset and the underlying CLI doesn’t account for that – but it’s all back and humming along again. Phew!

Options Considered

I’ve only included the main contenders below. In particular, I was interested in using non-proprietary storage backends that allowed me multiple options (B2, AWS, Azure, etc.). The ones that were quickly removed and not tested:

Now for the ones that were tested.

CrashPlan

CrashPlan has served me great for a large number of years; I have used it from two different continents successfully. There are definitely some good things about it: continuous backup, dedupe at the block level, compression, and you can provide your own encryption key. However, with the changes a while ago (and the continual changes I get emailed about), I knew it was time to look at other options. Plus, even with 1 device, the price was going to jump from $50/year to $120 – while not horrible, definitely a motivator.

Synology’s Hyper Backup

I store most of my data on my Synology NAS, and it comes with some built-in tools (Glacier Backup, Hyper Backup, and Cloud Sync). I was actually running CrashPlan in a Docker image on the NAS prior to doing this assessment. Of the 3 tools, Hyper Backup was really the only one I considered, as Glacier Backup is for snapshots and Cloud Sync isn’t really a backup product. Hyper Backup can back up to multiple different storage providers, including Azure, which was my preference. Like CrashPlan, it can do dedupe at the block level and compression, and allows you to specify your own encryption. Unlike CrashPlan, it isn’t continuous (it can do hourly), it will send failure emails, and it won’t automatically include new folders in a root if only some of the subfolders are selected. The service is free; you only pay for the storage you use.

Duplicati

With Duplicati, I ran it from a Docker image on my NUC. This meant I had access to some files that Hyper Backup could not access, which was good. Plus, you can back up to multiple different storage providers, including Azure. Like CrashPlan, it can do dedupe at the block level and compression, and allows you to specify your own encryption. Unlike CrashPlan, it isn’t continuous (it can do hourly), and I was getting lots of errors when adding new folders. Plus, the database is notorious for becoming corrupt, which is not something you want with your backups. The service is free; you only pay for the storage you use.

CloudBerry Linux

With CloudBerry, I ran it from a Docker image on my NUC. This meant I had access to some files that Hyper Backup could not access, which was good. Plus, you can back up to multiple different storage providers, including Azure. Like CrashPlan, it can do dedupe at the block level and compression, and allows you to specify your own encryption. Unlike CrashPlan, it isn’t continuous (it can do hourly), but I could receive notification emails. One really neat feature is that CloudBerry understands Azure storage tiers (hot, cool, and archive) and can manage the lifecycle with regard to them. However, while the files are encrypted in the blob storage (you can’t open them), they retain their folder structure and names. Additionally, the GUI isn’t great, and I was getting a few errors. The service is not free ($30), and you pay for the storage you use.

Restic

I tried to use Restic, but was never able to get it to work. I tried to run it in Docker, but the CLI and I just never got along (there’s no GUI). It can use different storage providers including Azure, and it can dedupe and encrypt. However, it can’t compress, which means backups will be larger. The service is free; you only pay for the storage you use.

Duplicacy

With Duplicacy, I ran it from a Docker image on my NUC. The Web UI was still in beta when I was testing it, but it fundamentally met my needs, plus it has a functional CLI (the UI basically just drives the CLI anyway). This meant I had access to some files that Hyper Backup could not access, which was good. Plus, you can back up to multiple different storage providers, including Azure. Like CrashPlan, it can do dedupe at the block level and compression, and allows you to specify your own encryption. Unlike CrashPlan, it isn’t continuous (it can run every 15 minutes), but I could receive notification emails. It’s also blazingly fast and can dedupe across machines if I were backing up more than one. The service is not free ($10), and you pay for the storage you use.

Choosing

For each of the options listed above (except Restic, simply because I couldn’t get it to go), I set up test storage accounts on my Azure account and began backing up the same 50GB with each product. The key things I was looking for were: ease of use and setup, time to back up on an hourly basis, storage and transactions consumed (to get an idea of ongoing costs), and any issues I ran into.

Duplicati was the first to go, simply because of the errors I was getting while it backed up the files. It was fast, though, at 1:02 for the incremental hourly scan and upload.

CloudBerry Linux was the next to go. This was due to it being more expensive to run (storage costs), a few errors, it being second to last in speed at 1:23, and the folder/file-name issue noted above.

Hyper Backup stuck it out the longest. Out of the box, it was definitely one of the easiest to set up. However, it was also the slowest to scan and back up (probably because it runs on the NAS and not on my NUC) at 1:32, and it was uploading more data than Duplicacy. Also, to keep multiple copies, Hyper Backup would have to run 2 separate jobs that do the exact same thing.

Duplicacy is what I am now using. It is incredibly fast (0:16 in the test, and only 2-5 minutes every hour to scan and upload with my 900GB of actual backups), and had the best cost profile on Azure. Additionally, I can easily clone to another online provider without having to rerun the drive scan; it just copies the new backup chunks. I have also set up a versioning solution that runs weekly to prune the hourly snapshots (see the sketch below). This is based on the same pruning schedule that CrashPlan was using, and I’m seeing negligible storage increases month over month. The biggest risk is that it is a newer piece of software that may have some bugs/issues. As mentioned in the TL;DR, my restore took way longer than it should have due to improper retries and timeouts with Azure (all the data is there though, and I can access it anywhere I install the Duplicacy CLI), but otherwise I’ve been very happy and have actually cancelled my CrashPlan account.
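
As an illustration, a weekly prune with the Duplicacy CLI might look like this – the retention numbers are hypothetical, not my exact schedule. Each -keep n:m means “keep one snapshot every n days for snapshots older than m days” (an n of 0 deletes them entirely):

duplicacy prune -keep 0:360 -keep 30:90 -keep 7:30 -keep 1:7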

Note: Technically, using Azure is more expensive than if I had stuck with CrashPlan; my monthly storage cost for my backups storage account is $15-20. However, with credits, it works out to $0 for me. Plus, I’m now in more control of my backups than I was before, and I can choose whichever storage provider minimizes costs.

Thinking About Backups…Again

Well, it’s getting close to time to re-evaluate backups, as I think my $2.50/month backup plan is going away in July.

So far, there are a few things I’ve looked at, but I’m interested in what others are thinking (if anyone even reads this anymore).

  1. Glacier Backup (Synology)
  2. Hyper Backup (Synology)
  3. P5 Backup
  4. Cloud Sync (Synology)
  5. iDrive
  6. CloudBerry
  7. Duplicati
  8. Duplicacy

Some background – in CrashPlan my backup set is currently 1.3TB. However, a lot of that is versions.


Migrated to CrashPlan for Small Business

Well, I’m doing it (migrating my CrashPlan account – see the previous post with updates)! This is primarily because I get the feeling the discount will disappear at the end of the month when they officially stop supporting Home. For those that haven’t gone through the steps, I’m including screenshots as an FYI. Additionally, check out the other post for how I’m managing non-NAS backups.

  1.  You get to pick which devices you want to migrate.  It will tell you very plainly how much and when your billing changes.  Depending on how many devices you pick, the number changes.  As mentioned before, I’m keeping my NAS backups, and that’s it.
  2. You update and add your info.
  3. It re-iterates your price.
  4. You agree to a bunch of stuff that they’ve already called out before.
  5. You enter your CC info and agree to auto-bill
  6. All done! (My client will be updated in the background…and on the device I didn’t migrate, it updated as I was writing this.)

The UI when you log into your account (same username/password) is now way different/better than the Home one. Plus, I get some of my storage back on my NAS since it deleted the computer-to-computer backups.
