<!DOCTYPE html>
<html>
<head>
<title>The Internet Vagabond :: Setting Up a BTRFS RAID-1</title>
<link type="application/atom+xml" rel="alternate" href="https://www.theinternetvagabond.com/feed.xml" title="The Internet Vagabond" />
<meta name="description"
content="Rants of a wandering techy, in search of truth, knowledge, and a decent ping." />
<meta name="author" content="Bill Niblock" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="canonical" href="https://www.theinternetvagabond.com/2020/06/14/setting-up-btrfs.html" />
<link rel="stylesheet" type="text/css"
href="https://www.theinternetvagabond.com/src/styles/corrupt_layout.css" />
<link rel="stylesheet" type="text/css"
href="https://www.theinternetvagabond.com/src/styles/corrupt_typog.css" />
<link rel="icon" type="image/x-icon"
href="https://www.theinternetvagabond.com/src/images/favicon.ico" />
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/npm/fork-awesome@1.2.0/css/fork-awesome.min.css"
integrity="sha256-XoaMnoYC5TH6/+ihMEnospgm0J1PM/nioxbOUdnM8HY="
crossorigin="anonymous">
<script data-goatcounter="https://theinternetvagabond.goatcounter.com/count"
async src="https://www.theinternetvagabond.com/src/scripts/goatcounter.js"></script>
</head>
<body>
<div class="cor_page">
<header>
<a href="/">
<div>
<span class="first">T</span>he
<span class="first">I</span>nternet
<span class="first">V</span>agabond
</div>
</a>
</header>
<main>
<article>
<h1 id="btrfs-smooth-as-butter">BTRFS: Smooth as Butter</h1>
<p>I have a habit of calling BTRFS “butter-F-S.” Conveniently, in text I don’t need
to, since it’s easier to type BTRFS than “butter-F-S”, even if the opposite is
true out loud. Regardless, BTRFS is a file system, which can be thought of as the
organization system a hard drive uses to store files. File systems provide the
functionality necessary for handling data; without one, data would exist on a
disk with no means of (simple, reliable) access, management, or use. Every
operating system supports at least one file system, and can often be extended to
understand more, as is the case with Linux. If you’re used to Windows, you’ll be
primarily familiar with two file systems: NTFS and FAT. If you’re familiar with
Linux, you’ll have probably dealt with those, as well as EXT. If you’re
adventurous, you may have tried additional file systems such as ZFS or BTRFS.</p>
<p>When I returned to Linux full-time on my desktop, I decided I wanted to set up a
storage system. I initially shopped around for a NAS: network-attached storage.
This would be a separate device, basically a motherboard with hard drives. It
would include software for storing data reliably, as well as applications for
serving that data, such as Plex. There are many top-rated off-the-shelf options
available, but many are costly, proprietary, and lock you into that solution. I
decided to go with something a bit more readily available, and turn two existing
3 terabyte drives into a storage system that would live as part of my desktop.
The remainder of this post will deal with how I set up BTRFS on my Linux desktop,
using subvolumes, creating automated snapshots, and setting up a backup
schedule.</p>
<h2 id="setting-up-btrfs">Setting up BTRFS</h2>
|
|||
|
|
|||
|
<p>Linux has “first-class” support for BTRFS, which was a deciding force between it
|
|||
|
and ZFS. (Though, recently, ZFS has made some strides as well.) The only
|
|||
|
requirements necessary for using BTRFS is to install the <code class="language-plaintext highlighter-rouge">btrfs-progs</code> program,
|
|||
|
which is required for basic operations. With requirements done, the next step is
|
|||
|
to setup the filesystem on your disk of choice. This will delete all information
|
|||
|
on your disk, so only do this when you’re certain any existing data has been
|
|||
|
backed-up, or you don’t mind losing it.</p>
|
|||
|
|
|||
|
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkfs.btrfs /dev/partition
|
|||
|
</code></pre></div></div>
|
|||
|
|
|||
|
<p>I decided to go with a partitionless setup, which uses a slightly modified
version of the above command. The command also allows for adding a disk label,
as well as a few other options; <code class="language-plaintext highlighter-rouge">man mkfs.btrfs</code> will give you all the details. I
decided to call my BTRFS storage system my “Bag of Holding.”</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkfs.btrfs -L BagOfHolding /dev/sdg
|
|||
|
</code></pre></div></div>
|
|||
|
|
|||
|
<p>Creating a partitionless setup removes the MBR or GPT partitioning schemes, and
relies on subvolumes to simulate partitions. Because I’m only using these disks
for storage, and I won’t be booting from them, this seemed like the way to go.</p>
<p>My setup takes two drives and combines them into a RAID-1. In order to get the
existing data onto my new RAID, I did one disk at a time: I moved the data
between them, and then balanced the RAID.</p>
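<p>Roughly, that one-at-a-time shuffle looked like the sketch below. The <code class="language-plaintext highlighter-rouge">rsync</code>
step and the old mount point are my own illustration rather than a literal
transcript, so adapt the device names and paths to your disks:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Format the first (empty) disk and mount it
mkfs.btrfs -L BagOfHolding /dev/sdg
mount -t btrfs /dev/sdg /mnt/BagOfHolding

# Copy everything over from the old storage location (illustrative path)
rsync -a /mnt/old-storage/ /mnt/BagOfHolding/

# The second disk is now free to be wiped and added to the pool (next section)
</code></pre></div></div>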
<h2 id="configuring-a-btrfs-raid">Configuring a BTRFS RAID</h2>
<p>At this point, I have two separate drives. One of my drives has all my data on
it; the other is a raw, partitionless filesystem. From here, we can leverage
BTRFS to combine both disks into a single “device”, and then balance it. All of
these commands use the <code class="language-plaintext highlighter-rouge">btrfs</code> command, which needs to
be run as root.</p>
<p>First, mount one of the drives. In my case, I mounted the drive with the data on
it:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mount -t btrfs /dev/sdg /mnt/BagOfHolding
|
|||
|
</code></pre></div></div>
|
|||
|
|
|||
|
<p>Next, I added my second device to the mounted file system:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>btrfs device add /dev/sdh /mnt/BagOfHolding
|
|||
|
</code></pre></div></div>
|
|||
|
|
|||
|
<p>At this point, we have a filesystem with two devices, but the data and metadata
haven’t been balanced yet. To simply balance the data, and replicate a RAID-0
setup, you would run the <code class="language-plaintext highlighter-rouge">btrfs balance</code> command, specifying the
mounted filesystem. In my case, I wanted to replicate a RAID-1 setup, having the
two disks mirrored instead of striped. The command is modified to include a
“balance filter”:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>btrfs balance -dconvert=raid1 -mconvert=raid1 /mnt/BagOfHolding
|
|||
|
</code></pre></div></div>
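<p>While it runs, <code class="language-plaintext highlighter-rouge">btrfs balance status</code> from another terminal reports how far
along it is:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>btrfs balance status /mnt/BagOfHolding
</code></pre></div></div>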
<p>The balance will take time, since it has to re-balance the data across the
devices. A convenient time for a short aside:</p>
<h3 id="buzzwords-of-butter">Buzzwords of Butter</h3>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li>Copy-on-Write (COW): Basically, only make copies to data when there are
|
|||
|
written changes to it. I don’t fully understand Copy-on-Write, and is
|
|||
|
possibly a good candidate for a future post.</li>
|
|||
|
<li>Subvolumes: Like a partition, but not a block device. The BTRFS Wiki defines
|
|||
|
it as “an independently mountable POSIX filetree.” I think of subvolumes as
|
|||
|
“software partitions” which I’m sure is both wrong and infuriating to people
|
|||
|
who know more about it than I do.</li>
|
|||
|
<li>Snapshots: A snapshot is a subvolume that shares its data with another
|
|||
|
subvolume, using copy-on-write. This means if there are no changes to the
|
|||
|
underlying data, a snapshot is basically just a reference to the exactly
|
|||
|
same data as the initial subvolume. As changes get made, the snapshot
|
|||
|
references a copy of “old” data, as opposed to the new data. Thus, a
|
|||
|
snapshot represents data at a specific point in time.</li>
|
|||
|
</ul>
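<p>As a concrete example of that last point: once a subvolume exists (the next
section creates several), taking a one-off, read-only snapshot of it is a single
command. The destination path here is just my illustration; Snapper, covered
below, automates all of this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>btrfs subvolume snapshot -r /mnt/BagOfHolding/Books /mnt/BagOfHolding/Books-2020-06-14
</code></pre></div></div>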
<h2 id="setting-up-subvolumes">Setting up Subvolumes</h2>
<p>At this point, I have a single device made of two disks. The device, when
queried using <code class="language-plaintext highlighter-rouge">btrfs filesystem show</code>, shows the total available and used space,
and the individual disks composing it. Creating subvolumes is optional; by
default, a BTRFS filesystem has one subvolume (with id 5) as the “root.” If you
mount the device, you’ll mount that, and see the entire device. I wanted a bit
more organization, and options for snapshots, so I created a number of
subvolumes for different files: Books, Code, Documents, Games, Misc, Music,
Pictures, Videos. I mount each separately, and then symlink directories in my
home directory to a corresponding subvolume.</p>
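<p>For reference, the pool, and later the subvolumes on it, can be inspected with a
couple of read-only commands:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Show the pooled device, its usage, and the member disks
btrfs filesystem show /mnt/BagOfHolding

# List the subvolumes that live on it
btrfs subvolume list /mnt/BagOfHolding
</code></pre></div></div>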
<p>Creating a subvolume is very straightforward, using the <code class="language-plaintext highlighter-rouge">btrfs subvolume
create</code> command. I made many, as mentioned before, and I’ll walk through how I
set up the Books subvolume. I followed the same steps for all other subvolumes.</p>

<p>First, I created it:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>btrfs subvolume create /mnt/BagOfHolding/Books
|
|||
|
</code></pre></div></div>
|
|||
|
|
|||
|
<p>Then, I configured it to automatically mount. This involved adding a line to my
fstab file:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>...
|
|||
|
UUID=658cc4e0-93e1-43b5-b068-d889b44ae98d /mnt/BagOfHolding/Books btrfs subvol=/Books,defaults,nofail,x-systemd.device-timeout=5
|
|||
|
...
|
|||
|
</code></pre></div></div>
|
|||
|
|
|||
|
<p>It looks very similar to other entries, except that the option <code class="language-plaintext highlighter-rouge">subvol=/Books</code> is
necessary! This whole line tells the file system to mount the BTRFS subvolume
located at Books <em>relative to the “root” subvolume</em>, to the mount point
“/mnt/BagOfHolding/Books”. The other important thing to remember is that
subvolumes are not block devices. For the BTRFS device, there is only one block
device, and that’s the RAID we set up earlier. If you run <code class="language-plaintext highlighter-rouge">btrfs filesystem show</code>
you’ll see the device has a single UUID, despite having the two individual
disks. In fact, if you were to mount either of the disk devices, you would mount
the RAID; in my case, if I were to use <code class="language-plaintext highlighter-rouge">/dev/sdg</code> or <code class="language-plaintext highlighter-rouge">/dev/sdh</code> instead of the
UUID, it would do the same thing. UUIDs are more reliable, though, so I tend
towards them.</p>

<p>My fstab has a line like the above for each subvolume. Once that’s done, unmount
the RAID, and then either run <code class="language-plaintext highlighter-rouge">mount -a</code> or restart to get each individual
subvolume mounted. The final step I did was to symbolically link directories
from my home directory to the corresponding subvolumes. Following with Books, I
did <code class="language-plaintext highlighter-rouge">ln -s /mnt/BagOfHolding/Books Books</code> from my home directory. Now, if I
<code class="language-plaintext highlighter-rouge">cd ~/Books</code> I get to the subvolume on my RAID.</p>
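<p>Condensed, those last steps for a single subvolume look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># As root: unmount the top-level RAID mount, then mount everything from fstab
umount /mnt/BagOfHolding
mount -a

# As my normal user, from my home directory: link the subvolume in
ln -s /mnt/BagOfHolding/Books Books
</code></pre></div></div>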
<h2 id="scheduling-snapshots">Scheduling Snapshots</h2>
<p>With the RAID established, and subvolumes created, mounted, and linked, I can now
schedule automatic snapshots. An easy way to do so is with a program called
Snapper. Installing it provides the application, as well as scheduling units for
both cron and Systemd. Because I’m running Arch, we’ll rely on the Systemd
timers. Before that, we need to create a Snapper configuration.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo snapper -c books create-config /mnt/BagOfHolding/Books
|
|||
|
</code></pre></div></div>
|
|||
|
|
|||
|
<p>This will create the configuration file in “/etc/snapper/configs/”. The
configuration includes limits on how many snapshots to keep of different types
(“hourly”, “weekly”, etc.). The defaults seemed sane enough for me. Without a
cron scheduler, though, nothing else happens. (If you have a cron scheduler,
then it will have started automatically and will run accordingly.) The final
step is to enable and start the “snapper-timeline” timer. If desired, modify the
timer frequency (I believe the default is hourly, which is good enough).</p>
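<p>For a sense of what those limits look like, here is the kind of thing the
generated config contains. The variable names come from Snapper’s stock
template; the values shown are illustrative, not necessarily my own:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># /etc/snapper/configs/books (excerpt, illustrative values)
TIMELINE_CREATE="yes"
TIMELINE_CLEANUP="yes"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"
</code></pre></div></div>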
<p>One last thing to do for Systemd is to also enable and start the
“snapper-cleanup” timer, which will cull snapshots down to the configured amount
from the configuration file.</p>
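<p>Assuming the timer units that ship with Snapper on Arch, enabling both of them
comes down to two commands:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo systemctl enable --now snapper-timeline.timer
sudo systemctl enable --now snapper-cleanup.timer
</code></pre></div></div>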
<p>An interesting thing about snapshots is that, unless something has changed, they
won’t take up space. Creating 10 snapshots will not replicate data 10 times.
Each snapshot captures only the changes that have been made to the data.</p>
<h2 id="creating-backups-from-snapshots">Creating Backups from Snapshots</h2>
|
|||
|
|
|||
|
<p>The final phase of my BTRFS journey is to establish backups. One thing that must
be emphasized: <strong>SNAPSHOTS ARE NOT BACKUPS</strong>. They can be used to make backups,
though. The way I’m doing that currently is with a program called snap-sync.
snap-sync will iterate through each Snapper config, and send a snapshot from
each to a remote BTRFS-formatted destination. In my case, that destination is an
external hard drive. I formatted it similarly to my RAID drives, without a
partition. Once done, I ran <code class="language-plaintext highlighter-rouge">snap-sync</code> as root, which provides guidance for
choosing a disk, and walks through each Snapper config. I ran it once, to get
each directory established on the external drive. The manual (<code class="language-plaintext highlighter-rouge">man snap-sync</code>)
includes example Systemd timers, which I used to create a timer and service in
“/usr/lib/systemd/system”. Then, I enabled and started the timer. The example
runs once a week, though I think I may update that to once a day.</p>
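<p>The units in the man page are the authoritative version; as a rough illustration
of the shape, a weekly timer and its service look something like the sketch
below. The unit names and, in particular, the <code class="language-plaintext highlighter-rouge">ExecStart</code> line are placeholders
of mine; check <code class="language-plaintext highlighter-rouge">man snap-sync</code> for the options the real example uses (for
instance, to pick the target disk non-interactively):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># /usr/lib/systemd/system/snap-sync.timer (illustrative sketch)
[Unit]
Description=Run snap-sync on a schedule

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

# /usr/lib/systemd/system/snap-sync.service (illustrative sketch)
[Unit]
Description=Send Snapper snapshots to the backup drive with snap-sync

[Service]
Type=oneshot
ExecStart=/usr/bin/snap-sync
</code></pre></div></div>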
<h2 id="conclusion">Conclusion</h2>
<p>With that, I feel I have a good solution to my storage needs. I can keep all my
data on a RAID drive with backups, easily accessible from the primary machine I
use. I further synchronize music and pictures to and from my phone using
Syncthing, which will be an upcoming topic of discussion. Some next steps:</p>
<ul>
<li>set up and configure Calibre for my books</li>
<li>better configure Demlo for my music</li>
<li>look into accessing my RAID from my Raspberry Pi, perhaps via NFS, and
leveraging wake-on-lan, to allow for streaming media remotely whenever,
without having to leave my desktop on</li>
</ul>
<p>I’m writing this post as part of <a href="https://100daystooffload.com">#100DaysToOffload</a>, an initiative to inspire writing habits. Perhaps
you could do the same.</p>
<h1 id="sources">Sources</h1>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li><a href="https://wiki.archlinux.org/index.php/Btrfs">btrfs on the Arch Wiki</a></li>
|
|||
|
<li><a href="https://wiki.archlinux.org/index.php/Snapper">Snapper on the Arch Wiki</a></li>
|
|||
|
<li><a href="https://github.com/wesbarnett/snap-sync">snap-sync</a></li>
|
|||
|
<li><a href="https://btrfs.wiki.kernel.org/index.php/Main_Page">The BTRFS Wiki</a></li>
|
|||
|
</ul>
|
|||
|
|
|||
|
<div class="author_info">
|
|||
|
Bill Niblock
|
|||
|
<a href="https://unlicense.org/"
|
|||
|
aria-label="Code dedicated to the public domain under Unlicense">
|
|||
|
<span class="fa fa-cc-pd" aria-hidden="true"
|
|||
|
title="Code dedicated to the public domain under Unlicense"</span>
|
|||
|
</a>
|
|||
|
<a href="https://creativecommons.org/publicdomain/zero/1.0/"
|
|||
|
aria-label="Published to the public domain under CC0">
|
|||
|
<span class="fa fa-cc-zero" aria-hidden="true"
|
|||
|
title="Content dedicated to the public domain under CC0"</span>
|
|||
|
</a>
|
|||
|
2020-06-14
|
|||
|
<br />
|
|||
|
[
|
|||
|
|
|||
|
<a href="/topics/technology">technology</a>
|
|||
|
|
|||
|
]
|
|||
|
</div>
|
|||
|
</article>
|
|||
|
</main>
|
|||
|
<footer>
<nav>
<div><a href="/">home</a></div>
<div><a href="/topics/all">all</a></div>
<div><a href="/topics/gaming">gaming</a></div>
<div><a href="/topics/other">other</a></div>
<div><a href="/topics/philosophy">philosophy</a></div>
<div><a href="/topics/technology">technology</a></div>
<div><a href="/topics/writing">writing</a></div>
</nav>

<hr />
<section class="h-card">
|
|||
|
<section class="footer_about" id="about">
|
|||
|
<div>The Site</div>
|
|||
|
<div>
|
|||
|
<a class="u-url" href="https://www.theinternetvagabond.com/feed.xml"
|
|||
|
aria-label="RSS feed for the site">
|
|||
|
<span class="fa fa-rss" aria-hidden="true"
|
|||
|
title="RSS Feed"</span>
|
|||
|
</a> |
|
|||
|
<a class="u-url" href="https://theinternetvagabond.goatcounter.com/"
|
|||
|
aria-label="GoatCounter statistics for the site">
|
|||
|
<span class="fa fa-bar-chart" aria-hidden="true"
|
|||
|
title="GoatCounter Statistics"</span>
|
|||
|
</a> |
|
|||
|
<a class="u-url" href="https://codeberg.org/VagabondAzulien/the-internet-vagabond-dot-com"
|
|||
|
aria-label="Source code repository for the site">
|
|||
|
<span class="fa fa-code" aria-hidden="true"
|
|||
|
title="Site Source Code"</span>
|
|||
|
</a>
|
|||
|
</div>
|
|||
|
<a class="u-url u-uid" href="https://theinternetvagabond.com"></a>
|
|||
|
<p>
|
|||
|
This site is a small slice of internet real-estate that I use for
|
|||
|
occasional writing. Nothing I say is visionary or profound. I
|
|||
|
focus on technology, gaming, and philosophy. All opinions my
|
|||
|
own.
|
|||
|
</p>
|
|||
|
<div>The Vagabond</div>
|
|||
|
<div>
|
|||
|
<a class="u-email" rel="me"
|
|||
|
href="mailto:bill@theinternetvagabond.com"
|
|||
|
aria-label="Email Bill at The Internet Vagabond dot com">
|
|||
|
<span class="fa fa-envelope-o" aria-hidden="true"
|
|||
|
title="Email bill at theinternetvagabond.com"</span>
|
|||
|
</a> |
|
|||
|
<a class="u-url" rel="me"
|
|||
|
href="https://matrix.to/#/@vagabondazulien:matrix.org"
|
|||
|
aria-label="Speak with me on Matrix">
|
|||
|
<span class="fa fa-matrix-org" aria-hidden="true"
|
|||
|
title="Speak with me on Matrix"</span>
|
|||
|
</a> |
|
|||
|
<a class="u-url" rel="me"
|
|||
|
href="https://mastodon.social/@azulien"
|
|||
|
aria-label="Find me on the Fediverse">
|
|||
|
<span class="fa fa-mastodon" aria-hidden="true"
|
|||
|
title="Find me on the Fediverse"</span>
|
|||
|
</a> |
|
|||
|
<a class="u-url" rel="me" href="https://www.twitch.tv/vagabondazulien/profile"
|
|||
|
aria-label="Link to my Twitch channel">
|
|||
|
<span class="fa fa-twitch " aria-hidden="true"
|
|||
|
title="My Twitch channel"</span>
|
|||
|
</a>
|
|||
|
</div>
|
|||
|
<p>
|
|||
|
My name is <span class="p-name">Bill Niblock</span>. <span
|
|||
|
class="p-note">I'm a computer scientist by education, a technologist
|
|||
|
by trade, a gamer by hobby, and a philosopher by accident. I
|
|||
|
live in <span class="p-locality">Buffalo</span>, <span class="p-region">
|
|||
|
New York</span>, <span class="p-country-name">USA</span>.
|
|||
|
</p>
|
|||
|
</section>
|
|||
|
<section style="display: none;">
|
|||
|
<span class="p-category">Gaming</span>
|
|||
|
<span class="p-category">Technology</span>
|
|||
|
<span class="p-category">Philosophy</span>
|
|||
|
<span class="p-category">Open Source Software</span>
|
|||
|
<span class="p-category">Self-Hosting</span>
|
|||
|
<span class="p-category">Coffee</span>
|
|||
|
</section>
|
|||
|
</section>
|
|||
|
</footer>
|
|||
|
|
|||
|
</div>
|
|||
|
</body>
|
|||
|
</html>
|