[ale] New hard drive procedure

DJ-Pfulio djpfulio at jdpfu.com
Sun Jun 7 04:45:02 EDT 2015


Do you seriously suggest btrfs for normal people without a specific requirement?

I just switched to ext4 last year because I didn't want to be on the bleeding
edge and wanted some real-world history behind the file system. I had jfs before
that and never lost a bit.

Btrfs will be ready around 2020, I think.

I would use ZFS if I had 16 GB of RAM to blow on storage. Alas, I do not. I do
consider it mature.
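For what it's worth, the ARC can be capped so ZFS doesn't claim half of RAM. On
ZFS-on-Linux the knob is the zfs_arc_max module parameter; something like the
line below in /etc/modprobe.d/zfs.conf is the usual suggestion, though I haven't
run it myself and the 4G figure is only an example:

    # cap the ZFS ARC at 4 GiB (value is in bytes)
    options zfs zfs_arc_max=4294967296

You still pay some RAM for caching, just not 16G of it.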

Plus there are performance warnings about using btrfs with KVM.
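If anyone wants to run VM images on btrfs anyway, the workaround I keep seeing
is to mark the image directory NOCOW so new images skip copy-on-write. The
libvirt path below is just the common default, the attribute only applies to
files created after it is set, and NOCOW files lose btrfs checksumming:

    chattr +C /var/lib/libvirt/images    # new files created in here will be NOCOW
    lsattr -d /var/lib/libvirt/images    # verify the 'C' attribute took

Existing images have to be copied back into the directory to pick the flag up.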



On 06/07/2015 04:32 AM, Michael Trausch wrote:
> +1.
> 
> If you're running a system that "can't" go down under reasonably normal
> circumstances, have at least two drives and use btrfs on them, set to an
> appropriate redundancy mode. With weekly or monthly scrubs of the volume,
> you'll get even earlier warning than SMART monitoring alone, plus your data
> is recovered on the fly for errors that are spatially related.
> 
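For anyone who wants to try the scrub routine described above, the commands are
roughly these (the mount point is only an example):

    btrfs scrub start /srv/data     # runs in the background across all devices
    btrfs scrub status /srv/data    # progress plus corrected/uncorrectable counts

A monthly cron entry or systemd timer pointed at the first command covers the
scheduling part.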
> I use a four disk setup (4x1TB) with btrfs in double redundancy mode. I've
> had it catch some instances of silent corruption that wouldn't ever be caught
> by anything other than probably ZFS, and automatically recover. That's a
> winner for me. Plus it's insanely flexible, way more so than LVM2. Say
> goodbye to backup, drive swap, restore and resize. Just swap and resize.
> 
> Note well: as with software raid, the processes for planned vs. emergency
> drive replacement differ. In this case, planned is shrink, swap, grow,
> rebalance. Emergency is of course swap, rebuild from redundancy. The former
> is less time-intensive than the latter. When a scrub first reports recovery
> on a disk due to media errors rather than silent corruption, it's time to
> replace that drive using the planned procedure, when possible, before you're
> forced into the emergency one.
> 
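Concretely, I believe the two procedures he describes map to btrfs-progs
commands roughly like this; the device names, the devid, and the mount point
are all placeholders, so check the btrfs wiki before trusting the exact
incantation:

    # planned: old drive still readable, migrate data onto the new one
    btrfs device add /dev/sdd /srv/data
    btrfs device delete /dev/sdb /srv/data   # moves its data off as it removes it
    btrfs balance start /srv/data            # optional rebalance afterwards

    # emergency: old drive already dead
    mount -o degraded /dev/sdc /srv/data
    btrfs replace start 2 /dev/sdd /srv/data # '2' is the missing drive's devid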
> Sent from my iPhone
> 
>> On Jun 6, 2015, at 8:10 PM, Jim Kinney <jim.kinney at gmail.com> wrote:
>> 
>> I do nothing more than install it and run it. I let my OS do the checking
>> with smartctl.
>> 
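For the archives, the minimal version of letting the OS do the checking looks
something like this; the device name is only an example, and most distros ship
smartd reading /etc/smartd.conf:

    smartctl -H /dev/sda    # quick overall health verdict
    smartctl -a /dev/sda    # full SMART attributes and error logs

    # one line in /etc/smartd.conf makes smartd watch everything and mail on trouble
    DEVICESCAN -a -m root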
>> I will do a quick check of the specs and compare them to the manual or the
>> manufacturer's specs to make sure nothing is obviously wrong.
>> 
>> I've probably installed 3-4 hundred as singles and I don't _want_ to think
>> how many as array units. I've never had a bad new drive out of the box.
>> I've had a few, maybe 2 or 3, that failed within a couple of months, and
>> given that the power supply died shortly after, I'm pretty sure the power
>> supply killed the drives.
>> 
>> Most of the used drives I've installed have failed within a few months. All
>> were over 5 years old when I got them.
>> 
>>> On Sat, 2015-06-06 at 19:59 -0400, Sam Rakowski wrote:
>>> Hi,
>>> 
>>> I've purchased a new hard drive (magnetic) to replace an old one that has
>>> some bad sectors on it. I haven't ever bought a new hard drive before; most
>>> of my hard drives came used or as part of a new device.
>>> 
>>> I'm just interested in hearing what you all run through when you receive
>>> a new hard drive. Zero it? Run badblocks? All or none of the above?
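To answer the original question in one place: the usual burn-in options look
roughly like this. sdX is a placeholder, and the badblocks write test and the
dd are both destructive, so only run them before the drive holds data:

    smartctl -t long /dev/sdX    # extended self-test; read results later with smartctl -a
    badblocks -wsv /dev/sdX      # destructive 4-pattern write/read test of every block
    dd if=/dev/zero of=/dev/sdX bs=1M    # plain zeroing, if that's all you want

Any one of the three is probably enough to flush out an infant-mortality drive.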

