It appears to work fine (it holds the home partition on the machine I daily-drive) and I haven't noticed any signs of failure. It's not noticeably slow either. Once upon a time I booted Windows off it, which was incredibly slow to start up, but I haven't noticed slowness since repurposing it as the home partition for my personal files.
Articles online seem to suggest the life expectancy for an HDD is 5–7 years. Should I be worried? How do I know when to get a new drive?
HDDs can live a long and happy life, but absolutely don't trust a single drive, ever, no matter how rugged, old, or expensive it is.
My main hard drive lasted 5 years with 1 year of power-on hours, working fine, and then suddenly failed. It was a good failure because I was able to get all the data off it, but that took almost a month because of how slow it had become.
Always assume your data storage is going to die tomorrow and be ready to replace it.
don’t trust a sibgle drive
sibgle?
Edit: oh I see the edit now. “single” is what it meant. I couldn’t figure that out at the time. Shitty to be downvoted for asking a question.
I had a drive fail two days after purchase. I had just copied all my data to it and erased my primary drive, intending to copy everything back and clean it up. I spent (rather, my father spent; I was barely an adult and he helped me out) ~$3000 on data recovery to get everything back. Despite the recovery, the experience left me depressed at how fragile my digital life was.
Storage is cheap now compared to back then. I was depressed because I was young, wasn't making much money, and storage was expensive; I could hardly afford to protect my data. It's much easier to do so now.
Have multiple backups. Have one be offsite in case of natural disaster. I mailed an external drive of all the music I’d made on my computer to a family member in another state. Cover your ass.
If you can afford to eat out on occasion, you can save enough to protect your data. Backblaze is currently $9 / month. It’s stupid-cheap. An external disk and some open source backup software is stupid-cheap. Run both and you have your data in three places: source, external, cloud.
If you don’t have backups then yes you should be worried.
Same goes for any storage.
Of any age. Brand-new disks fail too.
As others have said, you don't have to worry about much if you keep good backups. Disk storage these days is very cheap compared to what it used to be; you could probably find a 5400 RPM 5 TB disk for ~$100 USD, or even better, two 2 TB disks that you could configure with software RAID.
- Follow the 3-2-1 backup rule.
- You can use SMART to check the health of your drive.
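Checking SMART from the command line looks roughly like this, using smartmontools (`/dev/sda` is an example device; substitute your own):

```shell
# Overall pass/fail self-assessment reported by the drive.
sudo smartctl -H /dev/sda

# Full attribute table. The ones worth watching for a failing drive:
#   5   Reallocated_Sector_Ct
#   197 Current_Pending_Sector
#   198 Offline_Uncorrectable
# Nonzero and climbing values on these are a strong "replace me" signal.
sudo smartctl -A /dev/sda

# Kick off the drive's built-in short self-test (results show up
# in `smartctl -a /dev/sda` a couple of minutes later).
sudo smartctl -t short /dev/sda
```

If you'd rather not use a terminal, GSmartControl (mentioned further down in the thread) is a GUI over the same data.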
It could easily last a few more years, but after that long you should assume it could fail any day: either don't keep anything critical on it, or make regular backups of anything important.
I have old 500 GB drives from 2009 that I ripped out of beaten-up laptops still working 24/7, and I've had new drives grenade themselves two weeks into use. There are too many factors to properly gauge how long a drive will live. The best option is to have backups; even something as simple as a copy on a flash drive is better than nothing.
I get people saying follow the 3-2-1 rule, but there are places like mine where storage is prohibitively expensive, so just do what you can; anything is better than nothing in those cases.
Hard drives can last a long, long time. I have test equipment with hard drives from the 90s that still run fine. That said, when hard drives fail, they fail quickly.
I run a 15-drive NAS. You'll often see a few SMART errors one day, then total drive failure the next. Sometimes the drive fails completely without any SMART warning, especially if it's that old. I try to retire drives from my NAS before they fail for that reason (when they hit 7 years of service life, which is pretty long, but my NAS is just a home server thing).
Do you only count active years as service life? I have one I hadn’t used for years and luckily it still worked just fine and had no data loss after a couple of years, but I am not sure if I should count those years towards the max 7. Also it’s a NAS drive, not the standard stuff.
That's a pretty good question. I've never had it come up, though; every drive in the NAS is purchased and thrown straight in. Although, now that I think about it, I don't believe I've ever bought a brand-new drive for my NAS. I only buy refurbs from places that decommission server drives, so I guess my "years" are inflated a bit, at least 2-3. Maybe I should adjust that number down! Although it's been fine for years, tbf.
Do you RAID? I just have one drive rn and am wondering if I could get a second one and put them in RAID without accidentally wiping the current one. I guess that would mitigate any failures.
Yeah, I have a 15-drive array.
You can do RAID 1, which is basically keeping a constant copy of the drive. A lot of people don't do this because they want to maximize storage space, but if you only have a 2-drive array it's probably your safest option.
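On the "without accidentally wiping the current one" worry: one common approach with Linux mdadm is to build a degraded mirror on the new disk first, copy the data over, and only then add the old disk. A rough sketch, with placeholder device names (double-check yours; mdadm will happily destroy the wrong drive):

```shell
# Create a RAID 1 array on the NEW disk only, with the second slot
# deliberately left "missing" so the old disk is untouched.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# Put a filesystem on the array and copy everything from the old drive.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/array
# ...copy and VERIFY your data here before going any further...

# Only after verifying the copy, hand the old disk to the array;
# mdadm mirrors the data onto it automatically.
sudo mdadm --add /dev/md0 /dev/sda1
```

The key property is that the original drive isn't written to until you've confirmed the new mirror holds a good copy.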
It's only when you get to 3 drives (a 2-drive array plus parity) that you can start to maximize storage space. You're still sacrificing the space of an entire drive, but now you double the usable capacity, and it's more resilient overall because the data is spread over multiple drives. It costs more, of course, because you need more drives.
Keep in mind none of these are backup solutions, though. It's true that when a drive dies in a RAID array you can rebuild the data from the other drives, but it's also true that the rebuild is extremely stressful and can kill the array. E.g. in RAID 1, a single drive dies, and while the replacement is rebuilding, the second drive (the one holding the only copy of your data) starts developing sector corruption; or in RAID 5, one of the 3+ drives dies, and while rebuilding from parity another drive dies for similar reasons. These drives are normally only accessed occasionally, but a rebuild essentially seeks to every sector if you have a lot of data, and puts the drives under heavy read load for a very long time (days), especially with very large modern drives (18, 20, 24 TB).
So either be okay with your data going "poof", or back it up as well. When I got started I was okay with certain things going "poof", like pirated media, and would back up essential documents to cloud providers. That was really the only feasible option because my array is huge (about 200 TB, with about 100 TB used). Now I have tape backup, so I back everything up locally, though I still back up critical documents to Backblaze. Depends on your needs. I'm very strict about not being tied to Google, Apple, Dropbox, etc., and my media collection isn't just stuff I can re-torrent; it's a lot of custom media I've put together into the "best" version for my taste. But to set something like this up takes either a hefty investment or, if you're like me, years of trawling e-waste/recycling centers and decommission auctions (and it's still pricey then, but at least my data is on my server and not Google's).
Hmm. Yeah, I'm thinking of keeping my operation lean and simple, with an online copy. One issue I've noticed is that sometimes files just get corrupted, perhaps due to a radiation event? A parity drive could catch that, but I want something simpler. I'm thinking just a tar with a hash, then store multiple copies. What do you think?
Bitrot sucks
ZFS protects against this. It has historically been a pain for home users, but the recent raidz expansion feature has made things a lot easier: you can now expand vdevs and grow an array without doubling the number of disks.
This is a potentially great option for someone like you who's just starting out, though it still requires a minimum of 3 disks and the associated hardware. Sucks for people like me who built arrays lonnnnng before ZFS had this feature! It was upstreamed less than a year ago, so good timing on your part (or maybe bad; maybe it doesn't work well? I haven't read much about it, tbf, but from the little I have read it seems to work fine. They worked on it for years).
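For reference, raidz expansion in OpenZFS 2.3+ is driven by `zpool attach` against the raidz vdev itself; pool, vdev, and device names below are examples, not from anyone's actual setup:

```shell
# Widen an existing 3-disk raidz1 vdev ("raidz1-0" in pool "tank")
# by one disk. Data is reflowed across the new layout in the background.
zpool attach tank raidz1-0 /dev/sdd

# Watch the expansion progress and the resulting pool layout.
zpool status tank
```

Note the caveat in the docs: existing data keeps its old parity ratio until rewritten, so the usable space gained is a bit less than a full disk at first.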
Btrfs is also an option for similar reasons, as it has built-in protection against bitrot. If you read up on it there's a lot of debate about whether it's actually useful or dangerous; FWIW, the consensus seems to be that for single drives it's fine. Alongside my main array I have a separate RAID 1 array of 2 TB NVMe drives, used as much faster cache/working storage for the services I run. E.g. a torrent downloads to the NVMe first, since that storage is much easier to work with than the slow rotational drives (even slower because they're in a massive array), and the file is moved to the large array in the middle of the night. Reading from the array is generally not intensive, but writing to it can be, and a torrent that saturates my gigabit connection sometimes can't keep up (same for operations that aren't internet-dependent, like muxing or transcoding a video file). Anyway, that NVMe array runs btrfs and has had zero issues. That said, I personally wouldn't recommend btrfs for RAID 5/6, and given the nature of this array I don't care at all about the data on it.
My main array uses XFS, which doesn't protect against bitrot. What you can do in that scenario is what I do: once a week a plugin checksums all new files and verifies the checksums of old files. If a checksum doesn't match, it warns me; I can then restore the bad file from backup and investigate (SMART errors, bad SATA cable, ECC problem with RAM, etc.). The upside of the XFS array is that I can expand it very easily and storage is maximized: I have 2 parity drives, and at any point I can simply pop in another drive and extend the array. That wasn't an option with ZFS until about 9 months ago. It's a relatively "dangerous" setup, but the array isn't storing critical data, it's fully backed up despite that, and it's been going for 6+ years and has survived at least 3 drive failures.
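The weekly checksum pass described above can be approximated with plain coreutils; a sketch with made-up directory names (the real plugin presumably does incremental updates, which this skips):

```shell
# Sample data standing in for the array's contents.
mkdir -p array/docs && echo "important" > array/docs/a.txt

# First run: hash every file into one manifest. Storing full paths in
# the manifest lets sha256sum find the files again on verification.
find array -type f -exec sha256sum {} + > manifest.sha256

# Later runs: re-verify. --quiet suppresses per-file OK lines, so any
# output means a file changed or went missing (exit status is nonzero).
sha256sum -c --quiet manifest.sha256 && echo "all files OK"
```

New files would need to be appended to the manifest each week; a changed file shows up as `FAILED` and can be restored from backup, exactly as in the workflow above.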
That said, my approach is inferior to btrfs and ZFS, because in that scenario they can revert to a snapshot rather than needing a manual restore from backup. One day I'll likely rebuild my array with ZFS, especially now that raidz expansion is complete; I was basically waiting for that.
As always double check everything I say. It is very possible someone will reply and tell me I’m stupid and wrong for several reasons. People can be very passionate about filesystems
Where do you store the checksums? Is it one for every file? I thought of just making a tar for each year, storing the hash next to it, and keeping a copy off-site.
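The "one tar per year with a hash next to it" idea is only a few commands; a sketch with example names:

```shell
# Sample data standing in for a year's worth of files.
mkdir -p photos-2024 && echo "sample" > photos-2024/pic.txt

# One archive per year, with its SHA-256 stored alongside it.
tar -cf photos-2024.tar photos-2024
sha256sum photos-2024.tar > photos-2024.tar.sha256

# Any copy (local or off-site) can later be checked against the hash;
# a corrupted archive makes this print FAILED and exit nonzero.
sha256sum -c photos-2024.tar.sha256 && echo "archive OK"
```

The hash file travels with each copy of the tar, so every location can verify itself independently. The trade-off versus per-file checksums is granularity: one flipped bit flags the whole year's archive, not the individual file.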
Always assume your data is in N-1 places at all times.
Any drive can and will fail at any time, no matter how well it was working yesterday.
I’ve had people in with their entire PhD and years of research on one single drive, with no backup - just gone.
If your data is only in one place, it will be in zero places soon enough.
Disposable or replaceable data - which honestly is going to be 90% of your stuff - meh.
But anything that you need and couldn’t replace, that shit needs backing up to AT LEAST one other place.
As for the rest - drives can fail slowly, or they can fail fast. When they fail slowly, you start getting a couple of disk errors here and there, and you may just be able to order one in time to replace it.
When they fail fast, they just drop like a heart attack.
There’s no way to know in advance. If your data is safe, then you’ll either be out a few days while a replacement arrives, or you’ll be just about able to copy stuff across. At that age, I wouldn’t trust it farther than I could spit it. It could work fine for years more, but the moment you rely on it for something important, it’ll give out on you.
Always make sure that important files and folders are backed up at least twice! Even when drives are new, they can and do fail at random without warning. My HDDs are the better part of a decade old, and I had no issues with them at all until last year. They're now starting to experience random corruptions that will sometimes compromise entire folders.
I’ve not responded to the majority of comments in this thread because I’d have nothing to add except “thanks”, but here:
They’re now starting to experience random corruptions that will sometimes compromise entire folders.
Er why haven’t you bought new drives at that point??
There are different ways to arrange data across multiple physical drives. One family of them is called RAID. One specific type of RAID is called RAID 5, and a RAID 5 array can have 3 or more drives.
I’ve 3 drives, each 2TB. In RAID5 I only get 4TB of effective storage (not 6TB). If any one of my 3 physical drives fails, the array preserves all data and continues to operate at a slower speed. The failed drive can be replaced, a rebuilding process performed, and performance restored. If a second drive fails then data is lost and the array stops working. But, even then, new drives can be purchased and data restored from backup.
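The capacity math generalizes: RAID 5 always gives up one drive's worth of space to parity, whatever the array width. As a quick sanity check:

```shell
# RAID 5 usable capacity is (n - 1) * drive_size: one drive's worth
# of space goes to parity regardless of how many drives are present.
n=3        # number of drives in the array
size_tb=2  # size of each drive in TB
echo "usable: $(( (n - 1) * size_tb )) TB"   # prints "usable: 4 TB"
```

With the 3 x 2 TB drives above that's 4 TB usable out of 6 TB raw, matching the figures in the comment.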
In a business we never want unplanned downtime, because it's costly. We'd replace hard drives before they fail, on a schedule we choose: planned downtime when no one is working. But at home, particularly with backups, unplanned downtime often isn't very costly. We can keep using our old hardware, maximizing its value, until it fails entirely.
I'm gonna buy a new computer when this one inevitably refuses to boot up 🤷♀️ There are more age-related issues than just the HDDs at this point, so it'll be less hassle to start over.
The majority of HDD failures happen in the first 1-2 years (see Backblaze data). I have a NAS that has the same 5 drives running since 2013 and in all that time those disks were not spinning for maybe 3 weeks total.
That said I assume that any drive can fail at any time and anything I don’t want to lose has 2 backup copies, e.g. stuff I am working on on my PC gets copied to that NAS, that in turn backs it up online.
those disks were not spinning for maybe 3 weeks total
This is actually a good thing for longevity. Starting up and stopping is the hardest part of a drive's life, so you'll see more failures on a personal PC that's turned off every night than on a server drive running 24/7. Laptop drives typically fare the worst: they may be power-cycled many times a day, often spin down fully when idle to save power, and get shaken around much more than other drives.
Always do backups using the 3-2-1 method for any data stored on any media, including cloud storage, if sensibly possible. You WILL need it eventually, and you WILL hate your past self for not checking whether your backup actually works. Include your phone, too. If open source is wanted, Syncthing is a no-brainer; rsync or FreeFileSync for personal bulk backups without cloud. (There is no cloud. It's just someone else's computer.)
One time I had a hard drive that stopped working without even giving two weeks notice! I concluded the technology is useless.
backup. backup. backup.
then also check the SMART stats on it and run the internal tests. if you don’t know how, gsmartcontrol is a good place to start.
i’ve had a couple disks fail right away, and others that just go forever–and one of those is a deathstar, even.
i have a 1tb hdd that i’ve taken with me over a few different pcs now, it’s 10 years old and whined about dying to me like 7 years ago.
I only use it for backup stuff, but it’s still going strong. Mostly I leave it just chilling like the old veteran it is.
I've got a 300 GB WD VelociRaptor 10k RPM model that has been running almost nonstop in every computer I've built for the last 20 years. I only use it as an extension of my Steam library, though, so when it does die I won't lose anything.