• Re: Do you think the days of the hard drive is finally over?

    From Mark Perkins@mark@none.invalid to comp.sys.ibm.pc.hardware.storage on Sat May 16 16:02:45 2020
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan
    <bbbl67@spammenot.yahoo.com> wrote:

    On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
    20 terrorbites would be an "archive" drive with shingles.
    The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.

    I haven't done a scandisk in quite a few years, and prior to that it was another few years since the previous one. It's not something I worry about,
    nor do I worry about how long it takes to fill a drive with data. My
    primary concerns are how many SATA ports and drive bays I have on hand.
    Those are the limiting factors.

    I think even 16 TB is way too large, shingles or not. It would still
    take nearly 18 hours.

    We all have different needs. My server has 16 SATA ports and 15 drive bays,
    so the OS lives on an SSD that lays on the floor of the case. The data
    drives are 4TB x5 and 2TB x10, for a raw capacity of 40TB, formatted to
    36.3TB. I use DriveBender to pool all of the drives into a single volume. Windows is happy with that. Since there are no SATA ports or drive bays available, upgrading for more storage means replacing one or more of the current drives. External drives aren't a serious long-term option.
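
    (The gap between raw and formatted capacity above is mostly the decimal
    terabytes (10^12 bytes) vs. binary tebibytes (2^40 bytes) difference in
    how Windows reports sizes, with a little filesystem overhead on top. A
    minimal Python sketch of the conversion, added as an illustration rather
    than anything from the thread:)

        # Convert decimal terabytes (10^12 bytes) to tebibytes (2^40 bytes),
        # which is roughly the "formatted" capacity Windows reports.
        def tb_to_tib(tb):
            return tb * 10**12 / 2**40

        for raw_tb in (40, 36):
            print(f"{raw_tb} TB raw -> ~{tb_to_tib(raw_tb):.1f} TiB reported")
        # 40 TB raw -> ~36.4 TiB reported
        # 36 TB raw -> ~32.7 TiB reported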

    The PC that I'm typing on, which I consider my workstation, has 6 SATA
    ports native to the mobo, 3 NVMe sockets, and 10 drive bays. I use an NVMe drive for the OS and 4TB x3 plus 12TB x2 for data, giving me 36TB raw and 32.7TB formatted. I also use DriveBender here, so Windows sees a single
    32.7TB volume. With one SATA port available (and 5 drive bays), I can
    expand the storage by adding one drive. Beyond that, since I'll be out of
    SATA ports and don't really want to use a PCIe SATA card, my next move
    would be to replace the 4TB drives with something bigger.

    At the moment, I'm looking at 12TB and 14TB drives as possible system
    upgrades. The 16TB drives are still expensive, with most being north of
    $400 apiece.

    What size of HDD could a system practically handle right now?
    I think perhaps the upper limit is 8 TB? That would take nearly 9 hours
    to fill. 6 TB would take 6.5 hours, 4 TB would take 4.5 hours.
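
    (Those fill times are consistent with a sustained transfer rate of about
    250 MB/s. A back-of-the-envelope Python sketch; the 250 MB/s figure is an
    assumed ballpark for a large 7200 rpm drive, not a number given in the
    thread:)

        # Hours to fill a drive end to end at a constant sequential rate.
        RATE_MB_S = 250  # assumed sustained throughput

        def fill_hours(capacity_tb, rate_mb_s=RATE_MB_S):
            return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

        for tb in (16, 8, 6, 4):
            print(f"{tb:>2} TB -> ~{fill_hours(tb):.1f} hours")
        # 16 TB -> ~17.8 hours
        #  8 TB -> ~8.9 hours
        #  6 TB -> ~6.7 hours
        #  4 TB -> ~4.4 hours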

    Mine are 36.3TB and 32.7TB. I've never filled a volume that size all at
    once.

    --- Synchronet 3.18a-Linux NewsLink 1.113
  • From Yousuf Khan@bbbl67@spammenot.yahoo.com to comp.sys.ibm.pc.hardware.storage on Sun May 17 22:06:35 2020
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On 5/16/2020 5:02 PM, Mark Perkins wrote:
    On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan
    <bbbl67@spammenot.yahoo.com> wrote:

    On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
    20 terrorbites would be an "archive" drive with shingles.
    The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.

    I haven't done a scandisk in quite a few years, and prior to that it was another few years since the previous one. It's not something I worry about, nor do I worry about how long it takes to fill a drive with data. My
    primary concerns are how many SATA ports and drive bays I have on hand.
    Those are the limiting factors.

    Well, nobody does Scandisks more than once in several years. I'm sure
    Pedro meant that as an extreme example, but not something that is
    unreasonable to expect to do occasionally.

    I think even 16 TB is way too large, shingles or not. It would still
    take nearly 18 hours.

    We all have different needs. My server has 16 SATA ports and 15 drive bays, so the OS lives on an SSD that lays on the floor of the case. The data
    drives are 4TB x5 and 2TB x10, for a raw capacity of 40TB, formatted to 36.3TB. I use DriveBender to pool all of the drives into a single volume. Windows is happy with that. Since there are no SATA ports or drive bays available, upgrading for more storage means replacing one or more of the current drives. External drives aren't a serious long-term option.

    But the point is, neither are internal ones these days, it seems.
    Even assuming these are mainly used in enterprise settings, they
    would likely be part of a RAID array. Now if the RAID array is new and
    all of these drives were put in new as part of the initial setup,
    there's nothing to worry about, you fill it up to whatever level of data
    you have. Hopefully your array holds at least twice the amount of data
    that someone's old setup had, so it can keep growing before it too needs
    to be replaced or upgraded. Now as this array ages, it's reasonable to
    assume that one of the drives may die, and it would need to be replaced.
    By the time that happens, the array is probably at least 80% full.
    Inserting a replacement drive into the array will require a massive
    amount of time to resync, even if it is a smart resync,
    doing only the blocks that actually have data on them.

    Now, looking up what Drive Bender is, it seems to be a virtual volume concatenator. So it's not really a RAID: if an individual drive dies, only
    the data on it is lost, unless it was backed up. So even in that
    case, if one of these massive drives is part of your DB setup, replacing
    that drive will be a major pain in the butt even while restoring from
    backups. It really raises the question of how long you're willing to wait
    for a drive to get repopulated, knowing that while this is happening
    it's also going to be maxing out the rest of your system for however
    many hours the restore operation takes.
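
    (For a sense of scale, a sketch of repopulation time for a mostly full
    16 TB drive, whether resynced locally or restored over a network. The
    rates are assumptions, roughly 200 MB/s for a local HDD-to-HDD copy and
    roughly 110 MB/s for a saturated gigabit link, not figures from the
    thread:)

        # Estimate hours to repopulate a given amount of data at a given rate.
        def repopulate_hours(used_tb, rate_mb_s):
            return used_tb * 1e12 / (rate_mb_s * 1e6) / 3600

        used_tb = 0.8 * 16  # a 16 TB drive that is about 80% full
        print(f"local copy  (~200 MB/s): ~{repopulate_hours(used_tb, 200):.0f} hours")
        print(f"gigabit LAN (~110 MB/s): ~{repopulate_hours(used_tb, 110):.0f} hours")
        # local copy  (~200 MB/s): ~18 hours
        # gigabit LAN (~110 MB/s): ~32 hours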

    My point is that I think people will only be willing to wait a few
    hours, perhaps 4 or 5 hours at most, before they say it's not worth it,
    in a home environment. In an enterprise environment, that tolerance may
    get extended out to 8 or 10 hours. So at some point, all of this
    capacity is useless, because it's impractical to manage with the current
    drive and interface speeds.

    If SSD's were cheaper per byte, then even SSD's running on a SATA
    interface would still be viable at the same capacities we see HDD's at
    right now. So 16 or 20 TB SSD's would be usable devices, but 16 or 20 TB
    HDD's aren't.
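
    (For comparison, a SATA SSD sustaining roughly 530 MB/s would fill 16 TB
    in about 8.4 hours versus nearly 18 hours for an HDD at roughly 250 MB/s.
    Both rates are assumed ballparks, not figures from the thread. A minimal
    Python sketch:)

        # Compare fill times for a 16 TB device at assumed sustained rates.
        def fill_hours(capacity_tb, rate_mb_s):
            return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

        for label, rate_mb_s in (("SATA SSD, ~530 MB/s", 530),
                                 ("HDD, ~250 MB/s", 250)):
            print(f"16 TB on {label}: ~{fill_hours(16, rate_mb_s):.1f} hours")
        # 16 TB on SATA SSD, ~530 MB/s: ~8.4 hours
        # 16 TB on HDD, ~250 MB/s: ~17.8 hours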

    Yousuf Khan
    --- Synchronet 3.18a-Linux NewsLink 1.113
  • From Mark Perkins@mark@none.invalid to comp.sys.ibm.pc.hardware.storage on Mon May 18 17:11:06 2020
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On Sun, 17 May 2020 22:06:35 -0400, Yousuf Khan
    <bbbl67@spammenot.yahoo.com> wrote:

    On 5/16/2020 5:02 PM, Mark Perkins wrote:
    On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan
    <bbbl67@spammenot.yahoo.com> wrote:

    On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
    20 terrorbites would be an "archive" drive with shingles.
    The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.

    I haven't done a scandisk in quite a few years, and prior to that it was
    another few years since the previous one. It's not something I worry about, nor do I worry about how long it takes to fill a drive with data. My
    primary concerns are how many SATA ports and drive bays I have on hand.
    Those are the limiting factors.

    Well, nobody does Scandisks more than once in several years. I'm sure
    Pedro meant that as an extreme example, but not something that is unreasonable to expect to do occasionally.

    I think even 16 TB is way too large, shingles or not. It would still
    take nearly 18 hours.

    We all have different needs. My server has 16 SATA ports and 15 drive bays, so the OS lives on an SSD that lays on the floor of the case. The data
    drives are 4TB x5 and 2TB x10, for a raw capacity of 40TB, formatted to
    36.3TB. I use DriveBender to pool all of the drives into a single volume.
    Windows is happy with that. Since there are no SATA ports or drive bays
    available, upgrading for more storage means replacing one or more of the
    current drives. External drives aren't a serious long-term option.

    But the point is, neither are internal ones these days, it seems.

    I don't follow what you're saying. To me, internal drives are the primary
    data storage option.

    Even assuming these are mainly used in enterprise settings, they
    would likely be part of a RAID array. Now if the RAID array is new and
    all of these drives were put in new as part of the initial setup,
    <snip>

    No, I'm not assuming that (Enterprise and RAID) at all. I'm assuming use in
    the home market, and specifically the subset of the home market where
    people want to keep large amounts of data accessible. RAID is relatively
    rare in that setting, isn't it? I don't know anyone who uses it, but that doesn't mean much.

    Now, looking up what Drive Bender is, it seems to be a virtual volume concatenator. So it's not really a RAID: if an individual drive dies, only
    the data on it is lost, unless it was backed up. So even in that
    case, if one of these massive drives is part of your DB setup, replacing that drive will be a major pain in the butt even while restoring from

    Restoring just the missing files is a major pain? Why does that have to be
    the case? FWIW, I haven't found that to be true. It's much faster than
    doing a full restore, for example.

    backups. It really raises the question of how long you're willing to wait
    for a drive to get repopulated, knowing that while this is happening
    it's also going to be maxing out the rest of your system for however
    many hours the restore operation takes.

    If there's something you need right away, you prioritize that. Otherwise,
    let the restore run and do its thing. It's not like disk access brings a
    modern system to its knees, right? Performance-wise, you wouldn't even know it's happening. So in general, there's no significant waiting, and remember that failed drives are not an every day/week/month/year occurrence. Most
    drives last longer than I'm willing to use them, getting replaced when the
    data has outgrown their capacity.

    My point is that I think people will only be willing to wait a few
    hours, perhaps 4 or 5 hours at most, before they say it's not worth it,
    in a home environment.

    I don't follow that at all.

    In an enterprise environment, that tolerance may
    get extended out to 8 or 10 hours. So at some point, all of this
    capacity is useless, because it's impractical to manage with the current >drive and interface speeds.

    ??? How often are you clearing and refilling an entire drive?

    If SSD's were cheaper per byte, then even SSD's running on a SATA
    interface would still be viable at the same capacities we see HDD's at
    right now. So 16 or 20 TB SSD's would be usable devices, but 16 or 20 TB HDD's aren't.

    That sounds like nonsense. If 100TB HDD's were available at a reasonable
    price and reasonably reliable, many people would find them to be perfectly usable. I'd love to replace all of my smaller drives with fewer larger
    drives and in fact that's exactly what I've been doing since the
    mid-1980's.

    --- Synchronet 3.18a-Linux NewsLink 1.113