On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
> 20 terrorbites would be an "archive" drive with shingles.
> The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.
I think even 16 TB is way too large, shingles or not. It would still
take nearly 18 hours.
What type of HDD could a system handle practically now?
I think perhaps the upper limit is 8 TB? That would take nearly 9 hours
to fill. 6 TB would take 6.5 hours, 4 TB would take 4.5 hours.
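Those figures assume something like a 250 MB/s average sustained transfer rate, which is roughly what a large 7200 RPM drive manages across the whole platter. A quick back-of-the-envelope check (the 250 MB/s is an assumption, not a measured number):

# Rough fill-time estimate: capacity / assumed sustained write speed.
SPEED_MBPS = 250  # MB/s, assumed average across the platter

for tb in (4, 6, 8, 16, 20):
    hours = tb * 1_000_000 / SPEED_MBPS / 3600   # 1 TB = 1,000,000 MB
    print(f"{tb} TB: about {hours:.1f} hours to fill")

# 4 TB ~4.4 h, 6 TB ~6.7 h, 8 TB ~8.9 h, 16 TB ~17.8 h, 20 TB ~22.2 h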
On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan
<bbbl67@spammenot.yahoo.com> wrote:
> On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
>> 20 terrorbites would be an "archive" drive with shingles.
>> The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.
I haven't done a scandisk in quite a few years, and prior to that it was another few years since the previous one. It's not something I worry about, nor do I worry about how long it takes to fill a drive with data. My
primary concerns are how many SATA ports and drive bays I have on hand.
Those are the limiting factors.
> I think even 16 TB is way too large, shingles or not. It would still
> take nearly 18 hours.
We all have different needs. My server has 16 SATA ports and 15 drive bays,
so the OS lives on an SSD that lies on the floor of the case. The data
drives are 4 TB x5 and 2 TB x10, for a raw capacity of 40 TB, formatted to
36.3 TB. I use DriveBender to pool all of the drives into a single volume.
Windows is happy with that. Since there are no SATA ports or drive bays
available, upgrading for more storage means replacing one or more of the
current drives. External drives aren't a serious long-term option.
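The 40 TB versus 36.3 TB gap is mostly just decimal terabytes versus the binary units Windows reports (plus a little filesystem overhead). A rough check, ignoring NTFS and pooling overhead:

# Convert vendor (decimal) terabytes to the binary "TB" Windows shows.
raw_tb = 5 * 4 + 10 * 2           # 40 TB decimal, per the drive labels
raw_bytes = raw_tb * 10**12
windows_tb = raw_bytes / 2**40    # tebibytes, which Windows labels "TB"
print(f"{raw_tb} TB decimal ~= {windows_tb:.1f} TB as reported by Windows")
# -> 40 TB decimal ~= 36.4 TB as reported by Windows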
On 5/16/2020 5:02 PM, Mark Perkins wrote:
> On Sat, 16 May 2020 07:50:32 -0400, Yousuf Khan
> <bbbl67@spammenot.yahoo.com> wrote:
>> On 5/16/2020 5:31 AM, pedro1492@lycos.com wrote:
>>> 20 terrorbites would be an "archive" drive with shingles.
>>> The sensible drives go up to 16 TB? Even that is going to take ages for a scandisk.
> I haven't done a scandisk in quite a few years, and prior to that it was
> another few years since the previous one. It's not something I worry about,
> nor do I worry about how long it takes to fill a drive with data. My
> primary concerns are how many SATA ports and drive bays I have on hand.
> Those are the limiting factors.
Well, nobody does Scandisks more than once in several years. I'm sure
Pedro meant that as an extreme example, but not something that is
unreasonable to expect to do occasionally.
>> I think even 16 TB is way too large, shingles or not. It would still
>> take nearly 18 hours.
> We all have different needs. My server has 16 SATA ports and 15 drive bays,
> so the OS lives on an SSD that lies on the floor of the case. The data
> drives are 4 TB x5 and 2 TB x10, for a raw capacity of 40 TB, formatted to
> 36.3 TB. I use DriveBender to pool all of the drives into a single volume.
> Windows is happy with that. Since there are no SATA ports or drive bays
> available, upgrading for more storage means replacing one or more of the
> current drives. External drives aren't a serious long-term option.
But the point is, neither are internal ones these days, it seems. Even
assuming these are mainly used in enterprise settings, they would likely
be part of a RAID array. Now if the RAID array is new and all of these
drives were put in new as part of the initial setup, <snip>
Now, looking up what Drive Bender is, it seems to be a virtual volume
concatenator. So it's not really a RAID; individual drives die and only
the data on them is lost, unless they are backed up. So even in that
case, if one of these massive drives is part of your DB setup, replacing
that drive will be a major pain in the butt even while restoring from
backups. It really raises the question of how long you are willing to
wait for a drive to get repopulated, knowing that while this is
happening it's also going to be maxing out the rest of your system for
however many hours the restore operation takes.
My point is that I think people will only be willing to wait a few
hours, perhaps 4 or 5 hours at most, before they say it's not worth it,
in a home environment.
In an enterprise environment, that tolerance may
get extended out to 8 or 10 hours. So at some point, all of this
capacity is useless, because it's impractical to manage with the current
drive and interface speeds.
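Run the same arithmetic backwards and the wait tolerance turns into a capacity ceiling (still assuming roughly 250 MB/s sustained for an HDD; the figures are only illustrative):

# Largest drive you can repopulate within a given wait, at a given speed.
def max_capacity_tb(hours, mb_per_s=250):
    return hours * 3600 * mb_per_s / 1_000_000   # MB -> TB

for hours in (5, 10):
    print(f"{hours} h wait at 250 MB/s -> about {max_capacity_tb(hours):.1f} TB")
# 5 h  -> ~4.5 TB (home tolerance)
# 10 h -> ~9.0 TB (enterprise tolerance)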
If SSD's were cheaper per byte, then even SSD's running on a SATA
interface would still be viable at the same capacities we see HDD's at
right now. So a 16 or 20 TB SSD would be a usable device, but a 16 or
20 TB HDD isn't.
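Roughly, assuming something like 530 MB/s sustained for a SATA SSD versus 250 MB/s for an HDD (both round, assumed figures):

# Fill time at SSD vs HDD sustained speeds for the biggest drives.
for tb in (16, 20):
    for name, mbps in (("SATA SSD", 530), ("HDD", 250)):
        hours = tb * 1_000_000 / mbps / 3600
        print(f"{tb} TB {name}: about {hours:.1f} hours")
# 16 TB: SSD ~8.4 h vs HDD ~17.8 h; 20 TB: SSD ~10.5 h vs HDD ~22.2 h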