• SSD "overprovisioning" (was: Re: Why is it not letting me extend the partition?)

    From J. P. Gilliver (John)@G6JPG@255soft.uk to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 10:17:00 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On Tue, 23 Mar 2021 at 23:25:49, VanguardLH <V@nguard.LH> wrote (my
    responses usually follow points raised):
    Yousuf Khan <bbbl67@spammenot.yahoo.com> wrote:
    []
    drive, so I had to restore to a new SSD from backups. That took a
    fair bit of time, and while the new drive is twice as large as the
    old one, the restore created a partition that is the same size as
    the original. I expected that, but I also expected to be able to
    extend the partition after the restore to fill the new drive.
    However, going into Disk Management, it doesn't let me extend the
    partition to use the entire drive. Any idea what's going on here?

    Yousuf Khan

    There are a lot of partition manipulations that Disk Management in
    Windows won't do. You need to use a 3rd party partition manager, and
    there are plenty of free ones. I use EaseUS Partition Master, but
    there are lots of others.

    (I use that one too. It was the first one I tried and it does what I
    want, so I haven't tried any others and can't say whether it's
    better or worse than any of them. The UI is similar to the Windows
    one - but then maybe they all are.)

    You might want to investigate overprovisioning for SSDs. It prolongs
    the lifespan of an SSD by giving it more room for remapping bad
    blocks. SSDs are self-destructive: their cells survive only a
    maximum number of writes, so the drive will fail depending on the
    volume of writes you push at it. The SSD will likely come with a
    preset of 7% to 10% of its capacity reserved for overprovisioning.
    You can increase that. A tool might have come with the drive, or be
    available from the SSD maker. However, a contiguous span of
    unallocated space will also increase the overprovisioning space, and
    you can use a 3rd party partition manager for that, too. You could
    expand the primary partition to occupy all of the unallocated space,
    or you could enlarge it to just shy of that, leaving however much
    unallocated space you want as extra overprovisioning.
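
    As a toy illustration of that arithmetic, here is a Python sketch;
    the 1 TB capacity and 10% figure are assumed examples, not anything
    specific to a real drive:

        # Sketch: how much to shrink the last partition so that a given
        # fraction of the drive's usable capacity stays unallocated as
        # extra (dynamic) overprovisioning. Figures are assumed examples.
        def partition_size_for_op(usable_bytes, op_fraction):
            """Partition size leaving op_fraction of the drive unallocated."""
            return int(usable_bytes * (1.0 - op_fraction))

        usable = 1000 * 10**9                       # hypothetical 1 TB drive
        part = partition_size_for_op(usable, 0.10)  # leave 10% unallocated
        print(f"partition: {part / 10**9:.0f} GB, "
              f"unallocated: {(usable - part) / 10**9:.0f} GB")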

    How does the firmware (or whatever) in the SSD _know_ how much space
    you've left unallocated, if you use any partitioning utility other than
    one from the SSD maker (which presumably has some way of "telling" the firmware)?

    If, after some while using an SSD, it has used up some of the slack,
    because of some cells having been worn out, does the apparent total size
    of the SSD - including unallocated space - appear (either in
    manufacturer's own or some third-party partitioning utility) smaller
    than when that utility is run on it when nearly new?

    If - assuming you _can_ - you reduce the space for overprovisioning to
    zero (obviously unwise), will the SSD "brick" either immediately, or
    very shortly afterwards (i. e. as soon as another cell fails)?

    If, once an SSD _has_ "bricked" [and is one of the ones that goes to
    read-only rather than truly bricking], can you - obviously in a dock on
    a different machine - change (increase) its overprovisioning allowance
    and bring it back to life, at least temporarily?
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

    "I'm tired of all this nonsense about beauty being only skin-deep. That's deep enough. What do you want, an adorable pancreas?" - Jean Kerr
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 08:24:36 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    J. P. Gilliver (John) wrote:


    If, after some while using an SSD, it has used up some of the slack,
    because of some cells having been worn out, does the apparent total size
    of the SSD - including unallocated space - appear (either in
    manufacturer's own or some third-party partitioning utility) smaller
    than when that utility is run on it when nearly new?

    The declared size of an SSD does not change.

    The declared size of an HDD does not change.

    What happens under the covers is not on display.

    The reason you cannot arbitrarily move the end of a drive,
    is because some structures are up there, which don't appear
    in diagrams. This too is a secret.

    Any time something under the covers breaks, the
    storage device will say "I cannot perform my function,
    therefore I will brick". That is preferable to moving
    the end of the drive and damaging the backup GPT partition,
    the RAID metadata, or the Dynamic Disk declaration.
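
    (To make the "structures are up there" point concrete: on a GPT disk
    the backup GPT header sits in the very last LBA, with the backup
    partition-entry array just below it. A minimal Python sketch; the
    disk size and 512-byte sectors are assumed examples:)

        # Why the end of a disk is not free space: GPT keeps its backup
        # header in the last LBA and the backup entry array (128 entries
        # of 128 bytes = 32 sectors) immediately before it.
        SECTOR = 512                       # assumed bytes per LBA
        disk_bytes = 500 * 10**9           # hypothetical 500 GB drive
        last_lba = disk_bytes // SECTOR - 1
        backup_header_lba = last_lba
        backup_entries_lba = last_lba - 32
        print(f"backup GPT header at LBA {backup_header_lba}, "
              f"entry array from LBA {backup_entries_lba}")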

    Paul
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From J. P. Gilliver (John)@G6JPG@255soft.uk to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 16:28:52 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On Wed, 24 Mar 2021 at 08:24:36, Paul <nospam@needed.invalid> wrote (my responses usually follow points raised):
    J. P. Gilliver (John) wrote:

    If, after some while using an SSD, it has used up some of the slack,
    because of some cells having been worn out, does the apparent total
    size of the SSD - including unallocated space - appear (either in
    manufacturer's own or some third-party partitioning utility) smaller
    than when that utility is run on it when nearly new?

    The declared size of an SSD does not change.

    The declared size of an HDD does not change.

    What happens under the covers is not on display.

    That's what I thought.

    The reason you cannot arbitrarily move the end of a drive,
    is because some structures are up there, which don't appear
    in diagrams. This too is a secret.

    Any time something under the covers breaks, the
    storage device will say "I cannot perform my function,
    therefore I will brick". That is preferable to moving
    the end of the drive and damaging the backup GPT partition,
    the RAID metadata, or the Dynamic Disk declaration.

    Paul

    So how come our colleague is telling us we can change the amount of
    "overprovisioning", even using one of many partition managers
    _other_ than one made by the SSD manufacturer? How does the drive
    firmware (or whatever) _know_ that we've given it more to play with?
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()AL-IS-Ch++(p)Ar@T+H+Sh0!:`)DNAf

    It's no good pointing out facts.
    - John Samuel (@Puddle575 on Twitter), 2020-3-7
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 12:44:18 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    J. P. Gilliver (John) wrote:

    So how come our colleague is telling us we can change the amount of
    "overprovisioning", even using one of many partition managers
    _other_ than one made by the SSD manufacturer? How does the drive
    firmware (or whatever) _know_ that we've given it more to play with?

    Once you've set the size of the device, it's
    not a good idea to change it. That's all I can
    tell you.

    If you don't want to *use* the whole device, that's your business.
    I've set up SSDs this way before. As you write C: and materials
    "recirculate" as part of wear leveling, the virtually unused
    portion continues to float in the free pool, offering more
    opportunities for wear leveling or consolidation. You don't
    have to do anything. You could make a D: partition, keep it empty,
    and issue a "TRIM" command to leave no uncertainty as to what your
    intention is. Then delete D: once the "signaling" step is complete.

    +-----+-----------------+--------------------+
    | MBR |     C: NTFS     |   <unallocated>    |
    +-----+-----------------+--------------------+
                            \__ This much extra__/
                                in free pool
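
    (A toy Python sketch of that free-pool idea - a loose illustration,
    not how any real controller's wear leveling actually works:)

        # Toy model: unallocated/trimmed blocks join the free pool, so the
        # controller can steer each rewrite to the least-worn free block.
        import random

        live = set(range(70))              # blocks holding C:'s data
        free_pool = set(range(70, 100))    # trimmed/unallocated blocks
        erase_counts = [0] * 100

        def rewrite(block):
            """Move a rewritten block's data to the least-worn free block."""
            target = min(free_pool, key=lambda b: erase_counts[b])
            free_pool.remove(target)
            erase_counts[block] += 1       # old copy gets erased
            free_pool.add(block)           # old block rejoins the pool
            live.discard(block)
            live.add(target)

        for _ in range(10_000):
            rewrite(random.choice(tuple(live)))
        print("erase spread:", min(erase_counts), "to", max(erase_counts))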

    Paul
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From VanguardLH@V@nguard.LH to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 18:15:50 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    "J. P. Gilliver (John)" <G6JPG@255soft.uk> wrote:

    How does the firmware (or whatever) in the SSD _know_ how much space
    you've left unallocated, if you use any partitioning utility other
    than one from the SSD maker (which presumably has some way of
    "telling" the firmware)?

    Changing the amount of unallocated space on the SSD is how the tools
    from the SSD makers work, too. You can use their tool, or you can use a partitioning tool.

    If, after some while using an SSD, it has used up some of the slack,
    because of some cells having been worn out, does the apparent total
    size of the SSD - including unallocated space - appear (either in manufacturer's own or some third-party partitioning utility) smaller
    than when that utility is run on it when nearly new?

    The amount of overprovisioning space set at the factory is never
    available for you to change. If they set 7% space for
    overprovisioning, you'll never be able to allocate that space to any
    partition. That space is not visible; it is fixed and set at the
    factory. For example, they might sell a 128GB SSD, but usable
    capacity is only 100GB. This is the static overprovisioning set at
    the factory.

    From the usable capacity of the drive, unallocated space is used for
    dynamic overprovisioning. Typically you find that you cannot use all
    unallocated space for a partition. There's some that cannot be
    partitioned; however, by making partition(s) smaller, there is more
    unallocated space available for use by dynamic overprovisioning.
    It's dynamic because it changes with the amount of write delta
    (stored data changes). The unallocated space is a reserve. Not all
    of it may get used.
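
    (That factory example, as arithmetic in Python, using the
    128GB/100GB figures above:)

        # Static OP in the 128GB-of-flash / 100GB-usable example above.
        physical_gb = 128                  # raw flash on the drive
        usable_gb = 100                    # capacity exposed to the host
        static_op = (physical_gb - usable_gb) / usable_gb
        print(f"static OP: {static_op:.0%}")   # -> static OP: 28%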

    Individual cells don't get remapped. Blocks of cells get remapped. If
    you were to reduce the OP using unallocated space, the previously marked
    bad blocks would have to get re-remapped to blocks within the partition.
    Those bad blocks are still marked as bad, so remapping has to be
    elsewhere. Might you lose information in the blocks in the dynamic OP
    space when you reduce it? That I don't know. Partition managers don't
    know about how the content of unallocated space is used.
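
    (The block remapping described above, in toy Python form - real
    firmware's mapping is proprietary, so this is only the general
    idea:)

        # Toy remap table: a block marked bad gets redirected to a spare
        # from the OP pool; run out of spares and the drive gives up.
        bad_map = {}                        # failed block -> spare block
        spares = list(range(1000, 1010))    # hypothetical OP spare blocks

        def resolve(block):
            """Physical block actually used for this address."""
            return bad_map.get(block, block)

        def mark_bad(block):
            """Retire a failing block by remapping it to a spare."""
            if not spares:
                raise RuntimeError("out of spares: drive goes read-only")
            bad_map[block] = spares.pop()

        mark_bad(42)
        print(resolve(42), resolve(43))     # -> 1009 43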

    The SSD makers are so terse as to be sometimes unusably vague in their responses. Samsung said "Over Provisioning can only be performed on the
    last accessible partition." What does that mean? Unallocated space
    must be located after the last partition? Well, although by accident,
    that's how I (and Samsung Magician) have done it. The SSD shows up with
    1 partition consuming all usable capacity, and I or Samsung Magician
    ended up shrinking the partition to make room for unallocated space at
    the end. However, SSD makers seem to be alchemists or witches: once
    they decide on their magic brew of ingredients, they keep it a secret.

    I have increased OP using Samsung Magician, and decreased it, too.
    All that it did was change the size of the unallocated space by
    shrinking or enlarging the last partition, so the unallocated space
    change was after the last partition. When shrinking the unallocated
    space, it was not apparent in Samsung Magician whether any bad cell
    blocks that had been remapped to unallocated space got re-remapped
    into the static OP space, which would reduce endurance. Since the
    firmware had marked a block as bad, it still gets remapped into
    static or dynamic OP. If unallocated space were reduced to zero (no
    dynamic OP), static OP gets used for the remappings. However, I
    haven't found anything that discusses remappings into dynamic OP
    when the unallocated space is shrunk. Samsung Magician's OP
    adjustment looks to be nothing more than a limited partition manager
    to shrink or enlarge the last partition, which is the same as you
    could do using a partition manager. I suspect any remap targets in
    the dynamic OP do not get written into the static OP, so you could
    end up with data corruption: a bad block got mapped into dynamic OP,
    you reduced the size of dynamic OP, which means some of those
    mappings there are gone, and they are not written into static OP.
    Maybe Samsung's Magician is smart enough to remap the dynamic OP
    remaps into static OP, but I don't see that happening, though it
    could keep that invisible to the user. Only if I had a huge number
    of remappings stored in dynamic OP and then shrunk the unallocated
    space might I see the extra time spent to copy those remappings into
    static OP when compared to using a partition tool to just enlarge
    the last partition.

    Since the information doesn't seem available, I err on the side of
    caution: I only reduce dynamic OP immediately after enlarging it should
    I decide the extra OP consumed a bit more than I want to lose in
    capacity in the last partition. Once I set dynamic OP and have used the computer for a while, I don't reduce dynamic OP. I have yet to find out
    what happens to the remappings in dynamic OP when it is reduced. If I
    later need more space in the partition, I get a bigger drive, clone to
    it, and decide on dynamic OP at that time. With a bigger drive, I
    probably will reduce the percentage of dynamic OP since it would be a
    huge waste of space. For a drive clone, the static or dynamic
    remappings from the old drive aren't copied to the new drive. The new
    drive will have its own independent remappings, and the reads during the
    clone copy the remapped data from the old drive into the new drive's
    partition(s). Old remappings vaporize during the copy to a different
    drive.

    Unless reducing the dynamic OP size (unallocated space) is done very
    early after creating it, to reduce the chance of new remappings
    happening between defining the unallocated space and then reducing
    its size, I would be leery of reducing unallocated space on an SSD
    after lots of use for a long time. Cells will go bad in SSDs, which
    is why remapping is needed. I don't see any tools that, when dynamic
    OP gets reduced, move the remappings stored there into static OP.
    You can decide not to use dynamic OP at all, and hope the
    factory-set static OP works okay for you for however long you own
    the SSD. You can decide to sacrifice some capacity to define dynamic
    OP, but I would recommend only creating it, and perhaps later
    enlarging it, but not shrinking it. I just can't find info on what
    happens to the remaps in dynamic OP when it is shrunk.

    Overprovisioning, whether fixed (static, set by the factory) or
    dynamic (unallocated space within the usable space after static OP),
    always reduces the capacity of the drive. The reward is reduced
    write amplification, increased performance (but not better than
    factory-fresh performance), and increased endurance. You trade some
    of one for the other. It's like insurance: the more you buy, the
    less money you have now, but you hope you won't be spending a lot
    more later.
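
    (Write amplification is just the ratio of what the flash physically
    writes to what the host asked to write; more spare area means less
    garbage-collection copying, so the ratio drops. A toy Python
    calculation with assumed, not measured, figures:)

        # WA = NAND bytes written / host bytes written. The telemetry
        # numbers below are assumed examples for a cramped vs. a
        # generously overprovisioned drive.
        def write_amplification(nand_gb, host_gb):
            return nand_gb / host_gb

        print(write_amplification(350, 100))   # cramped drive: WA 3.5
        print(write_amplification(140, 100))   # extra OP:      WA 1.4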

    If - assuming you _can_ - you reduce the space for overprovisioning to
    zero (obviously unwise), will the SSD "brick" either immediately, or
    very shortly afterwards (i. e. as soon as another cell fails)?

    Since the cell block is still marked as bad, it still needs to get
    remapped. With no dynamic OP, static OP gets used. If you create
    dynamic OP (unallocated space) where some remaps could get stored,
    what happens to the remaps there when you shrink the dynamic OP?
    Sure, the bad blocks are still marked bad, so future writes will
    remap the bad block into static OP, but what happened to the data in
    the remaps in dynamic OP when it went away? I don't know. I don't
    see that any SSD tool or partition manager will write the remaps
    from dynamic OP into static OP before reducing dynamic OP. After
    defining dynamic OP, reducing it could cause data loss.

    If you just must reduce dynamic OP because you need that unallocated
    space to get allocated into a partition, your real need is a bigger
    drive. When you clone (copy) the old SSD to a new SSD, none of the
    remaps in the old SSD carry to the new SSD. When you get the new SSD,
    you could change the size (percentage) of unallocated space to change
    the size of dynamic OP, but I would do that immediately after the clone
    (or restore from backup image). I'd want to reduce the unallocated
    space on the new bigger SSD as soon as possible, and might even use a
    bootable partition manager to do that before the OS loads the first
    time. I cannot find what happens to the remaps in dynamic OP when it
    gets reduced.

    If, once an SSD _has_ "bricked" [and is one of the ones that goes to read-only rather than truly bricking], can you - obviously in a dock on
    a different machine - change (increase) its overprovisioning allowance
    and bring it back to life, at least temporarily?

    Never tested that. Usually I replace drives before they run out of free
    space (within a partition) with bigger drives, or I figure out how to
    move data off the old drive to make for more free space. If I had an
    SSD that catastrophically failed into read-only mode, I'd get a new (and probably bigger) SSD and clone from old to new, then discard the old.

    Besides my desire to up capacity with a new drive when an old drive gets
    over around 80% full, and if I don't want to move files off of it to get
    back a huge chunk to become free space, I know SSDs are
    self-destructive, so I expect them to fail unless I replace them
    beforehand. From my readings, and although they only give a 1-year
    warranty, most SSD makers seem to plan on an MTBF of 10 years, but
    that's under a write
    volume "typical" of consumer use (they have some spec that simulates
    typical write volume, but I've not seen those docs). Under business or
    server use, MTBF is expected to be much lower. I doubt that I would
    keep any SSD for more than 5 years in my personal computers. I up the
    dynamic OP to add insurance, because I size drives far beyond expected
    usage. Doubling is usually my minimum upsize scale.

    I wouldn't plan on getting my SSD anywhere near its maximum write cycle
    count that would read-only brick it. SMART does not report the number
    of write cycles, but Samsung's Magician tool does. It must request info
    from firmware that is not part of the SMART table. My current 1 TB NVMe
    m.2 SSD is about 25% full after a year's use of my latest build.
    Consumption won't change as much in the future (i.e., it pretty much
    flattened after a few months), but if it gets to 80%, that would be
    when I consider getting another matching NVMe m.2 SSD or replacing
    the old 1 TB one with 2TB or larger, and cloning would erase all
    those old remaps
    in the old drive (the new drive won't have those). Based on my past
    experience and usage, I expect my current build to last another 7 years
    until the itch gets too unbearable to do a new build. 20% got used
    for dynamic OP just as insurance to get an 8-year lifespan, but I doubt
    I will ever get close to bricking the SSD.

    I could probably just use the 10% minimum for static OP, but I'm willing
    to spend some capacity as insurance. More than for endurance, I added
    dynamic OP to keep up the performance of the SSD. After a year, or
    more, of use, lots of users have reported their SSDs don't perform like
    when new. The NVMe m.2 SSD is 5 times faster (sequential, and more
    than 4 times for random) for both reads and writes than my old SATA SSD
    drive, and I don't want to lose that joy of speed that I felt at the
    start.

    I might be getting older and slower, but not something I want for my
    computer hardware as it ages.
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From VanguardLH@V@nguard.LH to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 19:00:04 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    "J. P. Gilliver (John)" <G6JPG@255soft.uk> wrote:

    So how come our colleague is telling us we can change the amount of
    "overprovisioning", even using one of many partition managers
    _other_ than one made by the SSD manufacturer? How does the drive
    firmware
    (or whatever) _know_ that we've given it more to play with?

    Static OP: What the factory defines. Fixed. The OS, software, and you
    have no access. Not part of usable space.

    Dynamic OP: You define unallocated space on the drive. You can shrink a
    partition to make more unallocated space, or expand a
    partition to make less unallocated space (but might cause
    data loss for remaps stored within the dynamic OP). (*)

    (*) I've not found info on what happens to remaps stored in the dynamic
    OP when the unallocated space is reduced (and the reduction covers
    the sectors for the remaps).
    --- Synchronet 3.18c-Linux NewsLink 1.113