• Why is it not letting me extend the partition?

    From Yousuf Khan@bbbl67@spammenot.yahoo.com to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Tue Mar 23 23:20:51 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

    Yousuf Khan
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From John Doe@always.look@message.header to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 03:48:58 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    Yousuf Khan <bbbl67@spammenot.yahoo.com> wrote:

    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

    You mean Microsoft Disk Management? Use a real partitioning utility.
    I got a free one several years ago, downloaded from Amazon, that
    works... Partition Master Technician 13.0 Portable. See if it's still
    available. If you make Windows backups (like everybody should), you
    don't even need to keep it on your system; just don't re-install it
    after the next restore.

    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From VanguardLH@V@nguard.LH to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Tue Mar 23 23:25:49 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    Yousuf Khan <bbbl67@spammenot.yahoo.com> wrote:

    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

    Yousuf Khan

    There are a lot of partition manipulations that the Disk Manager in
    Windows won't do. You need to use a 3rd party partition manager. There
    are lots of free ones. I use Easeus Partition Master, but there are
    lots of others.

    You might want to investigate overprovisioning for SSDs. It prolongs
    the lifespan of SSDs by giving them more room for remapping bad blocks.
    SSDs are self-destructive: they have a maximum number of writes. They
    will fail depending on the volume of writes you impinge on the SSD. The
    SSD will likely come with a preset of 7% to 10% of its capacity to use
    for overprovisioning. You can increase that. A tool might've come with
    the drive, or be available from the SSD maker. However, a contiguous
    span of unallocated space will increase the overprovisioning space, and
    you can use a 3rd party partition manager for that, too. You could
    expand the primary partition to occupy all of the unallocated space, or
    you could enlarge it just shy of how much unallocated space you want to
    leave to increase overprovisioning.
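    The arithmetic behind that last choice is simple; here is a rough
    sketch (Python, with an invented helper name and illustrative
    numbers; real partitioning tools work in sectors, not GiB):

    ```python
    def overprovision_plan(drive_gib: float, op_pct: float):
        """Split a drive between the main partition and a contiguous
        unallocated gap left at the end for extra overprovisioning.
        Illustrative helper only."""
        gap = drive_gib * op_pct / 100.0       # space left unallocated
        partition = drive_gib - gap            # space the partition may grow to
        return partition, gap

    # e.g. a 1000 GiB drive, leaving an extra 10% unallocated:
    print(overprovision_plan(1000, 10))   # (900.0, 100.0)
    ```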
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 02:14:59 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    Yousuf Khan wrote:
    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

    Yousuf Khan

    It's GPT and you need to find a utility that does a
    better job of showing the partitions.

    The Microsoft Reserved partition has no recognizable
    file system inside, and the information I can find suggests
    it is used as a space when something needs to be adjusted. It
    is a tiny supply of "slack". But, it might also function as
    a "blocker" when Disk Management is at work. And then, not
    every utility lists it properly. Some utilities try to "hide"
    things like this, and only show data partitions.

    Try Linux GDisk or Linux GParted, and see if you can
    spot the blocker there. The disktype utility might work,
    but the only edition available there is the Cygwin one.

    disktype.exe /dev/sda

    --- /dev/sda
    Block device, size 2.729 TiB (3000592982016 bytes)
    DOS/MBR partition map
    Partition 1: 2.000 TiB (2199023255040 bytes, 4294967295 sectors from 1)
      Type 0xEE (EFI GPT protective)
    GPT partition map, 128 entries
      Disk size 2.729 TiB (3000592982016 bytes, 5860533168 sectors)
      Disk GUID EE053214-E191-B343-A670-D3A712F353DB
    Partition 1: 512 MiB (536870912 bytes, 1048576 sectors from 2048)
      Type EFI System (FAT) (GUID 28732AC1-1FF8-D211-BA4B-00A0C93EC93B)
      Partition Name "EFI System Partition"
      Partition GUID 0CF3D241-6DA1-764C-AE0F-559E55314B8C
      FAT32 file system (hints score 5 of 5)
        Volume size 511.0 MiB (535805952 bytes, 130812 clusters of 4 KiB)
    Partition 2: 20 GiB (21474836480 bytes, 41943040 sectors from 1050624)
      Type Unknown (GUID AF3DC60F-8384-7247-8E79-3D69D8477DE4)
      Partition Name "MINT193"
      Partition GUID 0647492B-0C78-DC4E-914C-E210AB6FF5A5
      Ext3 file system
        Volume name "MINT193"
        UUID E96B501E-23B5-4F80-A41C-CEE6A5E1D59C (DCE, v4)
        Last mounted at "/media/bullwinkle/MINT193"
        Volume size 20 GiB (21474836480 bytes, 5242880 blocks of 4 KiB)
    Partition 3: 16 MiB (16777216 bytes, 32768 sectors from 123930624)   <=== not visible in diskmgmt.msc
      Type MS Reserved (GUID 16E3C9E3-5C0B-B84D-817D-F92DF00215AE)
      Partition Name "Microsoft reserved partition"
      Partition GUID 0C569E59-E917-AC40-B336-E7B2527D77AD
      Blank disk/medium
    Partition 4: 300.4 GiB (322502360576 bytes, 629887423 sectors from 123963392)
      Type Basic Data (GUID A2A0D0EB-E5B9-3344-87C0-68B6B72699C7)
      Partition Name "Basic data partition"   <=== actually "WIN10"
      Partition GUID 65A1A4E6-4F11-7944-874A-B3A515F131DE
      NTFS file system
        Volume size 300.4 GiB (322502360064 bytes, 629887422 sectors)
    Partition 5: 514 MiB (538968064 bytes, 1052672 sectors from 753854464)
      Type Unknown (GUID A4BB94DE-D106-404D-A16A-BFD50179D6AC)
      Partition Name ""
      Partition GUID 99242951-459E-1144-BF88-61517A280CCA   <=== recovery partition
      NTFS file system
        Volume size 514.0 MiB (538967552 bytes, 1052671 sectors)
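    Incidentally, the "2.000 TiB" on the protective-MBR entry above is not
    the real disk size; it's the ceiling of an MBR entry's 32-bit sector
    count (assuming 512-byte logical sectors). A quick check against the
    numbers in the listing:

    ```python
    SECTOR = 512                           # bytes per logical sector
    mbr_cap = (2**32 - 1) * SECTOR         # largest span one MBR entry can describe
    print(mbr_cap)                         # 2199023255040 -> the "2.000 TiB" above

    gpt_size = 5860533168 * SECTOR         # sector count from the GPT header
    print(gpt_size)                        # 3000592982016 -> the true 2.729 TiB
    ```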

    HTH,
    Paul


    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Jeff Barnett@jbb@notatt.com to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 02:01:20 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On 3/24/2021 12:14 AM, Paul wrote:
    Yousuf Khan wrote:
    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

        Yousuf Khan

    It's GPT and you need to find a utility that does a
    better job of showing the partitions.

    The Microsoft Reserved partition has no recognizable
    file system inside, and the information I can find suggests
    it is used as a space when something needs to be adjusted. It
    is a tiny supply of "slack". But, it might also function as
    a "blocker" when Disk Management is at work. And then, not
    every utility lists it properly. Some utilities try to "hide"
    things like this, and only show data partitions.

    Try Linux GDisk or Linux GParted, and see if you can
    spot the blocker there. The disktype utility might work,
    but the only edition available there is the Cygwin one.

    disktype.exe /dev/sda

    --- /dev/sda
    Block device, size 2.729 TiB (3000592982016 bytes)
    DOS/MBR partition map
    Partition 1: 2.000 TiB (2199023255040 bytes, 4294967295 sectors from 1)
      Type 0xEE (EFI GPT protective)
    GPT partition map, 128 entries
      Disk size 2.729 TiB (3000592982016 bytes, 5860533168 sectors)
      Disk GUID EE053214-E191-B343-A670-D3A712F353DB
    Partition 1: 512 MiB (536870912 bytes, 1048576 sectors from 2048)
      Type EFI System (FAT) (GUID 28732AC1-1FF8-D211-BA4B-00A0C93EC93B)
      Partition Name "EFI System Partition"
      Partition GUID 0CF3D241-6DA1-764C-AE0F-559E55314B8C
      FAT32 file system (hints score 5 of 5)
        Volume size 511.0 MiB (535805952 bytes, 130812 clusters of 4 KiB)
    Partition 2: 20 GiB (21474836480 bytes, 41943040 sectors from 1050624)
      Type Unknown (GUID AF3DC60F-8384-7247-8E79-3D69D8477DE4)
      Partition Name "MINT193"
      Partition GUID 0647492B-0C78-DC4E-914C-E210AB6FF5A5
      Ext3 file system
        Volume name "MINT193"
        UUID E96B501E-23B5-4F80-A41C-CEE6A5E1D59C (DCE, v4)
        Last mounted at "/media/bullwinkle/MINT193"
        Volume size 20 GiB (21474836480 bytes, 5242880 blocks of 4 KiB)
    Partition 3: 16 MiB (16777216 bytes, 32768 sectors from 123930624)   <=== not visible in diskmgmt.msc
      Type MS Reserved (GUID 16E3C9E3-5C0B-B84D-817D-F92DF00215AE)
      Partition Name "Microsoft reserved partition"
      Partition GUID 0C569E59-E917-AC40-B336-E7B2527D77AD
      Blank disk/medium
    Partition 4: 300.4 GiB (322502360576 bytes, 629887423 sectors from 123963392)
      Type Basic Data (GUID A2A0D0EB-E5B9-3344-87C0-68B6B72699C7)
      Partition Name "Basic data partition"   <=== actually "WIN10"
      Partition GUID 65A1A4E6-4F11-7944-874A-B3A515F131DE
      NTFS file system
        Volume size 300.4 GiB (322502360064 bytes, 629887422 sectors)
    Partition 5: 514 MiB (538968064 bytes, 1052672 sectors from 753854464)
      Type Unknown (GUID A4BB94DE-D106-404D-A16A-BFD50179D6AC)
      Partition Name ""
      Partition GUID 99242951-459E-1144-BF88-61517A280CCA   <=== recovery partition
      NTFS file system
        Volume size 514.0 MiB (538967552 bytes, 1052671 sectors)

    HTH,
       Paul
    There may be another issue. I'm thinking of Samsung over-provisioning
    (or is it called something else?), where about 10% of disk free space
    is used by the disk firmware to shuffle blocks in use in order to
    level wear. If I wanted to change my SSD, I'd probably need to use
    Samsung Magician to first undo that block; then I could do my
    partition management; then use Samsung Magician again to re-enable
    wear leveling. I presume that more vendors than Samsung implement
    such a scheme.

    This is not my area of expertise and I'm generalizing from my limited
    experience using a few Samsung SSDs on my systems. Perhaps someone
    more knowledgeable can either pooh-pooh my observation or, if it
    sounds right, flesh out what is going on.
    --
    Jeff Barnett
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 06:39:31 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    Jeff Barnett wrote:
    On 3/24/2021 12:14 AM, Paul wrote:
    Yousuf Khan wrote:
    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a
    fair bit of time to restore. The new drive is twice as large as
    the old one, but it created a partition that is the same size as the
    original. I expected that, but I also expected to be able
    to extend the partition after the restore to fill the new drive's
    size. However, going into Disk Management, it doesn't allow me to
    fill up that entire drive. Any idea what's going on here?

    Yousuf Khan

    It's GPT and you need to find a utility that does a
    better job of showing the partitions.

    The Microsoft Reserved partition has no recognizable
    file system inside, and the information I can find suggests
    it is used as a space when something needs to be adjusted. It
    is a tiny supply of "slack". But, it might also function as
    a "blocker" when Disk Management is at work. And then, not
    every utility lists it properly. Some utilities try to "hide"
    things like this, and only show data partitions.

    Try Linux GDisk or Linux GParted, and see if you can
    spot the blocker there. The disktype utility might work,
    but the only edition available there is the Cygwin one.

    disktype.exe /dev/sda

    --- /dev/sda
    Block device, size 2.729 TiB (3000592982016 bytes)
    DOS/MBR partition map
    Partition 1: 2.000 TiB (2199023255040 bytes, 4294967295 sectors from 1)
      Type 0xEE (EFI GPT protective)
    GPT partition map, 128 entries
      Disk size 2.729 TiB (3000592982016 bytes, 5860533168 sectors)
      Disk GUID EE053214-E191-B343-A670-D3A712F353DB
    Partition 1: 512 MiB (536870912 bytes, 1048576 sectors from 2048)
      Type EFI System (FAT) (GUID 28732AC1-1FF8-D211-BA4B-00A0C93EC93B)
      Partition Name "EFI System Partition"
      Partition GUID 0CF3D241-6DA1-764C-AE0F-559E55314B8C
      FAT32 file system (hints score 5 of 5)
        Volume size 511.0 MiB (535805952 bytes, 130812 clusters of 4 KiB)
    Partition 2: 20 GiB (21474836480 bytes, 41943040 sectors from 1050624)
      Type Unknown (GUID AF3DC60F-8384-7247-8E79-3D69D8477DE4)
      Partition Name "MINT193"
      Partition GUID 0647492B-0C78-DC4E-914C-E210AB6FF5A5
      Ext3 file system
        Volume name "MINT193"
        UUID E96B501E-23B5-4F80-A41C-CEE6A5E1D59C (DCE, v4)
        Last mounted at "/media/bullwinkle/MINT193"
        Volume size 20 GiB (21474836480 bytes, 5242880 blocks of 4 KiB)
    Partition 3: 16 MiB (16777216 bytes, 32768 sectors from 123930624)   <=== not visible in diskmgmt.msc
      Type MS Reserved (GUID 16E3C9E3-5C0B-B84D-817D-F92DF00215AE)
      Partition Name "Microsoft reserved partition"
      Partition GUID 0C569E59-E917-AC40-B336-E7B2527D77AD
      Blank disk/medium
    Partition 4: 300.4 GiB (322502360576 bytes, 629887423 sectors from 123963392)
      Type Basic Data (GUID A2A0D0EB-E5B9-3344-87C0-68B6B72699C7)
      Partition Name "Basic data partition"   <=== actually "WIN10"
      Partition GUID 65A1A4E6-4F11-7944-874A-B3A515F131DE
      NTFS file system
        Volume size 300.4 GiB (322502360064 bytes, 629887422 sectors)
    Partition 5: 514 MiB (538968064 bytes, 1052672 sectors from 753854464)
      Type Unknown (GUID A4BB94DE-D106-404D-A16A-BFD50179D6AC)
      Partition Name ""
      Partition GUID 99242951-459E-1144-BF88-61517A280CCA   <=== recovery partition
      NTFS file system
        Volume size 514.0 MiB (538967552 bytes, 1052671 sectors)

    HTH,
    Paul

    There may be another issue. I'm thinking of Samsung over-provisioning
    (or is it called something else?), where about 10% of disk free space
    is used by the disk firmware to shuffle blocks in use in order to
    level wear. If I wanted to change my SSD, I'd probably need to use
    Samsung Magician to first undo that block; then I could do my
    partition management; then use Samsung Magician again to re-enable
    wear leveling. I presume that more vendors than Samsung implement
    such a scheme.

    This is not my area of expertise and I'm generalizing from my limited
    experience using a few Samsung SSDs on my systems. Perhaps someone
    more knowledgeable can either pooh-pooh my observation or, if it
    sounds right, flesh out what is going on.

    Wear leveling is done in the virtual to physical translation
    inside the drive. Sector 1 is not stored in offset 1 of the
    flash. Your data is "sprayed" all over the place in there.
    If you lose the virtual to physical map inside the SSD, the
    data recovery specialist will not be able to "put the
    blocks back in order".

    The drive declares a capacity. It's a call in the ATA/ATAPI
    protocol. The sizing was settled in a lawsuit long ago, which
    penalized a company for attempting to lie about the capacity.
    The capacity of a 1TB drive will be some number of
    cylinders larger than 1e12 bytes. The size is an odd number,
    so some CHS habits of yore continue to work. The size is not
    actually a rounded number that customers would enjoy; it's
    a number used to keep snotty software happy.

    Any spares pool, and spares management for wear leveling,
    is behind the scenes and does not influence drive operation.
    The spares pool means the physical surface inside the drive,
    is somewhat larger than the virtual presentation to the outside
    world.

    We can Secure Erase the drive. All this does is remove
    memory of what was there previously (Secure Erase being
    suitable before selling on the drive).

    We can TRIM a drive, and this is an opportunity for the
    OS, to deliver a "hint" to the drive, as to what virtual
    areas of the 1TB, are not actually in usage by the OS.
    If you've removed the partition table from the drive,
    then the OS during TRIM, could tell the drive that the
    entire surface is unused, then all LBAs are put in the
    spares, ready to be used on the next write(s). You might
    be able to deliver this news from the ToolKit software,
    if the GUI in the OS had no mechanism for it. (Maybe
    you can do it from Diskpart, but I haven't checked.)

    The SMART table gives information about Reallocations,
    which are permanently spared out blocks. As the drive
    gets older, the controller may mark portions of it as
    unusable. But, because there is virtual to physical
    translation, as long as there are sufficient blocks
    to present a 1TB surface, we can't tell from the outside,
    it's in trouble. However, if you have the ToolKit for
    the drive installed, it can take a reading every day,
    and extrapolate remaining life (using either the
    number of writes to cells, or, using the reallocation
    total to predict the drive is in trouble). A drive
    can die before the warranty period is up, or before the
    wear life has expired. SMART allows this to be tracked.
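    The extrapolation such a ToolKit does can be as simple as fitting a
    straight line through two wear readings. A minimal sketch (Python;
    `days_remaining` is a made-up name, not any vendor's API):

    ```python
    def days_remaining(day0, used0, day1, used1):
        """Linearly extrapolate when percent-of-rated-life-used reaches
        100, given two (day, percent_used) readings. Returns days past
        the second reading; infinity if no wear was observed."""
        rate = (used1 - used0) / (day1 - day0)   # percent used per day
        if rate <= 0:
            return float("inf")
        return (100.0 - used1) / rate

    # e.g. 2% of rated life consumed over the first 100 days:
    print(days_remaining(0, 0.0, 100, 2.0))   # 4900.0 days at the same pace
    ```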

    There is a "critical data" storage area, which may
    receive a lot more writes than the average cell. Perhaps
    it's constructed from SLC cells. If this is damaged, that
    can lead to instant drive death, because the drive
    has lost its spares table, its map of virtual to
    physical and so on. Some drives may have sufficient wear
    life, but a failure to record critical data, means they
    poop out early. And maybe this isn't covered all that
    well from a SMART perspective.

    But generally, all corner cases ignored, you just use
    SSDs in the same way you'd use an HDD. You don't need to
    pamper them. The ToolKit will tell you if your pattern
    is abusive, and with any luck, warn you before the drive
    takes a dive. But like any device, you should have
    backups for any eventuality. Regular hard drives can
    die instantly, if the power (like +12V), rises above
    +15V or so. So if someone tells me they have a 33TB array
    and no backups, all I have to do is warn them that the
    ATX PSU is a liability and could, if it chose to, ruin
    the entire array (redundancy and all) in one fell swoop.

    We had a server at work, providing licensed software to
    500 engineers. One day, at 2PM in the afternoon, the
    controller firmware in the RAID controller card, wrote
    zeros across the array, down low. Wiping out some critical
    structure for the file system. Instantly, 500 engineers
    had no software. Most went home for the day :-) Paid of course.
    Costing the company a lost-work fortune. While RAIDs are
    nice and all, they do have some (rather unfortunate)
    common mode failure modes.

    A second RAID controller of the same model, did the same
    thing to its RAID array. Nobody went home for that one,
    and at least then they were thinking it was a firmware
    bug in the RAID card.

    Summary - No, the SSD has no excuses. It's either ready
    for service, or it's not. There are no in-between
    states where a partition boundary cannot move.
    The ToolKit software each brand provides will
    have rudimentary extrapolation of life remaining.
    As long as some life remains, you can move
    partition boundaries or do anything else involving
    writes.

    Paul
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Chris Elvidge@chris@mshome.net to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 10:47:14 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On 24/03/2021 03:20 am, Yousuf Khan wrote:
    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

    Yousuf Khan

    Without a current layout diagram it's impossible to say what's wrong.
    Is the free space into which you want to expand the partition
    contiguous with the partition you want to expand? Is the partition
    you wish to expand the boot partition?
    See here: https://answers.microsoft.com/en-us/windows/forum/all/how-to-expand-boot-partition/69767a28-2efb-4a13-9c7b-2462a09bf629
    --
    Chris Elvidge
    England
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 08:43:34 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    Yousuf Khan wrote:
    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

    Yousuf Khan

    One thing you can try.

    Boot from your Linux LiveDVD USB stick.

    Attempt to mount the partitions on the disk. Then

    cat /etc/mtab

    Look at the mount points. Are any "ro" for
    read-only, instead of "rw" for read-write ?
    It's possible to mark a storage device as
    read-only, but I've not been able to find
    sufficient diagrams of the details. It may
    be a flag located next to the VolumeID 32 bit
    number in the MBR. The partition headers may
    have a similar mechanism, but I got no hints at
    all there.
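    The ro/rw check amounts to scanning the fourth field of each mtab
    line. A small sketch of that check (Python; the sample lines are
    invented):

    ```python
    def readonly_mounts(mtab_text: str):
        """Return (device, mountpoint) pairs whose mount options
        include 'ro', given text in /etc/mtab format."""
        hits = []
        for line in mtab_text.splitlines():
            fields = line.split()
            if len(fields) >= 4 and "ro" in fields[3].split(","):
                hits.append((fields[0], fields[1]))
        return hits

    sample = (
        "/dev/sda2 / ext4 rw,relatime 0 0\n"
        "/dev/sdb1 /mnt/restore ntfs ro,nosuid 0 0\n"
    )
    print(readonly_mounts(sample))   # [('/dev/sdb1', '/mnt/restore')]
    ```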

    https://linux.die.net/man/8/hdparm

    https://www.geeksforgeeks.org/hdparm-command-in-linux-with-examples/

    sudo hdparm -I /dev/sda # Dump info

    sudo hdparm -r0 /dev/sda # set ReadOnly flag to zero, make drive ReadWrite.
    # reboot recommended, as Ripley would say.

    Diskpart in Windows likely has a similar function,
    but we're not sure it works. The threads I could find
    were not conclusive. Otherwise I would have done a Windows one for you.

    In any case, the *boot* drive, should not be the
    same drive you experiment with. On Windows, maybe
    C: is on /dev/sda, whereas /dev/sdb is the "broken"
    drive needing modification. And a reboot maybe.
    No OS need behave well when it comes to corner conditions.
    F5 (refresh) doesn't work at all levels.

    Paul
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Yousuf Khan@bbbl67@spammenot.yahoo.com to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 08:45:48 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On 3/23/2021 11:20 PM, Yousuf Khan wrote:
    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?

        Yousuf Khan

    Okay, I figured it out; I was just being fooled into thinking it
    wasn't working. Because the new drive is exactly twice as big as the
    previous drive, I thought it was telling me that the current size
    was its maximum limit, and that it couldn't add any more of the drive
    space. But in actual fact it was telling me that it could add an
    additional amount of space that just so happened to be exactly the
    same, numerically, as the existing space. So I got fooled into
    thinking the wrong thing. I added the additional space without
    problem.

    On an alternate note, the old drive now has one tiny little bad sector
    hole in it, which I'm thinking the drive can deprovision and carry on
    without in the future. Is there something that can allow the drive
    electronics to carry out an internal test and remove the bad sectors?

    Yousuf Khan
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Paul@nospam@needed.invalid to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 09:31:06 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    Yousuf Khan wrote:


    On an alternate note, the old drive now has one tiny little bad sector
    hole in it, which I'm thinking the drive can deprovision and carry on
    without in the future. Is there something that can allow the drive
    electronics to carry out an internal test and remove the bad sectors?

    Yousuf Khan

    Testing burns wear life.

    *******

    A sector has three states (for this discussion):

    1) Error free (in TLC/QLC era, highly unlikely)

    2) Errors present, ECC can correct.

    3) Errors present, ECC cannot correct. tiny little bad sector.

    If (3) were marked with "write, but do immediate read verify",
    this would allow evaluating the material in question, after
    it was put in the free pool. The "questionable status" should
    follow the block around, until it can be ascertained that it
    is (1) or (2) again. If it showed up (3) on a retry, it should
    be thrown into the old sock drawer. Any "write attempt", is
    an excellent time to be checking credentials of the block.

    The procedure should be similar to hard drives, economical
    in nature, yet not endangering user data. To do walking-ones
    or a GALPAT on the flash block, that would be seriously naughty
    and pointless. You could burn out the entire block wear life, then
    conclude there is nothing wrong with the block :-)

    Seagate has a field on their hard drives, called "CurrentPending".
    For the longest while, I took that at face value. However,
    that field isn't what it appears. It only seems to increment
    when the drive is in serious trouble and has run out of spares
    at some level. It's unclear whether there is an "honest"
    item in the SMART table, keeping track of items like (3) so
    a customer can judge how bad things are.

    SMART is generally not completely honest anyway. There's some info,
    but they are dishonest so that users do not "cherry pick" drives,
    and send back the ones that have a tiny blemish when purchased.

    On hard drives, at one time it was considered to be OK for a
    drive to leave the factory, with 100,000 errored sectors on it.
    That's because the yields were bad, and the science could not
    keep up. Now, if SMART was completely honest about your drive,
    imagine how you'd freak out if you saw "100,000" in some table.
    This is why the scheme is intentionally biased so drive devices
    look "perfect" when they leave the factory, when we know there
    is metadata inside indicating the drive is not perfect. Especially
    with TLC or QLC. SSD drives do not leave the factory with
    a state of (1) over 100% of the surface. There is lots of (2),
    and more (2) the longer the new drive sits on the shelf. That's
    why, if you want to bench a modern SSD, you should write it from
    end to end first. This removes the degree of errored-ness on
    the surface, before you do your read benchmark test. If the drive
    was SLC or MLC, I would not be doing this... It would not need it.

    The Corsair Neutron I bought, on first test, was getting 125 to
    130 MB/sec on reads. Dreadful. The performance popped up after a
    refresh. I still took it back to the store for a refund the next
    morning, because (maybe) the manufacturer would like some feedback
    on what I think of them.

    Paul
    --- Synchronet 3.18c-Linux NewsLink 1.113
  • From Ken Blake@ken@invalidemail.com to alt.comp.os.windows-10,comp.sys.ibm.pc.hardware.storage on Wed Mar 24 07:58:16 2021
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On 3/23/2021 8:20 PM, Yousuf Khan wrote:
    So one of my oldest SSDs just finally had a bad misfire. One of its
    memory cells seems to have gone bad, and it happened to be my boot
    drive, so I had to restore to a new SSD from backups. That took a fair
    bit of time to restore. The new drive is twice as large as the old
    one, but it created a partition that is the same size as the original.
    I expected that, but I also expected to be able to extend the
    partition after the restore to fill the new drive's size. However,
    going into Disk Management, it doesn't allow me to fill up that entire
    drive. Any idea what's going on here?


    It's probably because there's no free space contiguous to the partition
    you want to expand. You need to use a third-party partition manager.
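    That contiguity rule can be checked mechanically. A sketch of the
    check (Python; the sector numbers below are invented):

    ```python
    def can_extend_in_place(partitions, disk_sectors, index):
        """partitions: (start, length) pairs in sectors, sorted by start.
        A partition can be extended in place only when unallocated space
        begins immediately after it, i.e. the next partition (or the end
        of the disk) starts beyond the target's end."""
        start, length = partitions[index]
        end = start + length
        next_start = (partitions[index + 1][0]
                      if index + 1 < len(partitions) else disk_sectors)
        return next_start > end

    parts = [(2048, 1048576), (1050624, 41943040)]     # two back-to-back partitions
    print(can_extend_in_place(parts, 500_000_000, 1))  # True: free space follows
    print(can_extend_in_place(parts, 500_000_000, 0))  # False: partition 2 is in the way
    ```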
    --
    Ken
    --- Synchronet 3.18c-Linux NewsLink 1.113