Mixed IO Performance

For details on our mixed IO tests, please see the overview of our 2021 Consumer SSD Benchmark Suite.

[Charts: Mixed Random IO (Throughput, Power Efficiency); Mixed Sequential IO (Throughput, Power Efficiency)]

The ADATA XPG Gammix S50 Lite has mediocre overall performance on the mixed random IO test and comes in last place among this batch of drives for the mixed sequential IO test. These results aren't too surprising at this point; the mixed IO tests are both conducted on a mostly-full drive without restricting the test to a narrow slice of the drive, and we've already seen that these conditions bring out the worst in the S50 Lite.

[Charts: Mixed Random IO; Mixed Sequential IO]

On the mixed random IO test, the S50 Lite is at least fairly consistent; once the workload has more than about 30% writes there isn't much change in the performance. By contrast, the mixed sequential IO test results are a mess, with performance bouncing around with no clear pattern. SLC cache overflow is probably the primary factor here, but it ends up being less consistent than the results from the sustained sequential write test. The fact that we're testing four independent streams of sequential IO is probably also a very poor match for the kind of IO patterns this drive is tuned for.

Power Management Features

Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.

For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.

ADATA XPG Gammix S50 Lite
NVMe Power and Thermal Management Features
Controller: Silicon Motion SM2267
Firmware: 82A7T92C

NVMe Version | Feature                                       | Status
1.0          | Number of operational (active) power states   | 3
1.1          | Number of non-operational (idle) power states | 2
1.1          | Autonomous Power State Transition (APST)      | Supported
1.2          | Warning Temperature                           | 75 °C
1.2          | Critical Temperature                          | 80 °C
1.3          | Host Controlled Thermal Management            | Supported
1.3          | Non-Operational Power State Permissive Mode   | Not Supported

The S50 Lite supports the most common NVMe power management features, including low-power idle states that are supposed to have quick transition latencies. The maximum power of 9W in the full-power state is a fairly conservative figure; if the drive ever actually draws that much, it's only for very short intervals.

ADATA XPG Gammix S50 Lite
NVMe Power States
Controller: Silicon Motion SM2267
Firmware: 82A7T92C

Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency
PS 0        | 9.0 W         | Active      | -             | -
PS 1        | 4.6 W         | Active      | -             | -
PS 2        | 3.8 W         | Active      | -             | -
PS 3        | 45 mW         | Idle        | 2 ms          | 2 ms
PS 4        | 4 mW          | Idle        | 15 ms         | 15 ms

Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
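To illustrate how an OS might act on such a table, here is a minimal sketch of an idle-state selection policy using the S50 Lite's advertised figures. The 50x latency margin is an illustrative assumption of ours, not the actual heuristic used by Linux or Windows:

```python
# Hypothetical policy sketch: pick an idle power state from the table a drive
# advertises. State values below are the S50 Lite's; the latency_factor
# margin is an illustrative assumption, not a real driver's APST heuristic.

# (name, max power in watts, operational?, entry latency ms, exit latency ms)
STATES = [
    ("PS0", 9.0,   True,  0,  0),
    ("PS1", 4.6,   True,  0,  0),
    ("PS2", 3.8,   True,  0,  0),
    ("PS3", 0.045, False, 2,  2),
    ("PS4", 0.004, False, 15, 15),
]

def deepest_idle_state(expected_idle_ms, latency_factor=50):
    """Return the lowest-power non-operational state whose round-trip
    transition cost is small relative to the expected idle period."""
    best = None
    for name, power, operational, entry_ms, exit_ms in STATES:
        if operational:
            continue  # only non-operational states are idle candidates
        if (entry_ms + exit_ms) * latency_factor <= expected_idle_ms:
            if best is None or power < best[1]:
                best = (name, power)
    return best[0] if best else None
```

Under this toy policy, short idle periods would stay in PS3 (4 ms round trip) while only long idle periods justify PS4's 30 ms round trip, which is exactly the tradeoff the advertised latencies exist to inform.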

Idle Power Measurement

SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.

Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks, and depending on which NVMe driver is in use. Additionally, there are multiple degrees of PCIe link power savings possible through Active State Power Management (ASPM).

We report three idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. Our Desktop Idle number represents what can usually be expected from a desktop system that is configured to enable SATA link power management, PCIe ASPM and NVMe APST, but where the lowest PCIe L1.2 link power states are not available. The Laptop Idle number represents the maximum power savings possible with all the NVMe and PCIe power management features in use—usually the default for a battery-powered system but rarely achievable on a desktop even after changing BIOS and OS settings. Since we don't have a way to enable SATA DevSleep on any of our testbeds, SATA drives are omitted from the Laptop Idle charts.
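The reason idle behavior dominates battery life can be shown with simple duty-cycle arithmetic. The wattages and the 5% active fraction below are illustrative assumptions, not measured results from this review:

```python
# Back-of-the-envelope sketch of average drive power for a mostly-idle client
# workload. All figures here are illustrative assumptions, not measurements.

def average_power_w(active_w, idle_w, active_fraction):
    """Time-weighted average power for a simple two-state duty cycle."""
    return active_w * active_fraction + idle_w * (1 - active_fraction)

# Assume the drive is busy 5% of the time at 4 W.
no_pm  = average_power_w(4.0, 1.2,   0.05)  # ~1.2 W active idle, no power management
laptop = average_power_w(4.0, 0.004, 0.05)  # deepest idle state at ~4 mW
```

With power management off, the idle term dominates and the average stays well above 1 W; with the deepest idle state in use, nearly all of the remaining average power comes from the brief active bursts.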

Idle Power Consumption - No PM
Idle Power Consumption - Desktop
Idle Power Consumption - Laptop

The S50 Lite is one of the more power-hungry drives when idle power management is disabled, drawing over 1W. But the low-power idle states are working well, unlike what we saw with the Intel SSD 670p that uses a close relative of this SM2267 controller. (We're still working with Silicon Motion to figure out that bug.) It also appears that Silicon Motion has moderately improved the real-world wake-up latencies, which are surprisingly high for the SM2262EN drives. The competition shows that there's still room for Silicon Motion to provide an order of magnitude improvement here, and we'd like to see the SMI controllers start living up to the transition times advertised by their firmware.

Idle Wake-Up Latency

93 Comments

  • utmode - Sunday, May 2, 2021 - link

    @Billy Tallis, is there any recent research done on data retention on QLC drive. Electrons are very naughty at staying at a set voltage.
  • Billy Tallis - Sunday, May 2, 2021 - link

    Write endurance limits are set based on how much you can wear out the flash and still have one year of unpowered data retention (or three months for enterprise drives). That's still largely determined with high-temperature accelerated testing, but it's pretty well understood how to do that properly.
  • bansheexyz - Saturday, May 1, 2021 - link

    Can we ban this idiot already? A 10TB QLC drive will have a larger write endurance than a 2TB TLC drive does today. There is nothing inherently wrong with QLC tech, its supposedly inferior write endurance is self-mitigating by the fact that there are more cells to spread writes across. Which is exactly why TLC overtook MLC, and MLC overtook SLC. Go the frick away.
  • GeoffreyA - Sunday, May 2, 2021 - link

    Not a fan of freedom of speech/expression/press? Especially when it comes to these money-driven corporations, one needs to put whatever they do under a microscope and pay little heed to their words.

    There's nothing wrong with QLC. It's a product with a place: supposedly, bigger size and cheaper price. (Concerning endurance, if the ratings are true and not made up, they ought to be fine for most people.) But as far as I can see, QLC isn't *that* much cheaper than TLC. About 15-20% or something to that effect. Costly to make, greed, the pandemic, or all three?
  • futrtrubl - Sunday, May 2, 2021 - link

    That's pretty much what it should be. QLC holds about 33% more than TLC, or, for the same amount of storage QLC uses 25% fewer cells. It's a less mature tech so I wouldn't expect it to get the full 25% savings, and all the other common components will reduce savings too.
  • MFinn3333 - Sunday, May 2, 2021 - link

    It is 50% more, not 33%.

    If you look at the total number of states, a TLC cell has 8 states (3 bits per cell) versus a QLC cell's 16 states (4 bits per cell).
  • Billy Tallis - Sunday, May 2, 2021 - link

    Voltage states are an enumeration of possibilities; they do not occupy physical space and are not the correct quantity to compare when discussing storage capacity.
  • FunBunny2 - Monday, May 3, 2021 - link

    "Voltage states are an enumeration of possibilities; they do not occupy physical space"

    well... would you deny that a larger cell, i.e. one with more atoms, is more capable of storing more distinct voltages with some delta of accuracy? not to mention the whole endurance thingee. IOW, as an applied physics problem, QLC is closer to the razor's edge of performance than lower xLC cells. were all this not true, then manufacturers would have been wasting gobs and gobs of moolah implementing stacks of 'olde' larger-node NAND for TLC and QLC.
  • Tamdrik - Sunday, May 2, 2021 - link

    But most people don't measure their storage devices by how many different states they can maintain-- they measure them by how many bits (or xxx-bytes) they can store. By that (standard) measure, futrtrubl is correct in that a QLC drive holds 33% more data than a TLC drive with the same number of cells (4 bits per cell vs. 3 bits per cell), and if costs are the same per cell, would be expected to cost 25% less for a given capacity (e.g., 1 TB).
  • Oxford Guy - Sunday, May 9, 2021 - link

    'But most people don't measure their storage devices by how many different states they can maintain-- they measure them by how many bits (or xxx-bytes) they can store.'

    You really think that's a logically-sound rebuttal?
