Managed NAND Performance: It’s All About Use Case

Last week the UK journal PC Pro published an interesting article about fast SD cards (http://www.pcpro.co.uk/features/380167/does-your-camera-need-a-fast-sd-card), with a good description of the SD card Class system. With some clever testing, the authors show how six cards perform in a continuous-shooting situation.

These tests also demonstrate how the SD card manufacturers have customized their firmware to handle sequential write cases. A Class 10 card requires a minimum of 10 MB/sec throughput, and a supplemental rating system for Ultra High Speed (UHS) indicates a higher clock rate and correspondingly higher transfer rate. For larger frame sizes (12-megapixel photos, HD video), high transfer rates are a requirement. The resulting data is almost always sequential, which matches the firmware characteristics well.

The article makes one more interesting point: performance measurements taken with an SD card in a desktop system don’t always reflect the real use case. The authors therefore run their tests in an actual camera, getting as close to the use case as possible.

For applications that use random I/O (as on tablets and other Android devices), these firmware optimizations aren’t necessary; in some cases they actually lower random I/O performance. Similar firmware shows up in eMMC media as well. A software solution (such as FlashFXe) can adjust much of the I/O to be more sequential, more closely matching the firmware-optimized path.
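The core idea behind converting random I/O into sequential I/O can be illustrated with a tiny log-structured remapping layer. This is a hypothetical sketch of the general technique, not FlashFXe’s actual implementation: logical sector addresses are remapped so every physical write lands at the head of an append-only log, which is exactly the sequential stream that managed-NAND firmware is tuned for.

```python
# Minimal sketch (hypothetical, not Datalight's FlashFXe implementation):
# a log-structured remapping layer turns random logical writes into
# sequential physical writes, which managed-NAND firmware handles well.

class LogStructuredRemap:
    def __init__(self):
        self.mapping = {}        # logical sector -> physical sector
        self.next_physical = 0   # log head: always appends sequentially
        self.log = []            # stands in for the physical medium

    def write(self, logical_sector, data):
        # Regardless of the logical address, the physical write lands
        # at the log head, so the device sees a sequential stream.
        self.mapping[logical_sector] = self.next_physical
        self.log.append(data)
        self.next_physical += 1

    def read(self, logical_sector):
        return self.log[self.mapping[logical_sector]]

# Random-looking logical writes...
remap = LogStructuredRemap()
for lsn in (907, 3, 512, 44):
    remap.write(lsn, f"data@{lsn}")

# ...become physically sequential: physical sectors 0, 1, 2, 3 in write order.
print([remap.mapping[lsn] for lsn in (907, 3, 512, 44)])  # → [0, 1, 2, 3]
```

A real remapping layer must also garbage-collect stale log entries and persist the mapping, which is where most of the engineering effort lives; the sketch shows only the address-translation idea.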

At Embedded World a few weeks ago we recorded our demonstration showing the benefits of our new FlashFXe product on eMMC.

Watch our FlashFXe Demo Video Here

Thom Denholm | March 15, 2013 | Flash Memory, Flash Memory Manager, Performance | Leave a comment

Even When Not Using a Database, You Are Still Using a Database

Recently, we’ve focused considerable development effort on improving database performance for embedded devices, specifically for Android. This is because Android is a particularly database-centric environment.

On an Android platform, each application is equipped with its own SQLite database. Data stored here is accessible by any class in the application, but not by outside applications. The database is entirely self-contained and server-less, while still being transactional and still using the standard SQL language for executing queries. With this approach, a crash in one application (the dreaded “force close” message) will not affect the data store of any other application. While fantastic for protection, this method is quite often implemented on flash media, which was designed for large sequential reads and writes.

For years, benchmarks have touted the pure performance of a drive through large sequential reads and writes. On managed flash media, firmware programmers have responded by optimizing for this use case – at the expense of the random I/O used by most databases, including SQLite. Another challenge is the very high ratio of flushes performed by the database (sometimes 1:1). The majority of database writes are also not aligned on sector boundaries – especially problematic for flash media, which must program an entire page at a time.
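The flush ratio mentioned above is easy to see with SQLite itself. The sketch below, using Python’s standard sqlite3 module against an in-memory database, contrasts autocommit mode (one commit, and on real media one flush, per statement) with an explicit transaction that amortizes a single commit over many writes; timings on actual flash would differ, but the commit counts would not.

```python
import sqlite3

# Illustrates the flush behavior described above: in autocommit mode,
# SQLite commits (and on real media, flushes) after every statement,
# while an explicit transaction amortizes one commit over many writes.
# An in-memory database is used here so the example is self-contained.

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, v REAL)")

# Autocommit: each INSERT is its own transaction -> one flush per write.
for i in range(100):
    conn.execute("INSERT INTO readings (v) VALUES (?)", (i * 0.5,))

# Batched: one BEGIN/COMMIT pair -> one flush covers all 100 writes.
conn.execute("BEGIN")
for i in range(100):
    conn.execute("INSERT INTO readings (v) VALUES (?)", (i * 0.5,))
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # → 200
```

On flash media the difference matters because each flush forces the device to program pages immediately, defeating any write coalescing the firmware might otherwise do.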

While there are a few unified “flash file systems” for Linux, such as YAFFS and JFFS2, designed specifically for flash memory, they have fallen out of favor because they do not plug neatly into the standard software stack and therefore cannot take advantage of standard Linux features such as the system cache. Traditional file systems such as VFAT and Ext2/3/4 can work with flash, but they were not designed for it, and their performance and reliability suffer. For example, discard support has largely been tacked onto Linux file systems and is still considered somewhat experimental. To quote the Linux v3.5 Ext4 documentation, discard support is “off by default until sufficient testing has been done.” Another example: file systems on flash memory typically benefit from a copy-on-write design, which ext4 does not use. The reality is that most file systems are designed for desktop (and often server) environments, where high resource usage is acceptable and power loss is infrequent.

Our solution to improving database performance on flash memory is to provide a more unified solution where the various pieces of the stack work in a cohesive fashion. Furthermore, the solution is specifically designed for embedded systems using flash memory, where power-loss is a common event. Datalight’s Reliance Nitro file system is a transactional, copy-on-write file system, designed from the ground up to support flash memory discards and power-loss safe operations.
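The power-loss safety of a copy-on-write design comes from never updating data in place. This toy sketch (an illustration of the general principle, not Reliance Nitro’s actual on-media design) writes each update to a fresh block and only then flips a single root pointer, so an interruption mid-write leaves the last committed version untouched.

```python
# Toy sketch of the copy-on-write principle (not Reliance Nitro's design):
# an update is written out of place, and a root pointer is switched only
# after the write completes, so power loss mid-write leaves the old
# committed version intact.

class CowStore:
    def __init__(self):
        self.blocks = {}     # block id -> content (the "medium")
        self.root = None     # id of the last committed block
        self._next = 0

    def commit(self, content):
        new_id = self._next
        self._next += 1
        self.blocks[new_id] = content   # write out of place first...
        self.root = new_id              # ...then atomically flip the root
        return new_id

    def read(self):
        return self.blocks[self.root]

store = CowStore()
store.commit("v1")

# Simulate power loss mid-update: a new block was written,
# but the root pointer was never flipped.
store.blocks[99] = "v2-partial"
print(store.read())  # → "v1"  (the committed version survives)

store.commit("v2")
print(store.read())  # → "v2"
```

A real file system applies the same pattern to whole trees of metadata, but the invariant is identical: the root flip is the commit point.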

The result of our work in this area is FlashFXe, a new Datalight product built on our many years of experience managing raw NAND, but designed for eMMC. When used together with Reliance Nitro, almost all write operations become sequential and aligned on sector boundaries for the highest performance. Internal operations are more efficiently organized for the copy-on-write nature of flash media. A multi-tiered approach allows small random writes with very frequent flushes to be efficiently handled while maintaining power-loss safe operations.

This month at Embedded World, we will be demonstrating the results of our efforts to improve database performance on embedded devices using Android. Prepare to be impressed!

Learn more about FlashFXe

Thom Denholm | February 12, 2013 | Datalight Products, Flash File System, Performance | Leave a comment

Why CRCs are important

Datalight’s Reliance Nitro and journaling file systems such as ext4 are designed to recover from unexpected power interruption. These “post mortem” recoveries typically consist of determining which files are in which states and restoring them to the proper working state. Methods like these are fine for recovering from a power failure, but what about a media failure?

When a media block fails, it is either in the empty space, the user data, or the file system data. A block from the empty space can be detected on the next write, which will either cause failure at the application, or will be marked bad internally and the system will move on to another block. When a media block in the user space fails, it cannot be reliably read. Often, the media driver will detect and report an unreadable sector, resulting in an error status (and probably no data) to the user application. When a media block containing file system data or metadata fails, it is the responsibility of the file system to detect and (if possible) repair that damage. Often the best thing that can be done is to stop writing to the media immediately.

In some ways, blocks lost due to media corruption present a problem similar to recovering deleted files. If it is detected quickly enough, user analysis can be done on the cyclical journal file, and this might help determine the previous state of the file system metadata. Information about the previous state can then be used to create a replacement for that block, effectively restoring a file.

Metadata checksums were added to several ext4 file system data structures in the Linux 3.5 kernel release. Noticeably absent from this list are the indirect and double-indirect pointer blocks, used to allocate trees of blocks for very large files. The latest release of Datalight’s Reliance Nitro file system (version 3.0) adds CRCs to all file system metadata and internal blocks, allowing rapid and thorough detection of media failures.

Optional within this new version of Reliance Nitro is using CRCs on user data blocks, for individual files or entire volumes. This failsafe can be configured to write protect the volume or halt system operations. Diagnostic messages are also available to indicate the specific logical block number of the corrupted block.
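The mechanics of per-block CRC protection are straightforward to sketch. The example below (a generic illustration using CRC-32 from Python’s zlib module, not Reliance Nitro’s on-disk format) seals a block with a checksum at write time and verifies it at read time, catching even a single flipped bit from a failing flash cell.

```python
import zlib

SECTOR_SIZE = 512

def seal_block(payload: bytes) -> bytes:
    """Append a CRC-32 so corruption can be detected on read."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "little")

def check_block(block: bytes) -> bool:
    """Recompute the CRC over the payload and compare to the stored one."""
    payload, stored = block[:-4], int.from_bytes(block[-4:], "little")
    return zlib.crc32(payload) == stored

block = seal_block(b"\x42" * SECTOR_SIZE)
print(check_block(block))  # → True

# Flip one bit, as a failing flash cell might:
damaged = bytes([block[0] ^ 0x01]) + block[1:]
print(check_block(damaged))  # → False
```

A CRC only detects corruption; it is the file system’s policy (write-protect the volume, halt, log the failing logical block) that turns detection into the diagnostics described above.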

The combination of full CRC protection on every metadata block and optional protection of user file data blocks is one of the key attributes of this release of Reliance Nitro. Embedded system designers can detect more media failures in testing, and can diagnose failed units more quickly, leading to greater success in the marketplace.

Learn more about Reliance Nitro

Thom Denholm | January 26, 2013 | Flash File System, Flash Memory, Reliability | Leave a comment

eMMC Problems

If you’ve been following this blog, you’ve probably noticed a lot of discussion and analysis around eMMC. We’ve written about the reasons we are so excited about eMMC, but also why the Write Amplification issues caused by eMMC parts are a problem that needs more attention by the industry.

As more and more device manufacturers use eMMC in their devices, product reviews are beginning to highlight some of the limitations we have been discussing. A case in point is this recent review of the Google Nexus 7 by Anand Lal Shimpi and Brian Klug.

As the review points out, the performance downside of using eMMC parts is that they are “optimized for reading and writing large images as if they were used in a camera.” Also, eMMC was never designed to be used by a “full blown multitasking OS,” and therefore can cause major problems with device responsiveness. This is mainly because multi-tasking (i.e., any other action performed while a download is in progress) effectively “turns the IO stream from purely sequential to pseudo-random.” This corroborates our view that many eMMC parts are not equipped for optimal random read and write performance. The authors’ benchmark results underscore the severity of the problem.

So, how can device manufacturers get better performance from their eMMC parts, and continue to leverage the simplicity of programming and consistency of design parameters that eMMC offers?

Simply put, the eMMC driver is responsible for flash-aware allocation of data to flash memory. The combined layers of the driver and the file system, sometimes known as the flash file system, are where hardware behavior can be translated to software behavior in a way that enhances performance without compromising endurance or data integrity. The complementary interaction between the driver and the file system layer can bring further benefits to device performance, endurance, and reliability. Getting this part of the system right goes a long way toward solving eMMC’s write amplification problem.
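Write amplification itself has a simple definition: the ratio of bytes the device physically programs to the bytes the host asked to write. The sketch below (a simplified model with an assumed 4 KB page size; real firmware behavior is more complex) shows why small, unaligned writes are so costly when the device must program whole pages.

```python
PAGE = 4096  # assumed NAND page size for this model

def pages_touched(offset, length, page=PAGE):
    """Pages the device must reprogram for one host write: any partially
    covered page forces a read-modify-write of the whole page."""
    first = offset // page
    last = (offset + length - 1) // page
    return last - first + 1

def write_amplification(writes, page=PAGE):
    """(physical bytes programmed) / (host bytes written) for a workload,
    where each write is an (offset, length) pair."""
    host = sum(length for _, length in writes)
    device = sum(pages_touched(off, ln, page) * page for off, ln in writes)
    return device / host

# 100 page-aligned, full-page writes: the ideal case.
aligned = [(i * PAGE, PAGE) for i in range(100)]
# 100 small 512-byte writes, each landing inside a page.
small = [(i * PAGE + 100, 512) for i in range(100)]

print(write_amplification(aligned))  # → 1.0
print(write_amplification(small))    # → 8.0
```

Real write amplification also includes garbage-collection traffic inside the part, so measured ratios on actual eMMC can be considerably worse than this model suggests.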

Here at Datalight, we have been researching the most efficient way of doing this, drawing on our decades of experience developing driver and file system software for a wide array of flash parts. Stay tuned for more in-depth explanations of how we’re doing it, but for now we are very excited about the early test results we’re seeing in our lab, especially enhancements combining an optimized file system with our new eMMC driver.

Learn more about Datalight's eMMC solutions

Aparna Bhaduri | December 19, 2012 | Datalight Products, Flash Industry Info, Flash Memory | 1 Comment

Multithreading in Focus: Performance & Power Efficiency

We’re constantly on the lookout for ways to help our customers boost performance and improve power efficiency, and often our inspiration comes by way of the conversations we have with them. Recently, several of these discussions highlighted user scenarios where the complexity of the application would benefit from an enhancement to the classic Dynamic Transaction Point™ technology found in our Reliance Nitro file system. Here are a couple examples of the user scenarios I’m talking about, specifically for multi-threaded environments:

In a multi-threaded system, the activity among threads can be unpredictable, sometimes requiring multiple writes by the file system to the media within milliseconds. Each write requiring its own transaction commit or flush by the file system takes a toll on performance with no real reliability benefit.

Another challenge in a multi-threaded system is power efficient utilization of the processor when the file system is configured to commit data after specific time intervals. These transactions “wake up” the processor just to generate a request, even though no actual commits or flushes occur if there was no disk activity since the last transaction point. This unnecessary activation of an inactive processor is a waste of valuable power. By suspending thread activity until new disk activity occurs, battery life could be extended significantly.
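The power-saving idea above can be sketched with a condition variable. This is a hypothetical illustration of the technique, not Reliance Nitro’s internals: instead of a timer that wakes the processor every interval regardless of activity, the flush thread blocks until a write actually marks the volume dirty, so an idle system issues no commits and burns no cycles.

```python
import threading

# Sketch of an idle-aware flusher (hypothetical, not Reliance Nitro's
# implementation): the flush thread blocks on a condition variable,
# waking only when disk activity makes a commit worthwhile.

class IdleAwareFlusher:
    def __init__(self):
        self._cond = threading.Condition()
        self._dirty = False
        self.flushes = 0

    def note_write(self):
        """Called on each write: mark the volume dirty and wake the flusher."""
        with self._cond:
            self._dirty = True
            self._cond.notify()

    def wait_and_flush(self, timeout=None):
        """Block (consuming no CPU) until there is something to commit."""
        with self._cond:
            if not self._dirty:
                self._cond.wait(timeout)
            if self._dirty:
                self.flushes += 1       # one real transaction commit
                self._dirty = False
                return True
            return False  # woke with nothing to do: no commit issued

flusher = IdleAwareFlusher()
flusher.note_write()
print(flusher.wait_and_flush(timeout=0.1))  # → True  (one real commit)
print(flusher.wait_and_flush(timeout=0.1))  # → False (no disk activity)
print(flusher.flushes)  # → 1
```

The same structure also addresses the first scenario above: multiple writes arriving between wake-ups coalesce into one dirty flag and therefore one commit, instead of one flush per write.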

Understanding how customers use the configurable transaction points of our Reliance Nitro file system was instrumental in improving Reliance Nitro. Below is a little background on Reliance Nitro and Dynamic Transaction Point technology:

The Reliance Nitro file system is a highly reliable, power interrupt-safe transactional file system. Keeping the reliability intact without risking loss or corruption of data means that customers have the flexibility to configure when a “transaction” (i.e. a set of operations that constitute a change as a whole), is to be written to the storage media from cache. This can even be done during operation of the device (run time), and includes the following options:

(a) Timed: Transacts (commits to storage media) after a specified time interval (e.g., commit data to storage media every 10 milliseconds).

(b) Automatic: Transacts every time a specified file system event occurs (e.g., a handheld scanner commits every time the database file is closed (file_close)).

(c) Application-controlled: Transacts when the application decides all conditions are met (e.g., several interdependent files that must be updated together have all been changed).

Using these options in combination gives customers the flexibility to choose exactly which conditions should trigger a transaction to protect important data, a precision that enables total control over the balance between performance and protection for any use case.

Our efforts to address the needs of our multi-threaded customers described at the beginning of this blog post have led us to the next big breakthrough in embedded file system design, and the next big feature for Reliance Nitro. I will be blogging more about this feature soon!

Also coming soon is our 2012 Customer Survey, another way we seek to continuously improve our understanding of what our customers need. We sincerely hope to get your feedback on the survey, but don’t hesitate to contact us anytime if you have suggestions for improvement.

Learn More About Dynamic Transaction Point technology

Aparna Bhaduri | October 15, 2012 | Datalight Products, Flash File System, Flash Memory | Leave a comment

Device Longevity using Software

The new chief executive for Research in Motion Ltd., Thorsten Heins, mentioned recently that 80 to 90 percent of all BlackBerry users in the U.S. are still using older devices, rather than the latest Blackberry 7.

Longevity of a consumer device is something that we at Datalight know belongs firmly in the hands of the product designer, rather than being limited by the shortened lifespan of incorrectly programmed NAND flash media. Both Datalight’s FlashFX Tera and Reliance Nitro incorporate algorithms that reduce write amplification on all flash media. These methods are especially important on eMMC, which is at its heart NAND flash. In addition, the static and dynamic wear leveling in FlashFX Tera wears all flash evenly for the maximum achievable lifetime.
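The essence of dynamic wear leveling can be shown in a few lines. This is a simplified model of the general technique (not FlashFX Tera’s algorithm): new writes are always allocated from the free erase block with the lowest erase count, so wear spreads evenly across the part instead of concentrating on a few hot blocks.

```python
import heapq

# Simplified dynamic wear leveling model (not FlashFX Tera's algorithm):
# allocate from the free erase block with the lowest erase count.

class WearLeveler:
    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block) makes allocation O(log n).
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def allocate(self):
        """Hand out the least-worn free block."""
        count, block = heapq.heappop(self.free)
        return block, count

    def release(self, block, count):
        """Block was erased for reuse: its erase count goes up by one."""
        heapq.heappush(self.free, (count + 1, block))

wl = WearLeveler(4)
# Cycle 8 allocations through 4 blocks: wear spreads evenly.
for _ in range(8):
    blk, cnt = wl.allocate()
    wl.release(blk, cnt)
print(sorted(c for c, _ in wl.free))  # → [2, 2, 2, 2]
```

Static wear leveling goes a step further by occasionally relocating long-lived cold data off the least-worn blocks so those blocks can rejoin the rotation; the sketch covers only the dynamic half.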

A shorter lifetime may be acceptable for some consumer devices, such as low-end cell phones. However, consumers expect many newer converged mobile devices that command a higher price, such as tablets, to have a much longer lifetime. These devices may be replaced by the primary user with some frequency, but since they are viewed as mini-computers and therefore less “disposable,” they will likely be handed down to younger users rather than discarded or recycled. Consumers will protest if they discover their $500 tablet has a lifespan of only three years, and they will be even more upset if, due to increasing flash densities and write amplification, the next version they purchase has an even shorter one.

How will flash longevity affect your new embedded design?

Thom Denholm | March 6, 2012 | Extended Flash Life, Flash Industry Info, Flash Memory, Flash Memory Manager | Leave a comment

Datalight Sponsors Local High School Robotics Team

The Arlington Neobots are not like other high school technology clubs. For one thing they have access to a phenomenal pool of mentors from local technology companies like Boeing, Microsoft and now Datalight. They also have a growing number of female members, a rarity in youth organizations oriented to math and science.

Founded in 2008 with seed money from Boeing, the team competes in an annual robot-building competition created by the national non-profit organization FIRST (For Inspiration and Recognition of Science and Technology), and this year the competition is already ramping up. For 2012, FIRST has challenged the robotics teams to a game similar to basketball called Rebound Rumble. Six teams are split up into two alliances of three; one alliance is blue and the other red. During the 2-minute and 15-second match, teams compete by trying to make as many baskets as they can. Part of the match is devoted to a 15-second autonomous mode where the robot is controlled through an Xbox Kinect instead of the robot’s standard remote control. There are four hoops – one high, two middle, and one low. The higher the hoop, the more points awarded for making a basket in it.

The Neobots will need to work together in teams to finish their robot by the competition deadline. First, the one-week design phase involves team analysis of the game and its rules manual, and a group decision on game strategy and design criteria for the team robot. Next, the team will split into design groups to brainstorm, research, and present their findings to the team. Then, using 3D models and prototypes, each group will propose a robot design to be voted on by the team. After the design is established, the build phase involves again breaking into sub-groups, each assigned projects like System Integration, Programming, and Drive-Base. The team will follow an iterative process; every major milestone will be tested rigorously before they proceed.

You might ask why Datalight would sponsor a high school robotics club. VP of Engineering Ken Whitaker puts it this way: “This is one of the most important things we can do as a technology company. What you’re seeing in its raw form is the next generation of embedded engineers, and we have a responsibility to nurture and support them. In a few years’ time I could see any of these motivated students ending up on my engineering team.”

Learn more about Datalight

Rob Hart | February 20, 2012 | Datalight Products | Leave a comment

Software Perspective on eMMC

We here at Datalight are seeing a lot of interest, across a broad spectrum, in this week’s “Software Perspective on eMMC” presentation. This is apparently a pretty hot topic!

If you are interested in joining us, seats are still available – http://www.datalight.com/welcome/web-seminar-switching-to-emmc

Thom Denholm | December 5, 2011 | Datalight Products, Flash Industry Info | Leave a comment

Advances in Nonvolatile Memory Interfaces Keep Pace with the Data Volume

This article, entitled Advances in Nonvolatile Memory Interfaces Keep Pace with the Data Volume and recently published in RTC Magazine, gives a nice overview of maintaining performance on newer memory technologies.

Learn more about Datalight and ClearNAND

Michele Pike | November 22, 2011 | Flash Memory, Flash Memory Manager, Performance | Leave a comment

Datalight Outperforms Other Linux Flash File Systems

It’s always gratifying when you run benchmarks and discover your product actually does outperform the competition. Months and months of development effort went into making Reliance Nitro and FlashFX Tera run flawlessly in an open source environment. We were pretty sure our transactional architecture beat the pants off YAFFS2, JFFS2, and UBIFS, but until you run the final benchmarks, you really don’t know for certain. Recently we ran tests on two platforms, a ConnectCore Wi-i.MX51 (Cortex-A8) and an NVIDIA Tegra 2 (Cortex-A9). The flash part used for all tests was a 512 MB Samsung part. The specific test was IOZone, with a file size large enough to exceed the Linux cache in order to better reflect raw throughput. The results speak for themselves.

Also see an article weighing the pros and cons of JFFS2

Michele Pike | July 15, 2011 | Flash File System, Flash Memory Manager | Leave a comment