In M-bit/cell multi-level cell (MLC) flash memories, it becomes more difficult to guarantee the reliability of data as M increases. The reason is that an M-bit/cell MLC has 2^M states whereas a single-level cell (SLC) has only two states. Hence, compared to SLC, the margin of MLC is reduced, making it sensitive to a number of degradation mechanisms such as cell-to-cell interference and charge leakage. In flash memories, the distances between the 2^M states can be controlled by adjusting the verify levels during incremental step pulse programming (ISPP). For high data reliability, the control of verify levels in ISPP is important because the bit error rate (BER) is affected significantly by the verify levels. As M increases, verify level control becomes more important and more complex. In this article, we investigate two verify level control criteria for MLC flash memories. The first criterion is to minimize the overall BER and the second criterion is to make the page BERs equal. The choice between these criteria relates to flash memory architecture, bits per cell, reliability, and speed performance. Considering these factors, we discuss a strategy for verify level control in hybrid solid state drives (SSDs), which are composed of flash memories with different numbers of bits per cell.
Flash memory is now the fastest growing memory segment, driven by the rapid growth of mobile devices and solid state drives (SSDs). To satisfy the market demand for lower cost per bit and higher density of nonvolatile memory, there are two approaches: (1) technology scaling and (2) multi-level cell (MLC) storage [1-4].
As the technology continues to scale down, flash memories suffer from more severe physical degradation mechanisms such as cell-to-cell interference (coupling) and charge leakage [5,6]. In addition, M-bit/cell MLC flash memories have 2^M states within the threshold voltage window whereas the single-level cell (SLC) has only two states. Therefore, the reliability of stored data is an important challenge for high density flash memories.
In order to cope with this reliability problem, many approaches have been proposed. The incremental step pulse programming (ISPP) scheme, which is the most widely used programming scheme, was proposed to maintain a tight cell threshold voltage distribution for high reliability [7,8]. ISPP is a program-and-verify strategy with a staircase program voltage Vpp as illustrated in Figure 1, where ΔVpp is the incremental step size. During each program-and-verify cycle, the floating gate threshold voltage is first boosted by up to ΔVpp and then compared with the corresponding verify level. If the threshold voltage of the memory cell is still lower than the verify level, the program-and-verify iteration continues. Otherwise, further programming of this cell is disabled [7-10].
Therefore, the positions of the program states (except the erase state) are determined by the verify levels, and the tightness of each program state depends on the incremental step size ΔVpp. By reducing ΔVpp, the cell threshold voltage distribution can be made tighter, but the programming time will increase [7,8]. In brief, ISPP can control both the distances between states (via the verify levels) and the tightness of the program states (via the incremental step size).
For SLC, determining the verify level of the program state is a simple problem because there is only one program state and the margin between the erase state and the program state is sufficiently large that small changes in the margin will not change the error rates noticeably. However, the verify level control issue for M-bit/cell flash memories is more important and complex than that for SLC. This is because 2^M states have to be crammed within the given constrained threshold voltage window W. More states significantly reduce the margin between states, and bit error rates (BERs) vary in response to small changes in the verify levels. Furthermore, the number of verify levels which ISPP has to control increases from 1 (for SLC) to 2^M − 1 (for M-bit/cell MLC). In addition, as explained in the following, the multipage architecture of MLC flash memories makes verify level control more complex than for SLC.
Most MLC flash memories adopt the multipage architecture. An important property of the multipage architecture is that different bits of a single cell are assigned to different pages [10-15]. Therefore, the BERs of the pages can differ. As a page is the unit of data that is programmed and read at one time, error control coding (ECC) must be applied within the same page. This means that each page is composed of one or several codewords. Therefore, the ECC has to be designed for the worst page BER, and this leads to wasted redundancy for the other (i.e., better) pages. This uneven page BER problem is an important and practical issue, and there have been several attempts to deal with it [11-15].
To deal with this uneven page BER issue, we investigate two verify level control criteria for MLC flash memories. The first criterion is to minimize the overall BER. The second criterion is to make all page BERs equal. These two criteria will be formulated as convex optimization problems. After solving these optimization problems, we will compare the numerical results from the two criteria. In addition, the advantages and disadvantages of the two criteria will be discussed based on the reliability, speed performance, and architecture of MLC flash memories. To the best of the authors' knowledge, the convex optimization approach for the verify levels of ISPP has not been addressed in the open literature, though experimental approaches may have been investigated in industry.
An interesting way to combine the speed advantage of SLC and the cost advantage of MLC is to use a hybrid solid state drive (SSD) that judiciously uses both SLC and MLC flash memories. The basic idea of hybrid SSD is to complement the drawbacks of SLC and MLC with each other’s advantages [16-19]. Based on the architecture of the hybrid SSD and properties of the proposed verify level control criteria, we propose a strategy to apply the proper verify level control criterion for the hybrid SSD. This strategy is aimed at both reliability and speed performance.
The rest of this article is organized as follows: Section “Cell threshold voltage distribution” discusses the cell threshold voltage distribution under the assumption of a Gaussian mixture model (GMM). Based on this statistical model, the overall BER and the page BER are derived. Sections “Criteria for verify level control” and “ECC and flash memories of multipage architecture” address verify level control criteria and discuss their advantages and disadvantages for various MLCs (M = 2 ∼ 4) considering multipage architecture and ECC. Section “Hybrid SSD and strategy for verify level control” proposes a method to choose these criteria for the hybrid SSD based on reliability and speed performance. Finally, Section “Conclusion” concludes this article.
Cell threshold voltage distribution
In M-bit/cell flash memory, the cell threshold voltage distribution is composed of 2^M states from S0 (the erase state) to S_{2^M−1} (the highest state). Even though there are tail cells and asymmetry in cell distributions, the cell threshold voltage distribution of flash memories can be approximated as a sum of Gaussian distributions [6,20,21]. Therefore, we will model the cell threshold voltage distribution f(x) by the following GMM:

f(x) = Σ_{i=0}^{2^M−1} P(S_i) f_i(x),    (1)
where x refers to the threshold voltage and f_i(x) is a Gaussian pdf with mean μ_i and standard deviation σ_i corresponding to the state S_i. P(S_i) is the probability of the state S_i. If the data size is sufficiently large and a scrambler is used, then we can assume that P(S_i) ≈ 1/2^M with high probability.
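As a concrete illustration, the mixture density above can be evaluated numerically. The state means and standard deviations below are hypothetical values chosen for illustration, not measured device parameters; a broader erase state is assumed.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Gaussian pdf f_i(x) with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def gmm_pdf(x, mus, sigmas):
    """Cell threshold voltage distribution f(x) = sum_i P(S_i) f_i(x),
    assuming scrambled data so that every state is equiprobable."""
    p = 1.0 / len(mus)
    return sum(p * gaussian_pdf(x, m, s) for m, s in zip(mus, sigmas))

# Hypothetical 2-bit/cell parameters: four states spread over W = 5.
mus = [0.0, 5.0 / 3.0, 10.0 / 3.0, 5.0]
sigmas = [0.45, 0.35, 0.35, 0.35]
```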
Figure 2 shows the cell distribution of 2-bit/cell flash memories. There are four states from S0 (the erase state) to S3 (the highest state) within the constrained voltage window W. The constrained voltage window W is the distance between the mean of the erase state and the mean of the highest state, which is given by

W = μ_{2^M−1} − μ_0.    (2)
Figure 2. Cell threshold voltage distribution for 2-bit/cell flash memories. There are four states from S0 to S3. Each state Si can be modeled by the distribution fi.
The overall BER (i.e., BER_overall) is the total number of erroneous bits divided by the total number of data bits, which contains the data of all pages. If Gray mapping is used, there is only a one-bit difference between S_i and S_{i+1}. For example, in 2-bit/cell MLC, states S0, S1, S2, and S3 denote bit patterns 11, 10, 00, and 01, respectively. The probability that a cell is misread as a state more than two states away from the original state is much smaller than the probability of it being misread as an adjacent state, and is thus negligible. Therefore, the overall BER can be expressed as

BER_overall = (1/M) Σ_{i=0}^{2^M−2} [P(S_i) Q(Δ_{i,1}/σ_{i,1}) + P(S_{i+1}) Q(Δ_{i+1,0}/σ_{i+1,0})],    (3)
where Δ_{i,1} is the distance from μ_i to D_{i,i+1} and Δ_{i+1,0} is the distance from μ_{i+1} to D_{i,i+1}. D_{i,i+1} is the optimal decision level between S_i and S_{i+1}, which satisfies the condition f_i(D_{i,i+1}) = f_{i+1}(D_{i,i+1}) [22-24]. In addition, σ_{i,0} and σ_{i,1} are used separately for convenience although σ_{i,0} = σ_{i,1} = σ_i. The tail probability function Q(x) is defined as

Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt.    (4)
If we change the index of Δ_{i,j} into Δ_k and σ_{i,j} into ρ_k by k = 2i + j, (3) can be rewritten as a weighted sum of terms Q(Δ_k/ρ_k) for k = 1, …, 2(2^M − 1),
where all Δ_k's are positive since it is natural that μ_{i+1} > μ_i.
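The rewritten overall BER above can be computed directly. The following is a minimal sketch assuming equiprobable states, so that each Q term carries the weight 1/(M·2^M); the half-distances Δ_k and standard deviations ρ_k are the inputs.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def overall_ber(deltas, rhos, M):
    """Overall BER as a weighted sum of Q(Delta_k/rho_k): under Gray mapping
    each adjacent-state boundary contributes one bit error per misread cell.
    Assumes equiprobable states, so every term has weight 1/(M * 2^M)."""
    assert len(deltas) == len(rhos) == 2 * (2 ** M - 1)
    w = 1.0 / (M * 2 ** M)
    return w * sum(Q(d / r) for d, r in zip(deltas, rhos))
```

For a 2-bit/cell device with W = 5 and uniform half-distances W/6, this reproduces the closed form (3/4)·Q(W/(6σ)).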
Most MLC flash memories adopt multipage architectures. In a multipage architecture, ECC encoding and decoding are performed within each page. This means that pages with higher BERs will suffer from a worse decoding failure rate. Therefore, the BER of each page can be more important than the overall BER in terms of ECC [11,13-15].
The page BER (i.e., the BER of each page) depends on the mapping scheme that converts a state level into the corresponding bit representation. We will define BER_page m as the BER of page m. For example, if the 2-bit/cell flash memory adopts the Gray mapping of Table 1, the data of page 1 are obtained by one read operation between S1 and S2. Therefore, the BER of page 1 is determined by f1(x) and f2(x). In order to read the data of page 2, two read operations (one between S0 and S1, and another between S2 and S3) are required. Then the BER of page 2 is determined by f0(x), f1(x), f2(x), and f3(x). Therefore, the page BERs for 2-bit/cell are given by

BER_page 1 = P(S1) Q(Δ_3/ρ_3) + P(S2) Q(Δ_4/ρ_4),
BER_page 2 = P(S0) Q(Δ_1/ρ_1) + P(S1) Q(Δ_2/ρ_2) + P(S2) Q(Δ_5/ρ_5) + P(S3) Q(Δ_6/ρ_6).    (6)
Table 1. Gray mapping for 2-bit/cell flash memories
By the same method, page BERs for 3-bit/cell adopting the Gray mapping of Table 2 are given by
Table 2. Gray mapping for 3-bit/cell flash memories
Similarly, page BERs for 4-bit/cell or more could be derived from the mapping scheme provided.
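The 2-bit/cell page BERs can be evaluated with a few lines of code. This sketch assumes equiprobable states (each Q term has weight P(S_i) = 1/4) and indexes the half-distances by k = 2i + j as in the text: page 1 reads the single boundary S1|S2 (k = 3, 4), page 2 reads the boundaries S0|S1 and S2|S3 (k = 1, 2, 5, 6).

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def page_bers_2bit(deltas, rhos):
    """Page BERs of 2-bit/cell under the Gray mapping of Table 1.
    deltas/rhos are indexed k = 1..6 (list index k-1); equiprobable states."""
    q = [Q(d / r) for d, r in zip(deltas, rhos)]
    page1 = 0.25 * (q[2] + q[3])            # boundary S1|S2
    page2 = 0.25 * (q[0] + q[1] + q[4] + q[5])  # boundaries S0|S1 and S2|S3
    return page1, page2
```

With equal half-distances and equal standard deviations, page 2 has exactly twice the BER of page 1, since it reads twice as many boundaries.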
The overall BER of (3) can be expressed as the mean of the page BERs, which is given by

BER_overall = (1/M) Σ_{m=1}^{M} BER_page m.    (8)
The distance between the means of S_i and S_{i+1} is μ_{i+1} − μ_i = Δ_{i,1} + Δ_{i+1,0}. We will term μ_{i+1} − μ_i the distance from S_i to S_{i+1}, which is determined by Δ_{i,1} and Δ_{i+1,0}. For all states, we define the two parameter vectors

Δ = (Δ_1, Δ_2, …, Δ_{2(2^M−1)}),
ρ = (ρ_1, ρ_2, …, ρ_{2(2^M−1)}),
where Δ_k = Δ_{i,j} and ρ_k = σ_{i,j} by k = 2i + j. Therefore, Δ and ρ represent all the distances between states and the tightness of the program states, respectively, and they determine the overall BER and the page BERs. In the ISPP scheme, Δ can be controlled by the verify levels and ρ by the incremental step size. In the following section, we propose criteria for verify level control, i.e., how to determine Δ for a given ρ.
Criteria for verify level control
We investigate two verify level control criteria. The first criterion is to minimize the overall BER, which is aimed only at reliability. The second criterion is to make the page BERs equal, considering both the reliability and the multipage architecture. These two criteria will be formulated as optimization problems. If the parameters W (= μ_{2^M−1} − μ_0) and ρ are given, Δ will be the variables of the optimization problems.
We will show that the proposed criteria for verify level control lead to convex optimization problems. Therefore, the (globally) optimal solution can be found efficiently using numerical optimization techniques; the interior-point method was used to obtain the numerical results. Also, mathematical conditions for the optimal solutions of these criteria are derived.
Criterion 1: minimize overall BER
The first criterion is to minimize the overall BER. Criterion 1 for M-bit/cell flash memories can be formulated as follows:

minimize   g1(Δ)
subject to Σ_{k=1}^{2(2^M−1)} Δ_k = W,  Δ_k ≥ 0 for all k,    (11)

where g1(Δ) is the overall BER expressed as a function of Δ.
The cost function g1(·) is a nonnegative weighted sum of Q(·). From (4), the second derivative of Q(·) is given by

Q''(x) = (x/√(2π)) e^{−x²/2},    (12)

which is positive for x > 0.
Since Δ_k is a distance and ρ_k is a standard deviation, all Δ_k's and ρ_k's are always positive. Therefore, (11) is a convex optimization problem and can be solved by several numerical methods.
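The convexity established analytically in (12) can also be spot-checked numerically with central second differences of Q on the positive axis:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Q''(x) = x * phi(x) > 0 for x > 0, so Q is convex on the positive axis;
# verify with central second differences at a few points:
h = 1e-4
for x in [0.5, 1.0, 2.0, 3.0]:
    second_diff = (Q(x + h) - 2.0 * Q(x) + Q(x - h)) / (h * h)
    assert second_diff > 0.0
```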
We define the Lagrangian G1 as follows:

G1 = g1(Δ) + λ(Σ_k Δ_k − W) − Σ_k η_k Δ_k,    (13)

where λ and the η_k's are Lagrange multipliers. The optimal solution of (11) has to satisfy the following Karush-Kuhn-Tucker (KKT) conditions:

∂G1/∂Δ_k = 0,  η_k Δ_k = 0,  η_k ≥ 0  for all k.    (14)
Since all Δ_k's are positive, η_k = 0 for all k by (14), which expresses complementary slackness. Therefore, the optimal solution will satisfy the following condition from (13) and (14):

w_k φ(Δ_k/ρ_k)/ρ_k = λ  for all k,    (15)

where w_k is the nonnegative weight of the kth Q term in g1(·) and φ(·) denotes the standard Gaussian density.
From (15), the optimal solution has to satisfy the following condition in order to minimize the overall BER:

w_1 φ(Δ_1/ρ_1)/ρ_1 = w_2 φ(Δ_2/ρ_2)/ρ_2 = ⋯ = w_{2(2^M−1)} φ(Δ_{2(2^M−1)}/ρ_{2(2^M−1)})/ρ_{2(2^M−1)},    (16)

i.e., the weighted Gaussian density values at the decision levels are equal across all state boundaries.
Figure 3 illustrates the condition of (16) for minimizing the overall BER. In addition, Figure 3 shows that the decision level Di,i + 1 satisfies the condition fi(Di,i + 1) = fi + 1(Di,i + 1) which corresponds to the definition of the optimal decision level [22-24].
Figure 3. Example of criterion 1 for 2-bit/cell. This example illustrates the condition which minimizes the overall BER.
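The condition of (16) also suggests a simple numerical alternative to a general interior-point solver. The sketch below is a simplification assuming equiprobable states (all weights equal and cancel): an inner bisection inverts the stationarity condition φ(Δ_k/σ_k)/σ_k = λ for each k, and an outer bisection chooses λ so that the Δ_k sum to W.

```python
import math

def phi(x):
    """Standard Gaussian density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def solve_criterion1(sigmas, W):
    """Minimize sum_k Q(Delta_k/sigma_k) s.t. sum_k Delta_k = W, Delta_k >= 0,
    via the stationarity condition phi(Delta_k/sigma_k)/sigma_k = lam."""
    def delta_of(lam, s):
        # phi(d/s)/s is strictly decreasing in d on [0, inf): bisect for d
        lo, hi = 0.0, 50.0 * s
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if phi(mid / s) / s > lam:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lam_lo, lam_hi = 1e-300, phi(0.0) / min(sigmas)
    for _ in range(200):
        lam = math.sqrt(lam_lo * lam_hi)   # geometric midpoint: lam spans decades
        if sum(delta_of(lam, s) for s in sigmas) > W:
            lam_lo = lam                   # gaps too wide: raise the level
        else:
            lam_hi = lam
    lam = math.sqrt(lam_lo * lam_hi)
    return [delta_of(lam, s) for s in sigmas]
```

With equal σ_k the solver returns equal half-distances, i.e., the 2^M state means end up uniformly spaced over W; a broader erase state receives a wider guard band.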
(17) shows that BER_page m+1 will be twice BER_page m if the variances of all states are the same. When M = 2, BER_page 2 = 2·BER_page 1 and, from (8), BER_overall = (3/2)·BER_page 1. Figure 4 shows BER_page 1, BER_page 2, and BER_overall as functions of σ for 2-bit/cell flash memories. From (17), it is seen that the ratio of page BERs for 3-bit/cell is 1:2:4. For 4-bit/cell flash memories, the ratio of page BERs will be 1:2:4:8 [11-15]. Therefore, using criterion 1 makes the difference between page BERs larger as M increases.
Figure 4. BER of 2-bit/cell flash memories using Criterion 1. The constrained voltage window W is assumed to be 5.
In addition, the difference between page BERs from criterion 1 can increase further if the variances of the states are not equal. For example, it is possible that the erase state S0 has a wider distribution than the other states, since the cell threshold voltage distribution of the erase state is not controlled as tightly by ISPP as those of the program states. In this case, more errors will occur between S0 and S1, which results in an increase of the last page BER (BER_page M) in the Gray mapping schemes of Tables 1∼3. Table 4 shows the increase of the difference between page BERs. In Table 4, we assume that the standard deviation of the erase state (S0) is σ0 and that the standard deviations of the program states (S1∼S3) are all equal to σ. As the erase state distribution becomes broader, criterion 1 leads to a larger difference between page BERs.
Table 3. Gray mapping for 4-bit/cell flash memories
Table 4. BERpage 2/BERpage 1 for 2-bit/cell flash memories (W = 5)
Criterion 2: make page BERs equal
The second criterion is to make the page BERs equal. In addition, the overall BER has to be made as small as possible. Therefore, criterion 2 for M-bit/cell flash memories can be formulated as follows:

minimize   ε
subject to h_m(Δ) ≤ ε for m = 1, …, M,
           Σ_{k=1}^{2(2^M−1)} Δ_k = W,  Δ_k ≥ 0 for all k,    (18)
where h_m(Δ) denotes BER_page m. Therefore, from the constraints of this optimization problem, ε represents the maximum value among all page BERs. While minimizing ε, we can find the optimal solution which minimizes the overall BER among candidates satisfying the following condition.
In other words, even though the formulation in (18) does not explicitly set the page BERs to be identical, it implicitly minimizes the difference between all page BERs. Intuitively, if BERpage m is higher than other page BERs, the optimization in (18) will try to reduce BERpage m and make it as close to other page BERs as possible.
(18) is a convex optimization problem since each h_m(Δ) is a nonnegative weighted sum of the convex function Q(·). The convexity of Q(·) was shown in (12). Therefore, the optimal solution can be obtained by several numerical methods.
The Lagrangian G2 associated with (18) is given by

G2 = ε + Σ_{m=1}^{M} λ_m (h_m(Δ) − ε) + λ(Σ_k Δ_k − W) − Σ_k η_k Δ_k,    (20)

where the λ_m's, λ, and the η_k's are Lagrange multipliers.
As discussed for criterion 1, all η_k's will be zero due to the complementary slackness in (21).
Figure 5 shows how criterion 2 works for 2-bit/cell flash memories. In order to make page BERs equal, |μ2−μ1| = Δ3 + Δ4 of criterion 2 has to be reduced compared to that of criterion 1. Meanwhile, |μ1−μ0| = Δ1 + Δ2 and |μ3−μ2| = Δ5 + Δ6 will be larger than those of criterion 1.
Figure 5. Example of criterion 2 for 2-bit/cell. This example illustrates the condition which makes page BERs equal and minimizes the overall BER as far as possible.
From (21), we can obtain the following conditions for the optimal solution for 2-bit/cell flash memories.
If one of the λ_m's (for m = 1, …, M) is zero, then all λ_m's should be zero, since the Gaussian density terms φ(Δ_k/ρ_k)/ρ_k are strictly positive for all k. However, if all λ_m's are zero, the condition 1 − λ1 − λ2 = 0 in (22) cannot hold. Therefore, we can see that λ_m ≠ 0 for all m, which results in h_m − ε = 0 by the KKT conditions of (21). This means that all page BERs will be equal for the optimal solution of (18).
Taking the aforementioned discussion into account, the conditions of (22) can be modified as
which are illustrated in Figure 5.
Figure 6 shows the numerical results of criterion 2 for 2-bit/cell flash memories when the standard deviations of all states are equal to σ. All page BERs and the overall BER are made equal by criterion 2. Even if the variances of the states are not equal, the optimal solution can be obtained by the same method.
Figure 6. BER of 2-bit/cell flash memories using Criterion 2. The constrained voltage window W is assumed to be 5.
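For 2-bit/cell with equal state variances, criterion 2 reduces to a one-dimensional root-finding problem, which the following sketch solves by bisection. It exploits the symmetry of the optimum: the two outer mean-to-mean gaps are equal (2b each) and the middle gap is 2a, with 4b + 2a = W, so that BER_page 1 = (1/2)Q(a/σ) and BER_page 2 = Q(b/σ).

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def criterion2_2bit(W, sigma, iters=200):
    """Equalize the page BERs of 2-bit/cell with equal state variances.
    Bisection on the outer half-gap b finds the root of
    (1/2) Q(a/sigma) = Q(b/sigma) with a = (W - 4b)/2."""
    lo, hi = 0.0, W / 4.0
    for _ in range(iters):
        b = 0.5 * (lo + hi)
        a = (W - 4.0 * b) / 2.0
        if 0.5 * Q(a / sigma) < Q(b / sigma):
            lo = b   # page 2 still worse: widen the outer gaps
        else:
            hi = b
    b = 0.5 * (lo + hi)
    return (W - 4.0 * b) / 2.0, b
```

The equalized solution shrinks the middle gap (a) and widens the outer gaps (b) relative to criterion 1, at the cost of a slightly higher overall BER.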
It is worth mentioning that the overall BER from criterion 2 is worse than that from criterion 1, because the overall BER increases when we force the page BERs to be equal. Figure 7 shows the degradation of the overall BER from criterion 2 compared to criterion 1 for 2-bit/cell flash memories. In order to measure this degradation, we define the degradation ratio γ as

γ = (BER_overall from criterion 2) / (BER_overall from criterion 1).
Figure 7. Comparison between the overall BER from the two criteria. The constrained voltage window W is assumed to be 5.
Figure 8 shows the degradation ratio γ for 2-bit/cell, 3-bit/cell, and 4-bit/cell flash memories, where we assume that the variances of all states are the same. For 2-bit/cell, γ is about 1.05, which means that the degradation of the overall BER is 5%. Meanwhile, the degradation for 3-bit/cell is about 14% (γ ≈ 1.14) and for 4-bit/cell about 25% (γ ≈ 1.25). These results reveal that equalizing the page BERs increases the overall BER.
Figure 8. Degradation ratio γ for 2∼4-bit/cell flash memories. The constrained voltage window W is assumed to be 5.
Verify level control criteria and charge leakage
After programming data into flash memories, the cell threshold voltage distribution can change because of charge leakage. This change can be modeled as a change in the mean and the variance of the distributions, i.e.,

μ_post = μ_pre + μ_shift,  σ²_post = σ²_pre + σ²_shift,

where μ_pre and σ²_pre are the mean and the variance before charge leakage, μ_post and σ²_post are the mean and the variance after charge leakage, and μ_shift and σ²_shift are the mean and the variance of the threshold voltage shift caused by charge leakage. μ_shift and σ²_shift depend on the program and erase (P/E) cycle count, the retention time, and the temperature.
The proposed verify level control criteria should be applied based on μ_post and σ²_post because these determine the BER of the flash memory. Therefore, we have to control μ_pre and σ²_pre considering the amount of μ_shift and σ²_shift. Basically, μ_pre and σ²_pre can be controlled by the verify levels and the incremental step size ΔVpp of ISPP, though physical mechanisms such as cell-to-cell interference, program disturbance, and background pattern dependency also affect μ_pre and σ²_pre [5,7,8].
Via chip testing, we can measure μ_shift and σ²_shift as functions of the P/E cycle count and the retention time. However, the allowable maximum values of μ_shift and σ²_shift are generally used because the ECC has to guarantee the reliability even in the worst case, which is called end-of-life (EOL). EOL assumes the allowable maximum P/E cycle count and the allowable maximum retention time. Therefore, it is practical to apply the proposed verify level control criteria based on μ_post and σ²_post at EOL. In this case, μ_post and σ²_post should be used to formulate the convex optimization problems shown in (11) and (18). Other than this minor modification, no change to the proposed mathematical formulations is required.
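The EOL budgeting above amounts to simple arithmetic on the distribution parameters. All numbers below are hypothetical characterization values used only to illustrate the bookkeeping: variances add, so ISPP must program a tighter pre-retention distribution than the EOL target.

```python
import math

# Hypothetical EOL characterization values, for illustration only.
sigma_post_target = 0.40   # state std. dev. the optimization assumes at EOL
sigma_shift_eol = 0.15     # std. dev. of the EOL threshold voltage shift
mu_shift_eol = -0.30       # mean EOL shift (charge loss lowers the threshold)

# Variances add (sigma_post^2 = sigma_pre^2 + sigma_shift^2), so ISPP must
# program a tighter pre-retention distribution than the EOL target:
sigma_pre_target = math.sqrt(sigma_post_target ** 2 - sigma_shift_eol ** 2)

# Verify levels place mu_pre; the post-retention mean is mu_pre + mu_shift:
mu_pre = 2.0               # hypothetical verify-level target for one state
mu_post = mu_pre + mu_shift_eol
```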
Verify level control criteria and other statistical distributions
We now extend the proposed verify level control criteria to other distributions. Suppose that the threshold voltage distribution of each state S_i can be approximated by an arbitrary distribution ϕ_i(x) which has its maximum value at x = ν_i (i.e., ν_i is the mode of ϕ_i(x)). Instead of (2), the constrained voltage window W will be defined by

W = ν_{2^M−1} − ν_0.    (26)
The distance between S_i and S_{i+1} will be defined as ν_{i+1} − ν_i instead of μ_{i+1} − μ_i, and it is assumed that ν_{i+1} > ν_i for all i. In the case of Gaussian distributions, μ_i and ν_i coincide.
Then, the error probability between S_i and S_{i+1} (i.e., E_{i,i+1}) is given by
where P(S_i) is the probability of S_i. In addition, Δ_{i,1} is the distance from ν_i to D_{i,i+1} and Δ_{i+1,0} is the distance from ν_{i+1} to D_{i,i+1}. D_{i,i+1} is the decision level between S_i and S_{i+1}.
where ϕ_{i,−}(t) = ϕ_i(t + (ν_i − ν_0)) and ϕ_{i+1,+}(t) = ϕ_{i+1}(t − (ν_{2^M−1} − ν_{i+1})). ν_0 and ν_{2^M−1} are fixed values by (26).
The overall BER and the page BERs of M-bit/cell MLC flash memories are nonnegative weighted sums of E_{i,i+1} for i = 0, …, 2^M − 2. Therefore, if E_{i,i+1} is a convex function of Δ_{i,1} and Δ_{i+1,0}, the proposed verify level control criteria will be convex optimization problems.
The Hessian matrix of Ei,i + 1 is given by
which means that ϕ_i(x) should be a unimodal distribution for the optimization problems to be convex.
Since the measured threshold voltage distributions of recent flash memory products [2-4] are unimodal, the proposed verify level control criteria can be applied effectively to flash memories. In addition, the proposed criteria can be applied to other memories such as phase change memory (PCM), because the measured distributions of PCM reported in the literature appear to be unimodal [26-28]. In particular, the distributions of PCM have been reported to be well approximated by the log-normal distribution in spite of the anomalous tail. Therefore, our proposed verify level control criteria are expected to be useful for PCM.
ECC and flash memories of multipage architecture
When an algebraic ECC such as a Bose-Chaudhuri-Hocquenghem (BCH) code is used on a binary symmetric channel (BSC) with bit error probability p, the word error rate (WER) is bounded by

WER(p) ≤ Σ_{i=t+1}^{n} C(n, i) p^i (1 − p)^{n−i},    (31)
where n is the codeword length and t is the error correcting capability. The bound becomes an equality when the decoder corrects all combinations of errors up to and including t errors, but no combinations of errors greater than t (i.e., bounded distance decoder) [29,30]. In this article, the bounded distance decoder will be considered. Once ECC parameters such as n and t are selected, the WER is a function of only p.
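The binomial tail in (31) can be evaluated without overflow by summing in log space, which matters for long codes such as the n = 8752 BCH code used later. This is a generic sketch, not tied to any particular decoder implementation.

```python
import math

def wer_bound(n, t, p):
    """Bound on WER for a bounded-distance decoder on a BSC(p):
    WER <= sum_{i=t+1}^{n} C(n, i) p^i (1-p)^(n-i), summed in log space
    to avoid overflow of the binomial coefficients for large n."""
    log_nfact = math.lgamma(n + 1)
    total = 0.0
    for i in range(t + 1, n + 1):
        log_term = (log_nfact - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                    + i * math.log(p) + (n - i) * math.log1p(-p))
        term = math.exp(log_term)
        total += term
        if i > n * p and term < total * 1e-18:
            break   # past the mode, the remaining tail is negligible
    return total
```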
Though errors in flash memories are generally not symmetric, the asymmetric component of the errors can be minimized if the decision levels are selected appropriately [22-24]. For example, for 2-bit/cell flash memories, the errors of page 1 will be symmetric if the decision level between S1 and S2 is chosen so that the two error probabilities of page 1 in (6) are equal. Similarly, the errors of page 2 can be made symmetric by choosing the decision levels between S0 and S1 and between S2 and S3 in the same manner.
Even when σ_i ≠ σ_{i+1}, as long as σ_i is not substantially different from σ_{i+1}, the difference between the optimal decision level D_{i,i+1} and the symmetrizing decision level is almost negligible. Therefore, the BER based on the symmetrizing decision level is similar to that based on D_{i,i+1}. Considering this, we will use (31) to calculate the WER of flash memories.
In most flash memories, program and read operations are performed in page units. Therefore, ECC encoding and decoding are also performed in page units [13-15]. This means that the WER of each page depends on that page's BER, which corresponds to p in (31). Therefore, the overall WER is given by

WER_overall = (1/M) Σ_{m=1}^{M} WER(BER_page m),    (32)
where WER(BERpage m) is the WER of page m.
WER(p) of (31) can be computed from the incomplete beta function I_x(a, b):

WER(p) = I_p(t + 1, n − t).    (33)
The second derivative of WER(p) is given by

WER''(p) = n C(n − 1, t) p^{t−1} (1 − p)^{n−t−2} (t − (n − 1)p),    (34)

which is positive for p < t/(n − 1); hence WER(p) is convex in the low-BER region of practical interest.
(34) reveals that the overall WER can be improved by interleaving. If an interleaver is applied to the whole data from page 1 to page M, all page BERs are averaged into the overall BER of (8), and by the convexity shown in (34) the overall WER improves. In other words, minimizing the overall BER (i.e., criterion 1) is preferred over equalizing the page BERs (i.e., criterion 2) if interleaving is applied.
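Jensen's inequality makes the interleaving argument concrete: since WER(p) is convex in the operating region, decoding every codeword at the averaged BER cannot give a worse mean WER than decoding each page at its own BER. The code parameters and page BERs below are hypothetical, chosen small enough for a direct sum.

```python
import math

def wer(n, t, p):
    """WER of a bounded-distance decoder on a BSC(p): P(more than t errors)."""
    return sum(math.comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(t + 1, n + 1))

# Hypothetical uneven page BERs with the 1:2 ratio produced by criterion 1:
p_pages = [1e-3, 2e-3]
n, t = 255, 8   # toy bounded-distance code, small enough for a direct sum

# Without interleaving, each page is decoded at its own BER;
# with interleaving, every codeword sees the averaged BER:
mean_wer = sum(wer(n, t, p) for p in p_pages) / len(p_pages)
interleaved_wer = wer(n, t, sum(p_pages) / len(p_pages))
```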
Indeed, interleaving and similar ideas have been proposed to resolve the uneven page BER problem and improve the reliability [11,12]. However, adopting interleaving slows down the program and read speed, because the interleaver must collect at least M pages of data before a program or read operation in the multipage architecture. In particular, random speed performance is degraded more than sequential speed performance when employing an interleaver (see Section "Hybrid SSD and strategy for verify level control").
Therefore, criterion 2 can be a practical alternative for flash memories because it does not degrade the speed performance and exhibits only a slight degradation of the overall BER, as shown in Figure 8. In addition, criterion 2 does not require a large memory buffer for interleaving. Figure 9 shows that the overall WER from criterion 2 is much better than that of criterion 1 without interleaving and only slightly worse than that of criterion 1 with interleaving for 2-bit/cell flash memories.
Figure 9. Comparison between the overall WERs from criterion 1 and criterion 2 for 2-bit/cell. The BCH code (n = 8752, k = 8192, t = 40) is applied. The constrained voltage window W is assumed to be 5.
However, the WER degradation of criterion 2 will increase as M increases, as shown in Figure 10. The reason is that the overall BER from criterion 2 becomes much worse than that from criterion 1 for large M, as shown in Figure 8. Therefore, criterion 2 would not be appropriate for large M in terms of reliability.
Figure 10. Comparison between the overall WERs from criterion 1 and criterion 2 for 2-bit/cell, 3-bit/cell, and 4-bit/cell. The BCH code (n = 8752, k = 8192, t = 40) is applied. The constrained voltage window W is assumed to be 5.
Hybrid SSD and strategy for verify level control
In order to reduce the cost of an SSD while maintaining the speed performance and durability, the hybrid SSD has been proposed [16,17]. The basic idea is to use both SLC flash memories and MLC (usually 2-bit/cell) flash memories. SLC flash memory has an edge over MLC flash memory in terms of speed and durability, whereas MLC flash memory is cheaper. Combining them allows the two types of flash memories to complement each other [16-19].
Recently, many flash translation layer (FTL) mapping schemes classify incoming data as hot or cold based on access frequency and size. Data that are updated frequently are referred to as hot, and otherwise as cold. Generally, small data are accessed more often and are classified as hot, while cold data correspond to bulk writes at low frequencies [16,18]. The speed performance of an SSD is classified into random and sequential speed performance. Random speed performance is measured in input/output operations per second (IOPS), and sequential speed performance is measured by a transfer rate or throughput such as MB/s. Considering the characteristics of hot and cold data, random speed performance is the pivotal factor for hot data, and sequential speed performance is important for cold data.
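A hot/cold detector of the kind described above can be sketched as a simple routing rule. The thresholds below are hypothetical illustrations, not values from any cited FTL; real schemes track access history in far more detail.

```python
# Hypothetical thresholds, for illustration only.
HOT_SIZE_LIMIT = 16 * 1024   # bytes: small writes tend to be hot
HOT_UPDATE_COUNT = 4         # recent updates needed to call data hot

def route(write_size, recent_updates):
    """Toy hot/cold detector in the spirit of [16,18]: small, frequently
    updated (hot) data go to SLC for random-IOPS; bulk, rarely updated
    (cold) data go to MLC for cheap capacity."""
    if write_size <= HOT_SIZE_LIMIT and recent_updates >= HOT_UPDATE_COUNT:
        return "SLC"
    return "MLC"
```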
Figure 11 illustrates the architecture of the hybrid SSD. In this architecture, the hot and cold detection module separates hot data from cold data dynamically and directs them to either SLC or MLC based on the decision. Before the SLC flash memories run out of free blocks, the hybrid SSD performs garbage collection to merge the valid cold data in SLC and move them into MLC.
Based on this architecture of the hybrid SSD, we propose that criterion 1 with interleaving is suitable for storing cold data in MLC because the interleaving would have only a small impact on the sequential speed performance for the cold data access and the garbage collection. Of course, we do not need to consider the verify level control criterion for SLC.
In addition, we can anticipate a lower-cost, higher-density hybrid SSD which combines two types of MLC flash memories; for example, 2-bit/cell may replace SLC and 4-bit/cell may be used in place of 2-bit/cell. Unlike the conventional hybrid SSD, which combines SLC and 2-bit/cell MLC, we then have to consider the verify level control criterion for both hot and cold data. We propose that criterion 2 is appropriate for the 2-bit/cell flash memories, which mainly handle hot data. For the 4-bit/cell flash memories, which usually store cold data, criterion 1 with interleaving is suitable considering the sequential speed performance and the reliability.
Conclusion

In this article, we investigated verify level control criteria of ISPP for MLC flash memories. These criteria were formulated and solved by convex optimization. Criterion 1 minimizes the overall BER; however, it requires interleaving in the multipage architecture, which reduces the speed performance. Criterion 2 is suitable for the multipage architecture, especially for 2-bit/cell flash memories. The drawback of criterion 2 is that the error rate degradation grows with the number of bits per cell.
Based on these advantages and disadvantages of verify level criteria, we investigated the application of verify level control criteria for the hybrid SSD. By selecting the proper criterion considering the architecture of the hybrid SSD, we can achieve both reliability and speed performance.
The verify level control criteria and the proposed formulation of optimization problems can be extended to other emerging memories such as PCM which are modeled by unimodal distributions.
The authors declare that they have no competing interests.
References

K-T Park, O Kwon, S Yoon, M-H Choi, I-M Kim, B-G Kim, M-S Kim, Y-H Choi, S-H Shin, Y Song, J-Y Park, J-E Lee, C-G Eun, H-C Lee, H-C Kim, J-H Lee, J-Y Kim, T-M Kweon, H-J Yoon, T Kim, D-K Shim, J Sel, J-Y Shin, P Kwak, J-M Han, K-S Kim, S Lee, Y-H Lim, T-S Jung, A 7MB/s 64Gb 3-bit/cell DDR NAND flash memory in 20nm-node technology, ISSCC Dig. Tech. Papers, pp. 212–213 (2011)
S-D T Kim, J Lee, H Park, B Cho, K You, J Baek, C Lee, M Yang, M Yun, J Kim, E Kim, H Jang, S Chung, B-S Lim, Y-H Han, A Koh, 32Gb MLC NAND flash memory with Vth margin-expanding schemes in 26nm CMOS, ISSCC Dig. Tech. Papers, pp. 202–204 (2011)
C Trinh, N Shibata, T Nakano, M Ogawa, J Sato, Y Takeyama, K Isobe, B Le, F Moogat, N Mokhlesi, K Kozakai, P Hong, T Kamei, K Iwasa, J Nakai, T Shimizu, M Honma, S Sakai, T Kawaai, S Hoshi, J Yuh, C Hsu, T Tseng, J Li, J Hu, M Liu, S Khalid, J Chen, M Watanabe, H Lin, et al., A 5.6MB/s 64Gb 4b/cell NAND flash memory in 43nm CMOS, ISSCC Dig. Tech. Papers, pp. 245–246 (2009)
N Mielke, H Belgal, I Kalastirsky, P Kalavade, A Kurtz, Q Meng, N Righos, J Wu, Flash EEPROM threshold instabilities due to charge trapping during program/erase cycling. IEEE Trans. Device Mater. Reliab. 4(3), 335–344 (2004)
K-D Suh, B-H Suh, Y-H Lim, J-K Kim, Y-J Choi, Y-N Koh, S-S Lee, S-C Kwon, B-S Choi, J-S Yum, J-H Choi, J-R Kim, H-K Lim, A 3.3 V 32 Mb NAND flash memory with incremental step pulse programming scheme. IEEE J. Solid-State Circ. 30(11), 1149–1156 (1995)
T-S Jung, Y-J Choi, K-D Suh, B-H Suh, J-K Kim, Y-H Lim, Y-N Koh, J-W Park, K-J Lee, J-H Park, K-T Park, J-R Kim, J-H Lee, H-K Lim, A 117-mm2 3.3-V only 128-Mb multilevel NAND flash memory for mass storage applications. IEEE J. Solid-State Circ. 31(11), 1575–1583 (1996)
K Takeuchi, T Tanaka, T Tanzawa, A multipage cell architecture for high-speed programming multilevel NAND flash memories. IEEE J. Solid-State Circ. 33, 1228–1238 (1998)
G Dong, N Xie, T Zhang, Techniques for embracing intra-cell unbalanced bit error characteristics in MLC NAND flash memory, IEEE Globecom Workshop on Application of Communication Theory to Emerging Memory Technologies, pp. 1915–1920 (2010)
D Mantegazza, D Ielmini, A Pirovano, B Gleixner, AL Lacaita, E Varesi, F Pellizzer, R Bez, Electrical characterization of anomalous cells in phase change memory arrays, IEEE International Electron Devices Meeting (IEDM), pp. 1–4 (2006)
F Bedeschi, R Fackenthal, C Resta, EM Donze, M Jagasivamani, EC Buda, F Pellizzer, DW Chow, A Cabrini, GMA Calvi, R Faravelli, A Fantini, G Torelli, D Mills, R Gastaldi, G Casagrande, A bipolar-selected phase change memory featuring multi-level cell storage. IEEE J. Solid-State Circ. 44(1), 217–227 (2009)