Channel: Cadence Custom IC Design Blogs
Viewing all 828 articles

Virtuosity: Sweeping Multiple Config Views

Before IC6.1.7 ISR10, you could sweep multiple views in ADE for only one block in your design. What if you have more than one block that has multiple views that you want to sweep? Well from ISR10 onwards, you can do that. Here's how.(read more)

Virtuosity: Sweeping Multiple DSPF Views in ADE

Wouldn't it be great if you could have a view for your DSPF files and sweep them in an ADE session without having to add them as simulation files? Well now you can! You can create a DSPF view just like any other view, schematic, layout, extracted - and this can be easily included in any ADE simulation. You can also combine this with the config sweep feature to enable you to sweep several DSPF views at once. Just make note that the top-level test bench must be a config. Let's see how to do this...(read more)

The Art of Analog Design: Part 3, Monte Carlo Sampling


In Part 2, we looked at Monte Carlo sampling methods. In Part 3, we will consider what happens once Monte Carlo analysis is complete. Of course, we will need to analyze the results, so let’s look at some of the tools for visualizing what the Monte Carlo analysis is trying to show us about the circuit.

First let’s review the results from the previous blog. The circuit being simulated is a Capacitor D/A Converter, or CAPDAC. The CAPDAC is used in a Successive Approximation ADC to generate the reference levels for comparison. The mismatch of the unit capacitors in the CAPDAC degrades the CAPDAC SINAD (Signal-to-Noise and Distortion ratio) and is an important contributor in determining the overall SINAD of the ADC. This CAPDAC is used in a 10-bit ADC. Based on the error budget for the ADC, if the CAPDAC has a SINAD of 60dB or better, we will be able to meet our ADC SINAD target. The CAPDAC SINAD was simulated using Monte Carlo with auto-stop, a yield target of 60dB for SINAD, a yield of 3σ or greater, a confidence level of 90%, and the Low-Discrepancy Sampling (LDS) method. The simulation required 1755 samples to meet the 90% confidence requirement.

In the last blog post, the effect of process variation on the SINAD distribution was plotted, see Figure 1. To help show how the CAPDAC performance compares to the specification, the pass/fail limits have been overlaid on top of the distribution: green is pass and red is fail.

Figure 1: CAPDAC SINAD distribution

The plot also has bars showing the mean value and the standard deviation values from -3σ to +3σ, allowing us to visualize how much margin the CAPDAC has relative to the specification. For the CAPDAC, there is almost 2σ of margin between the -3σ limit of the distribution and the specification.

One observation from looking at the distribution is that it appears to have a long tail. In statistics, a distribution with a long tail has a large number of occurrences far from its central part. Looking at the distribution, we can see that on the positive side there is only one point more than +2σ from the mean, while on the negative side there are many data points more than -3σ from the mean. Next, let’s apply another tool, quantile-quantile plotting. The purpose is to test whether our simulated distribution is a Normal (or Gaussian) distribution. A quantile-quantile plot is a technique to evaluate whether two distributions are the same by plotting their quantiles against each other, where the quantiles are points taken at regular intervals from the cumulative distribution function (CDF) of a random variable. The 0.5-quantile of a distribution is the median: half the samples in the distribution are higher in value than the median and half are lower. Since the distribution is skewed, the mean value will not be equal to the median value.

Figure 2: Quantile-quantile plot for CAPDAC SINAD

If the simulated distribution forms a straight line when plotted against the reference distribution, the Normal distribution, then the distributions match and the simulated distribution is Gaussian. As expected, the simulated distribution is not a straight line when plotted against the Normal distribution (see Figure 2). The distribution is only Normal in the region from -1σ to +1σ. Another way to look at the effect of the long tail is to consider how the CAPDAC yield compares to the expected yield of a Normal distribution. For the CAPDAC, there is 1 failure in 1755 samples. The worst-case value of CAPDAC SINAD is 59.85dB, -5.2σ from the mean value. Using the Normal distribution, the expected failure probability for a 5σ deviation from the mean is 1 failure per 3.5 million attempts. The effect of the long tail, that is, the non-Normal nature of the distribution, is a significant reduction in yield compared to a Normal distribution. Quantile-quantile plots thus provide a powerful tool for visualizing whether the simulated distribution is a Normal distribution or not.
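The 1-in-3.5-million figure is the idealized Gaussian expectation, and it can be checked with a one-line Normal tail calculation from the standard library:

```python
from statistics import NormalDist

# One-sided probability of a sample falling more than 5 sigma
# below the mean of an ideal Normal distribution.
p_fail = NormalDist().cdf(-5.0)
expected_attempts = 1 / p_fail
print(round(expected_attempts / 1e6, 1))  # → 3.5 (million attempts per failure)
```

The contrast between this number and 1 failure in 1755 samples is exactly the long-tail penalty described above.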

Next, let’s look at another measurement that is useful for designers. First, let’s determine the process capability index or Cpk value. The Cpk is a statistical measure of process capability which is the ability of a process to produce output within specification limits. For the CAPDAC, the Cpk is one of the outputs in the Virtuoso ADE Assembler results window (see Figure 3). The Cpk can only be output if a specification has been defined.

The Cpk is defined as the ratio of the distance from the mean value to the specification limit, measured in standard deviations, to the target yield, also expressed in standard deviations. For the CAPDAC, the numerator is 4.6σ, the distance from the mean value of 61.15dB to the 60dB specification (see sigma to target). The target yield was 3σ, so the denominator is 3σ, giving a Cpk of about 1.53.
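As a sketch, the Cpk arithmetic above looks like this; the sigma value is back-computed from the 4.6σ distance reported in the text rather than taken from the simulator:

```python
def cpk(mean, sigma, spec_limit, target_sigma=3.0):
    """Process capability index: distance from the mean to the
    spec limit in standard deviations, over the target yield
    (also in standard deviations)."""
    sigma_to_target = abs(mean - spec_limit) / sigma
    return sigma_to_target / target_sigma

# CAPDAC numbers from the text: mean 61.15dB, spec 60dB,
# sigma-to-target 4.6 sigma, target yield 3 sigma.
sigma = (61.15 - 60.0) / 4.6   # about 0.25dB
print(round(cpk(61.15, sigma, 60.0), 2))  # → 1.53
```

A Cpk above 1 means the distribution limit sits inside the spec with margin to spare.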

A less precise way to think about Cpk is as a measure of design margin: it tells us how much margin we have between the actual limit of the process and the user’s expectation for the process.

To summarize, we have looked at two tools for visualizing the results of Monte Carlo analysis and for using them to identify problems. Plotting distributions allows us to understand how well centered a design is. Quantile-quantile plots let us examine the distribution and identify whether it has a long tail, since a long tail can translate into poor yield. And by using Cpk, we can quantify how much design margin we have. In the next blog post, we will start to look at what we can do to identify and correct issues.

Virtuosity: Power Filtering!

Finally, we have filters in the Corners Setup form, Results tab, Outputs tab, Data View and Setup assistants in Virtuoso ® ADE Explorer and Virtuoso ® ADE Assembler. But, they are not just for finding basic strings like vdd or 1p. They can do so much more; filtering for values within a range, finding strings containing all or any of the words you specify, filtering for prefixes or suffixes, and so on. Let's see what advanced filtering these filters are capable of.(read more)

Virtuosity: Can I Speed up My Plots?

If your Virtuoso ® ADE Assembler, Virtuoso ® ADE Explorer or Virtuoso ® ADE XL setup contains multiple sweeps or corner points, or maybe the transient simulations are time consuming, then plotting waveforms using Plot All may consume significant time and memory. Here, Quick Plot will help you out. Quick Plot will plot outputs in Virtuoso Visualization and Analysis, faster and with much less memory usage.(read more)

The Art of Analog Design Part 5: Mismatch Analysis II


In Part 4 of the series, we looked at applying mismatch analysis as a design tool. In Part 5, we will continue to look at mismatch analysis by applying the technology to other types of designs.

The first case we will look at is a circuit without a DC operating point. A dynamic comparator, see Figure 1, doesn’t have a quiescent operating point making it difficult to analyze.

In this case, the offset voltage is measured using transient analysis. A positive and a negative staircase are applied at the input, the input values at which the output switches are recorded, and the average of those input levels is the offset voltage. To increase the resolution of the offset voltage measurement, the step size needs to be small; in this case, the step size of the staircase ramp is 100mV. A Verilog-A module was used as the signal source to generate the staircase, see Figure 2. For more details about measuring dynamic comparator offset voltage, please see the ADC Verification Workshop Rapid Adoption Kit on Cadence Online Support.

Looking at the comparator, we would expect that the mismatch of the input transistors is the primary source of offset voltage. After the Monte Carlo analysis, we use scatter plots of the random variables causing mismatch for three transistors: NM2, NM3, and NM4, see Figure 3a. For the devices in the differential pair, NM2 and NM3, we can see that there is correlation between the offset voltage and the input transistors; the correlation coefficient r is about 0.5. For the current source transistor, NM4, there is no correlation between the offset voltage and the transistor’s variation; the correlation coefficient r is about 0. So, the scatter plots are consistent with our expectations about how the devices are impacted by statistical variation.

Again, we can see the utility and the limitations of the scatter plot. Qualitatively the scatter plot allows us to visualize the relationship between the inputs, statistical variables, and the outputs measured values. However, it is difficult to extract quantitative information from the results. So, while we can use scatter plots to confirm what we already know, they don’t really provide any additional information to designers.

We will use mismatch analysis to analyze the relationship between variations and offset voltage. The mismatch analysis results are shown in Figure 4. Again, we see that offset voltage has a non-linear, second-order relationship with the statistical variables. We can also see that most of the variation, 99.935%, is accounted for by the mismatch results. About 90% of the offset voltage is due to the input transistor variation. Mismatch analysis considers the variation at the statistical variable level: NM2.rn2 contributes 30%, NM3.rn2 contributes 29%, NM2.rn1 contributes 17%, and NM3.rn1 contributes 16%. While our naming convention could be more explicit, you can think of these variables as the individual contributions to variation: gate oxide thickness variation and gate length variation. Another observation is that there is another source of offset voltage variation, the cascode transistors, NM0 and NM1. While not significant, it is useful to know that mismatch analysis has enough resolution to identify small contributors.

Mismatch analysis provides designers a tool to analyze the effect of mismatch qualitatively and quantitatively.

To summarize, the mismatch analysis is a useful tool to analyze the results of Monte Carlo analysis. In this case, we analyzed the effect of variation on a dynamic comparator. Traditionally it is difficult to analyze a dynamic comparator because it is not a linear circuit with a DC operating point. Perhaps more than anything else, the ability to analyze circuits that designers have not been able to analyze in the past is the true value of mismatch analysis.

The Art of Analog Design Part 4: Mismatch Analysis


In Part 3, we started to explore how to analyze the results of Monte Carlo analysis. In Part 4, we will consider the question: what is the relationship between process variation and the circuit’s performance variation? The tool for exploring this relationship is mismatch analysis in the Virtuoso® Variation Option (VVO).

Let’s start by looking at a simple example that shows the sources of offset voltage of a two-pole operational amplifier, see Figure 1.

Figure 1: Two Pole Operational Amplifier

Looking at the design, we would expect that mismatch of the p-channel input transistors is the primary source of offset voltage. First, let’s look at the Monte Carlo simulation results for the op-amp, see Figure 2.

Figure 2: Monte Carlo Analysis Results

The results show that the offset voltage is ~7.3mV. While Monte Carlo analysis tells us how much offset voltage there is, it does not tell us anything about the source of the offset voltage or how much improvement can be achieved. So, what are the sources of the offset voltage? After Monte Carlo analysis, we can plot the relationship between threshold voltage of input p-channel transistors, M17 and PM5, and the n-channel transistors in the first stage load current mirror. The scatter plots in Figure 3 show that there is no correlation between threshold voltage and the offset voltage of the operational amplifier since the correlation between offset voltage and the device threshold voltages is effectively 0.

Figure 3: Scatter Plots, Threshold Voltage versus Offset Voltage

Now let’s try using contribution analysis, see Figure 4.

Figure 4: Mismatch Analysis Results

Mismatch analysis shows the relationship between the threshold voltage and the offset voltage. The reason the scatter plot showed no correlation is that it looks for linear correlation. Mismatch analysis reports that the dependency is second order (the label shows R^2). The results show that most of the variation, 99.997%, can be explained by the threshold variation of M17, PM5, NM4, and NM6. The results also show that ~70% of the offset voltage variation is due to the p-channel variation: the contribution from M17 is 34%, and the contribution from PM5 is 34%. The other source of offset voltage variation is the n-channel threshold voltage, with a contribution of 30%.

Let’s use this information and see if we can improve the design. Since the p-channel contributes most of the offset voltage, we will try an experiment. We will increase the p-channel transistor area by 16x, length by 4x and width by 4x, keeping the W/L ratio constant. Since mismatch standard deviation scales roughly as 1/√(WL), increasing the device area by 16x should decrease the effect of p-channel mismatch by a factor of four.

Figure 5: Monte Carlo Analysis with 16x P-Channel

The effect of scaling the p-channel transistors on the offset voltage of the op-amp is to reduce the offset voltage from 7.2mV to 3.7mV. Doing some math, the p-channel offset contribution is ~6.4mV and the n-channel contribution is ~3.3mV. Verifying the offset voltage, the initial offset voltage is √(6.4² + 3.3²) = 7.2mV. After device sizing, the offset voltage is √((6.4/4)² + 3.3²) = 3.7mV.
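The arithmetic combines the independent contributions root-sum-square; a quick check with the numbers from the text:

```python
import math

def rss(*contributions):
    """Combine independent offset contributions root-sum-square."""
    return math.sqrt(sum(c * c for c in contributions))

p_ch, n_ch = 6.4, 3.3             # mV, contributions from the text
before = rss(p_ch, n_ch)          # total offset before sizing
after  = rss(p_ch / 4, n_ch)      # 16x area -> 4x less p-channel mismatch
print(round(before, 1), round(after, 1))  # → 7.2 3.7
```

Note how the n-channel term now dominates: further p-channel scaling would buy very little.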

This example shows how mismatch analysis can be used to understand the effect of process variation on circuit performance. While we understand qualitatively that the input transistors are the primary contributor to offset voltage, mismatch analysis provides us a tool for quantitative analysis of variation. In the next blog, we will apply mismatch analysis to additional circuits.

Virtuosity: Saving, Loading and Sharing ADE Annotation Settings

The whole ADE annotation flow was overhauled way back in IC6.1.6 but at that time there was no way to share the annotation settings between designs, or to automatically load them. Well, in IC6.1.7 ISR13 we have added the ability to do both! (read more)

The Art of Analog Design Part 6: Response to Frank’s Question to Part 4


In the comments to blog #4, Frank Wiedmann asked about the correlation between the results of mismatch from Monte Carlo analysis and DC mismatch analysis. It is a fair question, and here is a short blog to explore the topic. The example may not be realistic, but it is useful for exploring the effects of mismatch on a circuit.

Let’s start with a simple circuit—A resistively loaded differential amplifier with cascodes, shown in Figure 1. The process is the Cadence® 45nm GPDK. This test circuit gives us a good platform for exploring the effect of mismatch on circuit performance. The GPDK includes models for Monte Carlo analysis and the results are easy to share.

  Figure 1: Differential Amplifier

As background, DC mismatch is an analysis that estimates the effect of mismatch on circuit performance from a single simulation. It is considerably faster than using Monte Carlo analysis. The drawback is that only the DC operating point effect of mismatch is considered, so we could not use it for the dynamic comparator, see Part 5. Originally, the model card needed to be modified for DC mismatch analysis because DC mismatch used different mismatch parameters than Monte Carlo analysis. Since about 2012, DC mismatch analysis reads either the stats block or the Monte Carlo process variations. The original DC mismatch parameters are still supported for backwards compatibility. For Virtuoso® ADE Explorer users, look in the Analyses tab for dcmatch. One point to keep in mind is that DC mismatch simulates the offset voltage at a specified output due to process variation; however, it can’t be used for derived measurements.

For the Monte Carlo analysis, Low-Discrepancy Sampling was used. To generate a "good" distribution, 1400 iterations were used. From experience, this number of iterations should give a reasonable approximation to the expected distribution. The standard deviation of the output offset voltage is 10.56mV. The Monte Carlo analysis results are compared with the DC mismatch analysis results in Table 1.

Figure 2: Output Offset Voltage Distribution

The DC Mismatch analysis was run using the stats option, that is, the statistical information in the stats block is used for the DC mismatch analysis.  

                  Monte Carlo     DC Mismatch
 Offset Voltage   10.56mV         10.42mV
 Contributors     M1:rn2_18       M1:rn2_18
                  M0:rn2_18       M0:rn2_18

Table 1: Comparison of Monte Carlo and DC Mismatch Results      

The comparison results show that the offset voltages are close but are not quite identical. The difference in the results comes down to the approximation that is used when performing DC mismatch analysis. DC mismatch analysis assumes that the output distribution is Gaussian. The assumption allows us to estimate the variation without requiring the many iterations required by Monte Carlo to calculate the actual distribution. This is an example where the assumption breaks down because the tails of the distribution are not Gaussian. The output referred offset voltage is plotted using the normal quantile-quantile plot, shown in Figure 3. The results show that the tails of the distribution are not Gaussian, see the areas in the green boxes.

Figure 3: Quantile-quantile plot of Output Offset Voltage

 

One other item to notice is that DC mismatch and Monte Carlo mismatch analysis report the same contributors. The contributors are the random variables that result in the largest variation in the output offset voltage.

The summary is that DC mismatch provides a reasonable approximation to Monte Carlo mismatch results and can be used for predicting trends and worst-case corners. The limitation is that DC mismatch relies on the assumption that the distribution is Gaussian. As a result, for signoff, Monte Carlo analysis is the appropriate choice.

Virtuosity: Read Mode Done Right

Because of the ease with which you can set up complex sweep, corner and Monte Carlo simulations, the Virtuoso ADE tools are frequently used to perform verification and regression simulation runs. Those runs are most commonly done by accessing cellviews in read-only mode (RO), so that the “golden” simulation setups are not modified and there is no need to check out the cellviews from the design management (DM) vault. Opening an ADE XL view in read-only mode allows you to run simulations, but when you exit, the simulation results are deleted. In addition, if any small modifications are made in RO mode—correcting a typo, or adjusting a path—those changes are lost unless you save them to a new cellview, which is inconvenient and can lead to confusion. In Virtuoso® ADE Assembler and Virtuoso® ADE Explorer, working with maestro views in read-only mode is more powerful and flexible. The added functionality of RO mode also enables Virtuoso® ADE Verifier to run the maestro views it needs for verification so that the end users’ work is not disrupted.(read more)

Simplifying the Memory Design Process


On today’s SoC designs, the memory control logic and memory arrays take up a large share of the real estate, and the memory system largely determines the performance of the application. Regardless of the processors and the interconnect, the memory system provides the instructions and operands, and the application cannot execute any faster than the memory system can deliver them. Due to this constant demand for increasing memory size and higher performance at advanced nodes, design and verification engineers are faced with multiple challenges.

One of the biggest challenges in memory design is time to market. The designs need to be completed, and ramp to yield, in a very short time frame.  But in trying to complete this process, memory designers often face multiple tool and flow challenges as well.

One of the flow challenges is that different tools are used for the design, verification, and model creation steps. For example, the memory cell design needs a very accurate SPICE simulator as well as extensive variation analysis. But how does a designer ensure consistency and accuracy across multiple tools? For example, the FastSPICE tool used for margin analysis needs to be consistent with the tools or scripts used to generate Liberty timing and power models. These models convey the performance of the memory to the SoC designer, so it is very important that they accurately represent the design. Ensuring this consistency across tools and flows makes the process even harder and longer for designers.

Additionally, there is a significant number of PVT corners that designers need to cover as they move to advanced nodes. Our research shows that about 196 PVT corners are needed to accurately characterize designs at 16nm and below, while 12 PVT corners suffice at 90nm. More PVT corners mean more verification time, which puts additional pressure on time to market. And finally, while performing memory characterization, accurate timing, circuit simulation, power and leakage, and performance metrics must all be met, and currently the only solution is through the use of various point tools. To solve these tool flow challenges, we have created a memory design, verification, and characterization solution. This means that our customers can focus on delivering their memory designs on schedule with the right performance and power, rather than on tools or flows.

The new Cadence® Legato Memory Solution is the industry’s first integrated solution for memory design and verification. It provides a one-stop shop for all memory design, verification and characterization needs, eliminating the complexity of piecing together point tools for multiple design and verification tasks.  The Legato Memory Solution delivers up to 2x runtime improvement while meeting demanding design schedules.  This solution provides a new standard for completion of memory design on-time and with accuracy.   

See also the Breakfast Bytes post Legato: Smooth Memory Design.

Virtuosity: All New XStream In - The Translation Expressway

A layout design has to go through several iterations and multiple data exchanges across tools for different types of processing during the designing process. At each stage, a large-sized, hierarchical layout needs to be imported into Virtuoso. Therefore, you need a translation tool that is fast and reliable. The new IC617/12.2 XStream In translator with its latest performance upgrades ensures this and a lot more. Read on to find out…(read more)

Art of Analog Design Part 7: Mismatch Tuning


In days of future past, we looked at DC mismatch analysis and compared it to Monte Carlo analysis for analyzing the effect of device mismatch on the offset voltage of a differential amplifier. We found that DC mismatch does provide good estimates of the effect of mismatch, with the limitation that it assumes the offset voltage has a Gaussian distribution. Since DC mismatch analysis needs only a single simulation to generate an estimate, we can use it for design exploration. For example, when looking for the worst-case corner for offset voltage, we can use DC mismatch analysis to reduce simulation time.

Suppose that we wanted to find the device size that meets our design specification for offset voltage. Let’s start with the same differential amplifier and assume that the 1-sigma offset voltage should be 1mV. How can we find the optimum gate width that meets this target? One option would be to perform DC mismatch analysis and sweep the n-channel transistor gate width, looking for the gate width that meets the 1mV offset voltage specification.

Figure 1: Parametric Sweep of Device Size vs. Offset Voltage

In this case, we swept the number of fingers for the input pair and can see that we can significantly reduce the area without compromising the offset voltage of the amplifier.
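The shape of such a sweep can be mimicked with a simple Pelgrom-style model, where offset sigma scales as 1/√(area). The coefficient here is hypothetical, not from the GPDK; the real numbers come from the DC mismatch sweep itself:

```python
import math

A_VT = 2.0   # hypothetical mV at 1 finger (Pelgrom-style coefficient)

def sigma_offset(fingers):
    """Offset sigma falls as 1/sqrt(area), i.e. 1/sqrt(fingers)."""
    return A_VT / math.sqrt(fingers)

# Smallest finger count that meets a 1mV (1-sigma) offset spec:
target = 1.0
best = min(f for f in range(1, 65) if sigma_offset(f) <= target)
print(best)  # → 4
```

The square-root law is why the sweep flattens out: each doubling of area buys only a factor of √2 in offset.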

There is actually one alternative to the parametric sweep approach for tuning offset voltage. We can use mismatch analysis to perform the same task. In the Mismatch Contribution window, you can click on the Mismatch Tuner icon, see the red box on Figure 2.  

Figure 2: Using Mismatch Tuner to Size Transistors for Offset Voltage

When you click the Mismatch Tuner icon, you get slider bars that you can adjust, and the results in the Contribution Analysis window are updated. What we see here is that after reducing the gate width of the input transistors by 60%, the offset voltage is still 1mV. This result is consistent with the results of the parametric sweep of DC mismatch analysis: we can reduce the size of the input transistors by 60% and still meet our objectives for offset voltage.

So, which method should I use? If all you are interested in is the offset voltage of a linear analog circuit, then DC mismatch with a parametric sweep may be sufficient. However, in many other cases, this option is not available. Consider the dynamic comparator: it does not have a quiescent operating point, so we can’t use DC mismatch to estimate the input stage scaling. In this case, mismatch tuning can be used. Suppose you need to achieve a 500uV offset voltage; you can either scale the devices or add additional circuitry to calibrate out the offset. After running Monte Carlo analysis, see Figure 3, the current offset voltage of the comparator is about 1mV, good but not good enough to meet the target.

Figure 3: Dynamic Comparator Offset Voltage

So, let’s try using the mismatch tuner, see Figure 4. In this case, we see that we need to increase the device size by 4x to reduce the offset voltage to an acceptable level. Based on this result, the designer needs to decide which approach to take, scaling the input devices or adding an offset calibration, to better optimize area and power. So, we can use mismatch tuning to gain insight into how variation impacts offset voltage. Another use case: suppose you have several parameters to trade off, such as offset voltage, power supply rejection ratio, common-mode rejection ratio, and bandwidth. Mismatch tuning allows you to visualize the interaction between device scaling and multiple parameters. So, while the two approaches overlap, using the mismatch tuner is a more general solution for analyzing the effect of mismatch on circuit performance.


Figure 4: Dynamic Comparator Offset Voltage Mismatch Tuning

One thing to keep in mind when using either DC mismatch analysis or mismatch tuning is that both rely on mathematical approximations to estimate the effect of mismatch, so their results should be verified with Monte Carlo analysis. In this case, the results were checked after mismatch tuning. Before sizing, the offset voltage was 938uV. Mismatch tuning suggested that increasing the device size by about 4x would reduce the offset voltage to 480uV. Monte Carlo analysis shows that the actual offset voltage after tuning was 406uV (see Figure 5).

Figure 5: Dynamic Comparator Offset Voltage Monte Carlo Results after mismatch tuning
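The square-root-of-area scaling behind these numbers can be sketched with Pelgrom's law, under which the mismatch-driven offset falls as 1/sqrt(device area). This is a simplified model, not the tool's actual computation; only the 938uV starting point comes from the text above:

```python
import math

def scaled_offset_sigma(sigma0_uV: float, area_scale: float) -> float:
    """Pelgrom's law sketch: sigma(Vos) is proportional to 1/sqrt(W*L),
    so multiplying the device area by area_scale divides the
    mismatch-driven offset by sqrt(area_scale)."""
    return sigma0_uV / math.sqrt(area_scale)

# Starting from the 938uV offset measured before tuning, a 4x area
# increase predicts roughly:
print(f"{scaled_offset_sigma(938.0, 4.0):.0f}uV")  # 469uV, close to the
# 480uV the tuner estimated and the 406uV Monte Carlo later measured
```

The simple model lands within a few percent of both the tuner's estimate and the verifying Monte Carlo run, which is why area scaling is such a reliable first-order knob for offset.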

Over the last two blogs, we have looked at DC mismatch analysis. In the previous blog, we compared the results from DC mismatch analysis with Monte Carlo analysis as a tool for estimating offset voltage. Then, in this blog, we looked at using DC mismatch as a design tool to improve our design. In the next blog, we will take a similar look at AC mismatch analysis.

Dealing with AOCVs in SRAMs


Systems on Chip, or SoCs as they’re more commonly called, have become increasingly more complex, and incorporate a dizzying array of functionality to keep up with the evolving trends of technology. Today’s SoCs are humongous multi-billion-gate designs with huge memories to enable complex and high-performance functions that are executed on them. It is quite common to have about 40% of an SoC’s real estate used for Static Random Access Memory (SRAM). SRAM design is a complex and highly sensitive process, and what we want to design in the silicon is often different from what actually comes out of the manufacturing process. This is due to Advanced On-Chip Variations, or AOCVs.

AOCVs occur in the device manufacturing processes, and there are two kinds:

  1. Systematic Variations: These are caused by variations in gate oxide thickness, implant doses and metal or dielectric thickness. They are deterministic in nature, and exhibit spatial correlation – i.e., they are proportional to the cell location of the path being analyzed.
  2. Random Variations: These are random, as the name suggests, and therefore are non-deterministic. They are proportional to the logic depth of the path being analyzed, and tend to statistically cancel each other out given a long enough path.
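The statistical cancellation described in point 2 can be seen in a quick synthetic Monte Carlo sketch. The 10-unit cell delay and 10% per-cell sigma below are made-up illustrative numbers, not process data: the absolute spread of a path delay grows like sqrt(N) with logic depth N, so the relative spread shrinks like 1/sqrt(N).

```python
import random
import statistics

def relative_path_sigma(n_stages: int, cell_delay=10.0, cell_sigma=1.0,
                        trials=20000, seed=1):
    """Sum n_stages independent random cell delays and return sigma/mean
    of the total path delay. Independent variations add in quadrature,
    so the relative spread falls as 1/sqrt(n_stages)."""
    rng = random.Random(seed)
    paths = [sum(rng.gauss(cell_delay, cell_sigma) for _ in range(n_stages))
             for _ in range(trials)]
    return statistics.stdev(paths) / statistics.mean(paths)

for n in (1, 4, 16, 64):
    print(n, relative_path_sigma(n))
# the relative sigma roughly halves each time the depth quadruples
```

This is exactly why depth-dependent derating tables are the natural form for the random component of an AOCV model.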

As can be deduced, the effects of these variations are getting more pronounced as process geometries are shrinking, and so dealing with them in an effective manner is crucial to the proper functioning of an SoC. And therein lies the rub.

Traditional Solutions for AOCVs in SRAMs

AOCVs need to be modeled effectively, so their effects can be taken into account for the ultimate SRAM design to be successful. This means the design needs to be simulated to account for the random and deterministic process variations. Most companies deal with this in one of the following two ways:

  1. Running a Monte Carlo simulation on the full memory instance RC extracted netlist

This approach involves creating a simulatable instance netlist from the instance schematic and running Monte Carlo simulations on the complete netlist, multiple times. This gives the most accurate results. However, it is an incredibly CPU- and memory-intensive approach, with run times lasting several days, huge runtime memory requirements, and the need for bigger LSF machines.

  2. Running Monte Carlo simulations on the critical path RC netlist

This approach involves reducing the netlist drastically by identifying repetitive cells in the memory and replacing them with a load model. You then create a critical path schematic for each component to be simulated and run Monte Carlo on it. While this approach is much faster than the previous one, it still involves several thousand nodes and instances, and runtime is still on the order of a few days. Additionally, it takes time to create critical path schematics for the different components and to ensure the setup is correct. Creating a critical path involves manual effort and is error prone, making it a less than ideal solution.

So what is a designer to do?

Enter the approach used by our customer, Invecas. Their solution is based entirely on the Legato Memory Solution, specifically Liberate-MX runs with Spectre simulations. It relies on reusing the characterization database from the Liberate-MX runs, which means no additional time is spent setting up the environment. It also reuses the partition netlist created by the Liberate-MX flow; Liberate has the built-in intelligence to identify the dynamic partition and the activity factor. This approach requires the least runtime and memory.

So how does this work?

Liberate runs a Fast-SPICE tool under the hood to identify the worst-case path that is active and toggling, and extracts only that path. An accurate SPICE run is then performed on it to produce the accurate .libs; generating these .libs is already part of the Liberate-MX flow available today. Invecas modified this flow for AOCV by taking this partition, with all the accompanying setup and nodes, and adding a couple of commands for Monte Carlo runs. The script then runs Monte Carlo on the greatly reduced partition and returns AOCV models, with all the derating values, in a matter of hours instead of days or even weeks.

The comparison of results between the three approaches can be summarized below.

| Metric | Method 1: Full Instance Sims (300 MC runs) | Method 2: Critical Path Sims | Invecas Method: Partition Netlist Sims | Invecas Improvement over Method 1 | Invecas Improvement over Method 2 |
|---|---|---|---|---|---|
| No. of Devices | 7,440,000 | 17,000 | 560 | 13,285.71x | 30.36x |
| No. of Nodes | 22,400,000 | 317,000 | 12,300 | 1,821.14x | 25.77x |
| No. of RC Elements | 22,000,000 | 231,000 | 12,000 | 1,833.33x | 19.25x |
| Run Time (Hours) | 350 | 84 | 1.45 | 241.38x | 57.93x |
| Run Memory (GB) | 50 | 10 | 1 | 50x | 10x |

The side-by-side testing clearly shows that the Invecas method using the Legato Memory Solution greatly reduces the number of devices, nodes, and RC elements that the Monte Carlo run uses, from several million to a few thousand. This in turn cuts the runtime and memory requirements by orders of magnitude, solving the biggest problem designers face today.
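As a sanity check, the improvement factors in the comparison are just ratios of the corresponding columns; a few of the figures quoted above reproduce exactly:

```python
# Figures quoted in the comparison (full-instance vs. Invecas partition runs)
full = {"devices": 7_440_000, "nodes": 22_400_000, "runtime_h": 350, "mem_gb": 50}
partition = {"devices": 560, "nodes": 12_300, "runtime_h": 1.45, "mem_gb": 1}

for key in full:
    print(f"{key}: {full[key] / partition[key]:.2f}x improvement")
# devices: 13285.71x, nodes: 1821.14x, runtime_h: 241.38x, mem_gb: 50.00x
```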

Please visit our page to find out more about this process, or to read about the Cadence Legato Memory Solution.


Virtuosity: Organizing Waveform Families

When plotting waveforms in Virtuoso Visualization and Analysis across sweeps, you might want to group plots with the same values together, or display each corner in the same color, and so on. Of course, you can right-click on a plot and select Copy to or Move to and move the plots manually, but did you know there is an assistant to do this for you?(read more)

Virtuosity: Using the Expression Builder to Plot across All Corners and Points



The Expression Builder has simplified writing complex expressions and can plot or evaluate particular points and corners. But we wanted it to do more: recently, we added the ability to plot or evaluate across all points and/or all corners.

This makes it really easy to see your plots across sweeps or corners from different histories or tests.

There are new options for All under the Point and Corner drop-downs.

So this makes it easy to plot this clip expression for one point across all corners:

Or one corner across all points:

Or all points across all corners:


Related Resources

About Virtuosity

Virtuosity has been our most viewed and admired blog series for a long time, bringing to the fore some lesser-known yet very useful software and documentation improvements, and shedding light on some exciting new offerings in Virtuoso. We are now expanding the scope of this series by broadcasting the voices of different bloggers and experts, who will continue to preserve the legacy of Virtuosity and give it new dimensions by covering topics across the length and breadth of Virtuoso, and a lot more… Click Subscribe to visit the Subscription box at the top of the page, where you can submit your email address to receive notifications about our latest Virtuosity posts. Happy Reading!

Virtuosity: CDNLive India—Our Window to KYC!



In line with the recently-implemented mandate in India requiring banks and financial institutions to regularly run “Know Your Customer (KYC)” cycles, CDNLive India has become a reliable event for the Technical Communications Engineering team to regularly touch base with customers, and to ensure the team knows their customers in order to exceed customer expectations.

The Publications Infrastructure and CPG Technical Communications teams, both part of Technical Communications Engineering, jointly ran a booth during CDNLive in India. The main attraction for the approximately 100 Cadence customers who visited was the upcoming Cadence Help 3.0 release, with its much-enhanced search functionality and improved performance. Another big draw was the impressive desk calendar that visitors could win by correctly answering quiz questions about upcoming Cadence Help functionality. All visitors gained from the live demonstration and preview of the new help features.

Not only was the event a win-win for customers, the Cadence representative teams had much to gain from first-hand views on what customers think about the next planned Cadence Help upgrade, and future online delivery platform the team is targeting. The teams captured customer pulse through an online survey and engaged with experienced and relatively new Cadence product users to fully understand customer requirements in terms of content accessibility, the infrastructure these organizations support, and their overall organizational setup.

Together, the online survey findings and the information gathered by the team through detailed discussions with stakeholders from small and large customer accounts gave a good window to explore what’s possible. The team is now using the collated data to validate their assumptions and plans about future CH upgrades, and is meticulously running through the collated information.

While it wasn’t on the agenda, the team also gathered customer feedback on Cadence content, content delivery and accessibility, and preferences. The team also took the opportunity to educate customers about blogs, more specifically about Virtuosity and Virtuoso Video Diary blogs that are collaboratively being written by the CPG Technical Communications team and other members of the cross-functional teams.

- Rishu Misri Jaggi

Virtuosity: Can I Plot Signals with Different Axis Units in the Same Window?

The Art of Analog Design Part 4: Mismatch Analysis


In Part 3, we started to explore how to analyze the results of Monte Carlo analysis. In Part 4, we will consider the question: what is the relationship between process variation and the circuit’s performance variation? The tool for exploring this relationship is mismatch analysis in the Virtuoso® Variation Option (VVO).

Let’s start by looking at a simple example that shows the sources of offset voltage of a two-pole operational amplifier, see Figure 1.

Figure 1: Two Pole Operational Amplifier

Looking at the design, we would expect that mismatch of the p-channel input transistors is the primary source of offset voltage. First, let’s look at the Monte Carlo simulation results for the op-amp (see Figure 2).

Figure 2: Monte Carlo Analysis Results

The results show that the offset voltage is ~7.3mV. While Monte Carlo analysis tells us how much offset voltage there is, it does not tell us anything about its source or how much improvement can be achieved. So, what are the sources of the offset voltage? After Monte Carlo analysis, we can plot the offset voltage against the threshold voltages of the input p-channel transistors, M17 and PM5, and of the n-channel transistors in the first-stage current-mirror load. The scatter plots in Figure 3 show effectively zero correlation between the device threshold voltages and the offset voltage of the operational amplifier.

Figure 3: Scatter Plots, Threshold Voltage versus Offset Voltage
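This "no correlation yet strong dependence" behavior is easy to reproduce with synthetic data (an illustration only, not the op-amp's actual distributions): for a symmetric second-order relationship, linear (Pearson) correlation comes out near zero even though a quadratic model explains essentially all of the variance.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient: covariance normalized
    by the two standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(7)
dvt = [rng.gauss(0.0, 1.0) for _ in range(5000)]  # threshold-voltage deltas
vos = [x * x for x in dvt]                        # purely second-order effect

r_linear = pearson(dvt, vos)                      # ~0: the scatter plot looks flat
r_quadratic = pearson([x * x for x in dvt], vos)  # ~1: a quadratic model fits
print(r_linear, r_quadratic)
```

A scatter plot reports only `r_linear`, so the dependency is invisible there; a tool that fits higher-order terms, as mismatch analysis does, recovers it.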

Now let’s try using contribution analysis, see Figure 4.

Figure 4: Mismatch Analysis Results

Mismatch analysis shows the relationship between the threshold voltage and the offset voltage. The reason the scatter plot showed no correlation is that it looks for linear correlation only; mismatch analysis reports that the dependency is second order (the label shows R^2). The results show that most of the variation, 99.997%, can be explained by the threshold variation of M17, PM5, NM4, and NM6. They also show that ~70% of the offset voltage variation is due to the p-channel devices, with M17 contributing 34% and PM5 contributing 34%. The remaining ~30% comes from the n-channel threshold voltage variation.

Let’s use this information and see if we can improve the design. Since the p-channel contributes most of the offset voltage, we will try an experiment. We will increase the p-channel transistor area by 16x, length by 4x and width by 4x, keeping the W/L ratio constant. Increasing the device size should decrease the effect of p-channel mismatch by a factor of four.

Figure 5: Monte Carlo Analysis with 16x P-Channel

The effect of scaling the p-channel transistors is to reduce the offset voltage of the op-amp from 7.2mV to 3.7mV. Doing some math, the p-channel offset contribution is ~6.4mV and the n-channel contribution is ~3.3mV. Verifying the offset voltage: initially it is sqrt(6.4² + 3.3²) ≈ 7.2mV, and after device sizing it is sqrt((6.4/4)² + 3.3²) ≈ 3.7mV.
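The quadrature arithmetic in this paragraph can be checked directly; uncorrelated mismatch contributions combine as a root sum of squares (a standard assumption for independent variation sources):

```python
import math

def rss(*contributions_mV):
    """Independent offset contributions combine as the root sum of squares."""
    return math.sqrt(sum(c * c for c in contributions_mV))

p_ch, n_ch = 6.4, 3.3          # mV: p-channel and n-channel contributions
before = rss(p_ch, n_ch)       # ~7.2mV total offset before sizing
after = rss(p_ch / 4, n_ch)    # 16x area divides the p-channel term by 4: ~3.7mV
print(f"before: {before:.1f}mV, after: {after:.1f}mV")
```

Note that once the p-channel term is knocked down, the n-channel term dominates, so further p-channel sizing would buy very little.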

This example shows how mismatch analysis can be used to understand the effect of process variation on circuit performance. While we understand qualitatively that the input transistors are the primary contributor to offset voltage, mismatch analysis provides us a tool for quantitative analysis of variation. In the next blog, we will apply mismatch analysis to additional circuits.
