Advantest Talks Semi
Dive into the world of semiconductors and Automatic Test Equipment with our educational podcast, Advantest Talks Semi, where we explore the power of knowledge in this dynamic field. Hosted by Keith Schaub, Vice President of Technology and Strategy at Advantest, and Don Ong, Director and Head of Innovation for the Advantest Field Service Business Group, this series features insightful conversations with experts and thought leaders in the industry.
In today's fast-paced environment, continuous learning is essential for staying ahead. Join us in these thought-provoking discussions, where you can learn about the latest trends and cutting-edge strategies being used in the semiconductor industry. Explore how innovative technologies are revolutionizing testing processes and shaping the future.
Stay updated on the ever-evolving semiconductor industry with Advantest Talks Semi, and gain exclusive insights into the future of technology.
The views, information, or opinions expressed during the Advantest Talks Semi series are solely those of the individuals interviewed and do not necessarily represent those of Advantest.
How to Advance Chip Design with Digital Twins and EDA
Transform your understanding of semiconductor design and testing with the latest episode of Advantest Talks Semi. We are thrilled to introduce our new co-host, Don Ong, and to welcome Ron Press and Marc Hutner from Siemens as they share groundbreaking insights into Electronic Design Automation (EDA).
Ron Press is the Senior Director for Technology Enablement at Siemens, where he leads a team dedicated to ensuring the successful implementation and use of Siemens' Tessent DFT and test product capabilities. His responsibilities include enabling application engineers, developing reference flows, and supporting customers in utilizing Siemens' released products effectively.
Ron began his career at Raytheon, working on AI and neural networks, and in 1993, he co-authored a paper on using neural networks to address built-in self-test (BIST) false alarms, highlighting his early contributions to AI in semiconductor testing. He worked as a Design for Test (DFT) architect at Raytheon before moving to Harris RF, where he focused on communication systems. Ron later joined Mentor Graphics, which was subsequently acquired by Siemens. Throughout his career, he has integrated AI and machine learning into semiconductor testing and design, significantly advancing the field.
Marc Hutner is the Director of Product Management for Yield Learning Products at Siemens. In his role, Marc focuses on yield learning and silicon debugging, aiming to identify yield limiters and improve semiconductor testing processes.
Marc joined Siemens over a year ago, bringing more than 20 years of experience in the semiconductor test industry. Before joining Siemens, he spent two years at proteanTecs as the Senior Director of Product Marketing for IP and Analytics for silicon health. Marc’s background includes working as an architect for test equipment and developing innovative concepts in semiconductor testing. His extensive experience positions him as a key figure in advancing Siemens' yield learning and debug capabilities.
Don Ong is the Director and Head of Innovation for the Advantest Field Service Business Group, a division of Advantest Inc. In his role, Don is responsible for driving the strategic development and implementation of innovative solutions that enhance Advantest’s Field Service business and operations. His responsibilities include identifying emerging technologies, leading cross-functional innovation projects, and fostering a culture of continuous improvement to optimize service efficiency, customer satisfaction, and business growth. Don works closely with internal teams and external partners, including startups, universities, and third parties, to ensure the successful integration of new technologies that meet the evolving needs of the semiconductor industry.
With over 20 years of experience in the semiconductor industry, Don has held multiple roles, including program management, product, application, and test engineering, in both Silicon Valley and Asia. He holds a Bachelor's degree in Electrical and Electronics Engineering from Nanyang Technological University in Singapore and a Master of Studies degree from the University of Cambridge in the United Kingdom.
In this episode, we discover how scan technology and automatic test pattern generation (ATPG) have revolutionized the testing process, significantly boosting defect detection rates. Marc delves into digital twins, explaining their pivotal role in creating virtual representations for real-time system optimization.
Thanks for tuning in to "Advantest Talks Semi"!
If you enjoyed this episode, we'd love to hear from you! Please take a moment to leave a rating on Apple Podcasts. Your feedback helps us improve and reach new listeners.
Don't forget to subscribe and share with your friends. We appreciate your support!
Keith Schaub: Welcome back to another exciting episode of Advantest Talks Semi. On today's episode, we'll explore how digital twins, virtual replicas of reality, are being harnessed within Electronic Design Automation (EDA) to push the boundaries of what's possible in chip design and manufacturing. Digital twins, much like a sophisticated physics simulator, can replay past events, reflect current conditions, and even predict future outcomes. Meanwhile, EDA is the essential software suite for designing, simulating, and verifying complex integrated circuits, forming the backbone of modern semiconductor innovation. The convergence of these two powerful technologies is where today's conversation will focus. But before we launch into today's episode, I have some exciting news to share with our listeners.
Keith Schaub: Joining the program as a new co-host is Mr. Don Ong, Director and Head of Innovation for the Advantest Field Service Business Group. Don is a driving force behind the strategic development and implementation of innovative solutions at Advantest. His role involves identifying emerging technologies, leading cross-functional innovation projects, and fostering a culture of continuous improvement. His work includes close collaboration with internal teams and external partners such as startups, universities, and third parties, which you'll hear more about in upcoming podcasts, so be sure to watch out for that. Together, Don and I will be diving into the latest trends and innovations in the semiconductor industry. So, Don, want to say a quick hello to the listeners?
Don Ong: Thanks, Keith. Hi, everyone! Happy to be here.
Keith Schaub: Don, welcome to the show. In this episode, we'll uncover how Siemens EDA is pioneering the use of digital twin technology to revolutionize semiconductor design and testing. We'll discuss how these digital twins are created, how they continuously evolve through real-time data, and the transformative impact they have on improving product quality, accelerating time to market, and achieving sustainable manufacturing. To guide us through this journey, we're joined by two esteemed experts from Siemens: first, Ron Press, Senior Director of Technology Enablement at Siemens. Welcome to the show, Ron.
Ron Press: Thank you, Keith, happy to be here.
Keith Schaub: Glad to have you! And Marc Hutner. Marc is the Director of Product Management for Yield Learning Products at Siemens, where he plays a crucial role in enhancing yield learning and silicon debug processes. Welcome, Marc.
Marc Hutner: Thanks, Keith, and it's a joy to be back on the show.
Keith Schaub: Yeah, I think this is your third time now. We wanted to kick things off, Ron, by bringing the listeners up to speed on the evolution and history of EDA. EDA's been around 50, 60 years. Walk us through that history and bring us up to how Siemens thinks about and looks at EDA today.
Ron Press: When semiconductor technology first started, the devices were pretty simple, and you had a function the device was supposed to achieve. As devices became more and more complex, it became pretty difficult for an engineer to figure out how to make a set of sequences and patterns to test a device and make sure it was manufactured okay, with no defect in it. So decades ago someone came up with this idea of scan technology. Because of the complexity of a semiconductor design, it usually takes many, many tester clock cycles to get from an input pin to an output response; in today's world, it's millions and millions of cycles. So the idea was to convert all of the registers, which are flip-flops or latches, into controllable and observable elements. We call these elements scan cells, so in test mode we can control them and shift into them. Then we line them all up as giant shift registers, which we call scan chains. With these scan chains, we can put the circuit into a test mode and load up, in today's world, millions, up to hundreds of millions, of controllable points. We load the values we want from a tester, capture the functional response, then unload those values and see whether there is a potential defect or not.
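The shift-in/capture/shift-out flow Ron describes can be sketched as a toy Python model. Everything here is invented for illustration (the chain, the "combinational logic", the function names); it is not code from any real DFT tool, but it shows the mechanics of a scan chain acting as one long shift register.

```python
# Toy model of a single scan chain: shift a test pattern in, capture the
# combinational logic's response, then shift the result back out.

def scan_test(chain_length, pattern, comb_logic):
    """Load `pattern` into the scan chain, capture the combinational
    response, and return the values shifted back out."""
    chain = [0] * chain_length
    # Shift mode: clock the pattern in, one bit per tester cycle.
    for bit in pattern:
        chain = [bit] + chain[:-1]
    # Capture mode: one functional clock latches the logic's response.
    chain = comb_logic(chain)
    # Shift mode again: unload the captured response.
    out = []
    for _ in range(chain_length):
        out.append(chain[-1])
        chain = [0] + chain[:-1]
    return out

# Example "combinational logic": each cell captures the AND of itself
# and its left neighbor (cell 0 keeps its value).
def and_neighbors(cells):
    return [cells[0]] + [cells[i] & cells[i - 1] for i in range(1, len(cells))]
```

With identity logic (`lambda c: c`), the pattern comes back unchanged, which is exactly the chain-integrity check testers run before applying real patterns.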
Ron Press: What this enabled was, instead of having to know this entire, really complex semiconductor design, we break the design into these loadable scan elements with simple combinational logic between them, and we call this structural design for test.
Ron Press: So we're not actually testing the overall function with our scan test. We're breaking the design into these pieces and testing the structure, and if the structure works, the overall design is going to work. By doing this we can do automatic test pattern generation, or ATPG, and with ATPG it's pretty easy for tools to get 99% detection of all these potential defects. For the modeling that lets us figure out what patterns to make and what values to load into the scan cells, we use a very simple digital twin of what a real defect looks like in semiconductors. The first such fault model was just a net stuck at one or stuck at zero. Then, as manufacturing and tolerances kept evolving, we realized we needed more dynamic fault models. So we look for some type of transition defect, that is, a gross delay, or even a small delay, a more subtle defect, and then we started modeling other types of conditions, such as opens and bridges.
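To make the stuck-at fault model concrete, here is a minimal sketch, with an invented three-gate circuit and made-up net names. A pattern "detects" a fault when the faulty circuit's output differs from the fault-free one, which is the core idea ATPG automates at scale.

```python
# Illustrative stuck-at fault model: evaluate a tiny gate-level circuit
# with an optional stuck-at fault injected on one net, and check whether
# a given input pattern detects the fault.

def eval_circuit(inputs, fault=None):
    """Circuit: n1 = a AND b, n2 = n1 OR c, output = n2.
    `fault` is (net_name, stuck_value) or None."""
    def val(name, value):
        if fault and fault[0] == name:
            return fault[1]  # the net is stuck, regardless of its drivers
        return value
    a = val("a", inputs["a"])
    b = val("b", inputs["b"])
    c = val("c", inputs["c"])
    n1 = val("n1", a & b)
    n2 = val("n2", n1 | c)
    return n2

def detects(pattern, fault):
    """A pattern detects a fault if it makes the faulty circuit's
    output differ from the fault-free output."""
    return eval_circuit(pattern) != eval_circuit(pattern, fault)
```

For example, `{a:1, b:1, c:0}` detects `n1` stuck-at-0 (good output 1, faulty output 0), while `{a:0, b:0, c:1}` does not, because `c` masks the fault.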
Keith Schaub: You mentioned something there: digital twin, which is the focus of today's podcast. So first, I want to understand what a digital twin is.
Marc Hutner: A digital twin is really a virtual representation of a physical system that allows you to do all kinds of real-time monitoring or optimization experiments. So it's: how do I represent this physical system in a simulation? And the question becomes: at what level do you make that simulation work? What Ron was describing was really at the chip level, the combinational logic in the device. You could very easily make a digital twin of a higher-order system. Your digital twin could be the tester talking to the device; there you would have the test program stimulating the device, and a larger simulation looking for different kinds of faults and making complex decisions. Something like a test flow would be a good example of that.
Marc Hutner: Now for larger systems, people are doing this for factory floors or assembly of higher-level products. Another example: there are some really good videos of the Boeing 777, where they were looking at how to construct a plane and all the interference, modeling, and conditions that would go along with it. It could be thermal, it could be other effects as well. So there are lots of conditions. It's really: what is the goal of the models you want to get, and then mitigating the risk of these systems. Our Siemens CEO, Roland Busch, spoke at CES, and he mentioned digital twins and sustainability as part of that talk.
Ron Press: And there's a TED Talk from our Digital Industries CEO, Cedrik Neike, where he talks about how sustainability is a key initiative within Siemens, and how we can use a digital twin to help with sustainability, even at a high level.
Ron Press: We're modeling something physical. So instead of having to take physical resources, spend time doing experiments with them, change things, re-experiment, and take a long time to get to a solution, with a digital twin we can do this modeling virtually in software and get to a solution much faster, without wasting resources.
Ron Press: And that's the type of thing we can do within the DFT and test world, because scan diagnosis is basically a digital twin for failure analysis. Instead of having to take a physical device that's failing, look at it, and try to figure out what's going wrong inside it, we can do a virtual failure analysis using our software. When something fails on the tester, we'll look at the results, and because we have these hundreds of millions of scan points, we can figure out pretty precisely where that defect likely occurred. And we can do this on millions of devices within several days. All this information is a great, vast set of data that we can now apply machine learning to, to find out: are there systematic yield limiters that are hard for us to recognize?
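One way to picture scan diagnosis is as signature matching: compare the failing scan cells observed on the tester against the failures each candidate defect site would predict, and rank the candidates. The sketch below is a deliberately simplified, hypothetical version (the fault dictionary, cell names, and Jaccard scoring are all invented, not the actual diagnosis algorithm).

```python
# Toy scan diagnosis: rank candidate defect sites by how well their
# predicted failing-cell sets explain the observed tester failures.

def diagnose(observed_fails, fault_dictionary):
    """observed_fails: set of failing scan cells seen on the tester.
    fault_dictionary: {candidate_name: predicted failing-cell set}.
    Returns candidates sorted by Jaccard similarity, best first."""
    def score(predicted):
        union = observed_fails | predicted
        return len(observed_fails & predicted) / len(union) if union else 0.0
    ranked = sorted(fault_dictionary.items(),
                    key=lambda kv: score(kv[1]), reverse=True)
    return [(name, round(score(cells), 2)) for name, cells in ranked]

# Hypothetical predicted signatures for three candidate defect sites.
dictionary = {
    "open_net_u12": {"c3", "c7", "c9"},
    "bridge_u5_u6": {"c1", "c2"},
    "stuck_u20":    {"c7"},
}
```

Observing fails on cells `c3` and `c7` would rank `open_net_u12` as the most likely site, the kind of ranked callout a failure analyst could take to the microscope.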
Don Ong: So a digital twin, like you mentioned, is a replica of a physical system in the digital world. But EDA is already in the digital world. In this case, why does EDA need a digital twin?
Marc Hutner: The EDA side has really been leading the charge toward digital twins, because we started by creating simulation models. So it's really one of the first kinds of digital twins, and it has shown how to be effective. I think the question is how you expand the use of the models; it might be simplification of the models at different levels. The industry has brought up this new terminology around digital twins, but there are so many more ways you can use this information that it's really important to capture the essence of what we mean by a digital twin and then extend it to these new use cases.
Ron Press: Yeah, and I have an example in the ATPG world. As semiconductor devices advance and get more complex, and the fabrication technology becomes smaller and runs faster, we find there are new defects that start to occur that our old modeling doesn't cover sufficiently. So one of the newer types of digital twin of a defect that we have now is the small delay defect.
Ron Press: We used to just have a timing-aware model to test small delays. It looks at the delays through a circuit, and then we figure out how to test from start point to end point between scan cells for these potential defects. But now we also include the modeling of the technology cells, which are the design cells that map a regular logic gate (an AND gate, an OR gate, a flop, an XOR, or something more complex) onto what is used to fabricate the semiconductor device. We model those technology cells, figure out all the potential physical defects within them, and figure out the delays within each cell. Now we can combine those delays with our timing-aware analysis to find the biggest delay path through the circuit and automatically create tests to make sure we're not letting any small delay defect escape our manufacturing.
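The timing-aware idea reduces to a longest-path question: given per-cell delays, which route between two scan cells accumulates the most delay, and therefore deserves a targeted test? The sketch below uses an invented four-gate graph and made-up delay numbers; real tools work on full netlists with far richer timing data.

```python
# Sketch of timing-aware path selection: find the largest-delay path
# between two scan cells in a small DAG of technology cells.

def longest_delay_path(graph, delays, start, end):
    """graph: node -> list of successors; delays: node -> cell delay.
    Returns (total_delay, path) for the max-delay start-to-end path."""
    best = (float("-inf"), [])
    def dfs(node, acc, path):
        nonlocal best
        acc += delays[node]
        path = path + [node]
        if node == end:
            if acc > best[0]:
                best = (acc, path)
            return
        for nxt in graph.get(node, []):
            dfs(nxt, acc, path)
    dfs(start, 0, [])
    return best

# Hypothetical netlist fragment between scan cells SC1 and SC2.
graph = {"SC1": ["AND1", "OR1"], "AND1": ["XOR1"],
         "OR1": ["XOR1"], "XOR1": ["SC2"]}
delays = {"SC1": 0, "AND1": 3, "OR1": 5, "XOR1": 2, "SC2": 0}
```

Here the route through `OR1` (total delay 7) beats the route through `AND1` (total delay 5), so a small-delay test would target the `OR1` path.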
Don Ong: How can the integration of digital twins into the EDA workflow enhance what it's supposed to do in the first place, namely the co-simulation of multi-domain systems across electrical, thermal, and mechanical domains? And what challenges might arise in ensuring that all these simulations are accurate and synchronized across domains?
Ron Press: For us, it's been important to work with our partners, often industry leaders, to make sure that when we come up with a new model, whether it's a fault model or a way to do our diagnosis or yield learning, we validate it. We'll validate it in software, but we're not going to release it to the industry until we validate it with a partner who produces real silicon and proves: is this model accurate? Is it finding what we're trying to find? Is it giving us the result we're looking for? And that's one of the things I think we're pretty good at at Siemens: we publish a lot of industry results using these technologies we develop with our partners, to show the cost of doing the work as well as the type of result you might achieve.
Marc Hutner: An example of this is an ISTFA paper, for a failure analysis conference, back in 2021, where we demonstrated with Qualcomm a 2.4% improvement in resolution. You couldn't really claim that with just the technology we produce on its own. It's really: prove it with real silicon, and that's why people trust us and our technologies when we deploy them. It's not only developing a technology and a thesis about it; it's the real data and demonstration.
Keith Schaub: How much savings do you actually get out of it? Is it like 10%? Are you saving 50% of the time to market, for example? What can you say about the type of savings or the type of benefits that customers see?
Marc Hutner: If you think about it in terms of failure analysis, consider the time it takes to do a single failure analysis. You first have to find a part that has a problem, usually on a tester or on a board. Just the mere fact of identifying the part, pulling it from a board, and trying to recreate the fault can take weeks or even a month. Then there's the question of how to actually stimulate and measure it, and maybe even de-layer the part to diagnose it down to a physical cause. So you're talking about tens of thousands of dollars and many, many weeks of time. And that's if you get lucky: about 50% of the time you'll get a "fault not found" result.
Marc Hutner: So you really find a collection of parts that have similar failure signatures and hope to catch one of them. It does affect your time to market in certain markets like automotive, where ISO 26262 requires that you get to a root cause within a certain number of weeks. For certain markets, be it automotive or medical, you have to do this analysis a lot quicker, and that's also where a digital twin comes in: how do you get to that answer quicker? It meets the requirements of regulated markets and gets you to time to market in other markets. So there are real benefits for customers from incorporating a digital twin, simulation, or virtualization.
Ron Press: One other thing the digital twin gives us the opportunity to do: because we have so much happening in the digital world, we have a lot more information available, and we can get that information quicker. In the diagnosis world, as Marc was describing, you can have millions of diagnosis reports and apply them to machine learning. In the ATPG and architecture world, it allows us to do something similar. We have a technology called Streaming Scan Network. With it, you take all your individual core designs, do your DFT, and set things up so you can create test patterns for cores as they're being developed. Traditionally, a lot of time goes into trade-offs: figuring out how to get I/O pin bandwidth to each of these cores, which cores to test in parallel, how to optimize them, how big their patterns are. There are a lot of variables you have to figure out to solve this overall problem.
Ron Press: But with Streaming Scan Network technology, SSN, what we do is use packetized data communication. We use however many pins are available and deliver packetized information to the cores. Any number of bits a core demands in parallel is fine, because it's packetized communication: we can deal with one core needing one bit, another core needing 64 bits, and the bus width can be any number of bits, from one up to whatever bigger number.
Ron Press: But by doing this, we basically don't have to solve it as a human.
Ron Press: We have a digital twin, where all the information about how many bits each core needs and how big the patterns are is available, and all that optimization can now happen in software automatically.
Ron Press: It used to be you'd have to do all these trade-offs to figure out how to architect the design. Some of our partners who have published on this have said they had a 10x productivity improvement just in designing their DFT architecture, because we removed all these variables. The user doesn't have to figure this out, and the architect of the core doesn't need to worry about it, because later on, when you say, here's my overall chip design, these are the cores I want to test and the patterns to apply, the software does all this optimization automatically. That's basically a simple version of a digital twin, and we take it to the next step by putting that automation in software, so the software, instead of a human, does the work. And I think there's a correlation in the diagnosis world too, where we can apply machine learning so the human doesn't have to figure out what's going on. The software will figure it out for us.
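The packetized-delivery idea can be sketched in a few lines: cores with different per-cycle bit demands share one fixed-width bus, and software slices the stream back out per core. This is a toy illustration of the concept only, not the real SSN protocol or its packet format.

```python
# Illustrative packing in the spirit of a packetized scan network:
# concatenate per-core scan payloads into fixed-width bus words, then
# recover each core's slice on the other side.

def pack(core_bits, bus_width):
    """core_bits: {core: [bits]} in delivery order. Returns a list of
    bus words, padding the final word with zeros."""
    stream = [b for bits in core_bits.values() for b in bits]
    words = []
    for i in range(0, len(stream), bus_width):
        word = stream[i:i + bus_width]
        word += [0] * (bus_width - len(word))  # pad the last word
        words.append(word)
    return words

def unpack(words, core_lengths):
    """Reverse: flatten the bus words and hand each core its slice."""
    stream = [b for word in words for b in word]
    out, pos = {}, 0
    for core, n in core_lengths.items():
        out[core] = stream[pos:pos + n]
        pos += n
    return out
```

Note how one core needing 1 bit and another needing 5 share the same 4-bit bus without any per-core wiring decision; that is the trade-off the software removes from the human architect.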
Don Ong: How does the digital twin in a factory setting interact with the digital twin within EDA? How does that enhance the whole data transfer, and how do you use that kind of data?
Ron Press: At Siemens corporate, we've been working on the digital factory, which is a digital twin throughout the factory, with a digital thread, the information flow throughout the factory. And there's a challenge where we're trying to bring this into the industrial metaverse. This is at the corporate level, where the idea is that you have multiple digital twins interconnected so people can work with other people on the same digital twin at the same time. That's the industrial metaverse, and it's something Roland, our CEO, spoke about. Now, within EDA, we have the idea of putting together an AI information data lake at a company site, where the different tools we provide can share vast amounts of this digital twin and simulation information across tools. This doesn't exist yet; we have information flow, but the data lake is something we're working on, and the opportunity is good, because you can then apply all sorts of machine learning and AI to this vast amount of information, not just within one toolset but across toolsets.
Don Ong: And that's how I would imagine it: when the data comes back from the digital twin of a smart factory, you can apply machine learning, and that will help improve the whole simulation and, ultimately, the design in the EDA domain.
Ron Press: Regarding what we get from our machine learning on the scan diagnoses, and Marc can describe more of this, we'll apply machine learning, and it'll be important to pick the right data set. We're not going to do machine learning on scan diagnoses across two different technology nodes. We're going to pick something relevant that we want to analyze, maybe an excursion lot or excursion wafer where suddenly the yield goes down, and then apply our machine learning. From the results, the software is going to say: this is where we think the root cause problem is, and here's the best die to look at under the microscope to prove what's really going on there. And once they prove this, they can go back and say we need to improve something in our process, or we need to improve one of our physical design rules, because this geometry might be the problem. It might not even be a machine; it might be a geometry in our design.
Marc Hutner: And so there's a trick here about collecting the right amount of data for every part, at every insertion, so you can do this kind of volume ML analysis across lots and wafers. That's what we've been doing at Siemens for more than 10 years with a number of technologies. Part of it is our Yield Insight product line and something we call RCD, with RCAD as the next extension past it. RCAD does cell-level modeling, like Ron was talking about earlier. We look for all these kinds of trends across millions of parts, across millions or tens of millions of test vectors, or a billion lines of tests, collecting and analyzing that and coming up with the best possible candidate to prove. And, by the way, once you've done that, once you've seen it in one or two parts, that's where the digital twin comes in: when we report something as an excursion, as an error, you then trust the virtualization that we do.
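A miniature version of the volume analysis Marc describes: group parts whose failing-test signatures overlap strongly, since a tight cluster hints at a common, systematic root cause rather than random defects. The clustering rule, the threshold, and the part data below are all invented for illustration; production tools use far more sophisticated (and trained) methods.

```python
# Toy unsupervised grouping of per-part failure signatures by
# Jaccard similarity, greedy single-link style.

def cluster_signatures(signatures, threshold=0.5):
    """signatures: {part_id: set of failing tests}. Parts join the first
    cluster containing a part whose signature overlaps >= threshold."""
    clusters = []
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    for part, sig in signatures.items():
        for cluster in clusters:
            if any(jaccard(sig, signatures[p]) >= threshold for p in cluster):
                cluster.append(part)
                break
        else:
            clusters.append([part])
    return clusters

# Hypothetical failing-test sets for three die.
parts = {
    "w1_d3": {"t1", "t2", "t3"},
    "w1_d9": {"t1", "t2"},   # overlaps w1_d3 -> likely same root cause
    "w2_d4": {"t7"},         # isolated random fail
}
```

The two similar die cluster together while the unrelated fail stays alone, which is the signal that would flag a candidate systematic yield limiter for physical confirmation.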
Marc Hutner: One other topic that's probably worth touching on is the data lakes, the different test insertions, and that information flow. There's a really good industry paper, updated every year or two, from the Heterogeneous Integration Roadmap, which talks about that information flow. Advantest, as well as Siemens, is part of that industry-wide collaboration, which covers the local data lakes for wafer test, package test, SLT, and into the field. So if you're interested in that kind of thing, it's a really good reference for how people are thinking about the data across different stages.
Don Ong: We've talked a lot about data going from the smart factory, from the different test insertion data points, back into the EDA digital twin. What's your take on data going the other way, from the EDA digital twin into the smart factory? How can data from the EDA digital twin help a smart factory implementation?
Marc Hutner: On the factory side, it's really more about the excursion signals and when you want to take action. Is this consistently an issue on the edges of a wafer, or on a particular lot or a particular machine? It's about how to boil these signals down to something useful that the factory can act on, versus just a random fault. We're trying to help find those trends and mark the data, so that somebody else at the next level of that smart factory can do something useful with it.
Marc Hutner: There are things we look for within our tools that help our customers, and there are other manufacturing partners looking for other things. For instance, when you're talking about a high-volume manufacturing part, there'll be 20 testers and maybe three or four insertions. It's about understanding that one of those lines had a different issue; maybe it was a little bit warmer because the temperature controller wasn't working well. How you understand those other effects in the system becomes interesting for the smart factory, and that goes back to modeling.
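One concrete form of the "excursion signal" Marc mentions is a wafer-edge check: do failing die concentrate in the outer ring of the wafer versus the interior? The sketch below is entirely illustrative; the edge-band fraction, the 2x ratio threshold, and the coordinates are invented, not values from any real tool.

```python
# Toy edge-excursion flag: compare fail rate in the outer ring of a
# wafer against the interior fail rate.

def edge_excursion(die_results, wafer_radius, edge_band=0.2, ratio=2.0):
    """die_results: list of (x, y, passed). Flags an excursion when the
    fail rate in the outer `edge_band` fraction of the radius is at
    least `ratio` times the interior fail rate."""
    edge_r = wafer_radius * (1 - edge_band)
    zones = {"edge": [0, 0], "core": [0, 0]}  # [fails, total] per zone
    for x, y, passed in die_results:
        zone = "edge" if (x * x + y * y) ** 0.5 >= edge_r else "core"
        zones[zone][1] += 1
        if not passed:
            zones[zone][0] += 1
    rates = {z: (f / t if t else 0.0) for z, (f, t) in zones.items()}
    if rates["core"] == 0:
        return rates["edge"] > 0
    return rates["edge"] >= ratio * rates["core"]
```

A signal boiled down this far ("edge fails are 2x interior") is the kind of marker a factory system can act on without ever seeing the raw scan data.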
Ron Press: I think it's important for the listeners to understand that when we're talking about this vast amount of information about your designs, we're not talking about sharing it with other people or optimizing outside of your location. A lot of the work we do with this data lake is to optimize with AI, to be as smart as we can, but within your site. I've heard people talk about pulling information from all these different sites up to a central place and re-optimizing. That's not the core of what we're trying to do, because we don't want to release information about your designs outside. The optimizations for these digital twins are local. The digital twin is the enabler, so we'll have this digital twin capability, but it'll be local at your site where you're working on it.
Marc Hutner: To Ron's point, the core of the diagnosis engine runs with unsupervised learning. These are algorithms that we've trained and proven to ourselves that they work, but the first step is training on the product that's under test. So those are all very local to the problem you're trying to solve; we're not trying to deploy final trained models for everything. And that's the tricky part: there will be differences in design styles, technology choices, and the library choices of the designs themselves. Everybody's a little bit different, which is why it's not just like a trained voice-recognition model for a very narrow field. We need to make sure that all of our methods are very extendable, and this has been the theme for EDA for a really long time: how can I solve the generalized problem? Just like with ATE, it was: how do I solve the generalized problem for test?
Don Ong: So what is the impact on computational demands? What are the key considerations and potential trade-offs when balancing the need for real-time simulation capabilities in an EDA-integrated digital twin against the computational demands and resource constraints?
Ron Press: That's a really good question! I'll let Marc answer.
Keith Schaub: We have hard questions here.
Marc Hutner: It's an interesting question, and it comes up a lot. This goes back to something I said earlier about what you want to model and what you want to get out of the model, and to using the right level of model so it's not too heavy when you get to the highest level of the factory. What are the signals you want to model, and what analysis do you want to do? You're not going to run scan diagnosis for a chip at the top of a factory; that doesn't make sense, because there are many, many gigabytes per part, and you'll get to petabytes of data once you reach the lot level. So you really want to boil things down to the signals, so that you have the right level of computation to model the things you want to model. You have to think through the problems you're trying to solve, and that gets you to a reasonable computational load.
Marc Hutner: And, by the way, what matters is what the customer really cares about: the time to an answer. It's not really about computational load; it's how quickly you can model something, how quickly you can answer something, and to what resolution you want to model it. In my past lives, when I was more of a chip-level designer, I had to make trade-offs between Verilog, FastSPICE, and SPICE, and it was the exact same trade-off through that stack, where you decide what you want to model at each level. And a fault model, like an ATPG model, is a similar kind of abstraction. You're not using the detailed model of the chip; you're doing a reduction, so that you can model the things you want to simulate quickly and generate tests quickly.
Keith Schaub: How do you see this unfolding in the short or maybe even the medium term?
Ron Press: I think one way to look at it is that as semiconductor design gets more complicated and more complex, EDA has to work at a somewhat higher level of abstraction. What we're doing inside might be more complicated, but we have to make it a little simpler for the user to get their work done on the same schedule or a shorter one. So we're always trying to bring you to a simpler, higher level of abstraction; that's one of the primary goals and one of the things we see moving forward. I mentioned this data lake between the different EDA tools, not just the DFT tools and yield tools, but embedded analytics and other tools. We'll be sharing more information across this data lake, and that's going to give us more opportunity to apply AI. It's not all worked out yet; some of it exists, but there are more opportunities there.
Ron Press: The other is that when we take new methods that are efficient, such as the Streaming Scan Network's packetized data delivery, we can expand them beyond scan test. We can expand them to load all sorts of constructs throughout the design, because it's a high-speed bus that's already available. So we'll be building more capabilities on top of some of these existing features. Internally it might be more advanced, with packetized data delivery, but externally you have some target, and we in software figure out how to deliver it through the methods we already have.
Ron Press: And that goes for 3D designs. They're getting more popular, and there's more complexity in them, but we're using these kinds of structural, simple methods, like SSN and IJTAG, a plug-and-play technology that we already embed, as some of the foundation for how we support 3D. All this information, all the patterns, everything else: it's available, it's modeled, and we're just going to be smart about how we apply it, making it easy for the user to retarget their intent and their patterns to the 3D stack.
Marc Hutner: Taking it one step further into the short term and the medium term, it's really about how Siemens and Advantest are working together. There are examples of this right now with something like SSN, where we're jointly developing test methods to take advantage of that network and really do adaptive test, things like on-chip compare, and decide what to do for data collection. These chips are getting more complex, with more and more cores. There are different voltage rails, and optimization of voltage rails for performance.
Marc Hutner: It's really about how we're capturing this information, how it's used in the test program to do adaptive testing, collecting the right information, and testing in ways that provide ROI. The point about working together is getting these workflows connected, so you can go very quickly from the test program chunks we're developing to actually implementing them in a test program, and reach the customer goal: how do I get to bring-up in the smallest number of days possible? Because that drives time to market and time to ramp, and that's what our joint customers care about. They want the highest possible coverage, so they don't let bad parts out, but they also want the time to ramp to be as short as possible, and to get to that data collection and characterization. That's another area my group is responsible for: silicon bring-up and yield learning. It's all about the data, the structures, and how they get used.
Keith Schaub: I want to thank you, Ron and Marc, for taking us through this exciting journey into the intersection between EDA and digital twins. I, for one, learned a tremendous amount, and I also want to give a nice shout-out to our new co-host, Don Ong. Thank you for joining the show; I'm looking forward to future podcasts with you. And that does it for another episode of Advantest Talks Semi.