Advantest Talks Semi
Dive into the world of semiconductors and Automatic Test Equipment with our educational podcast, Advantest Talks Semi, where we explore the power of knowledge in this dynamic field. Hosted by Keith Schaub, Vice President of Technology and Strategy at Advantest, and Don Ong, Director and Head of Innovation for Advantest's Field Service Business Group, this series features insightful conversations with experts and thought leaders in the industry.
In today's fast-paced environment, continuous learning is essential for staying ahead. Join us in these thought-provoking discussions, where you can learn about the latest trends and cutting-edge strategies being used in the semiconductor industry. Explore how innovative technologies are revolutionizing testing processes and shaping the future.
Stay updated on the ever-evolving semiconductor industry with Advantest Talks Semi, and gain exclusive insights into the future of technology.
The views, information, or opinions expressed during the Advantest Talks Semi series are solely those of the individuals interviewed and do not necessarily represent those of Advantest.
Advantest Talks Semi
Advantest, proteanTecs, and PDF Solutions harness AI power for yield, quality, and reliability
Advantest, proteanTecs, and PDF Solutions are teaming up to develop solutions that improve semiconductor manufacturing yield, reliability, and quality.
Advantest has created a real-time data infrastructure (RTDI), called ACS, that transforms testing. proteanTecs is pioneering deep data analytics for advanced electronics monitoring, while PDF Solutions is a leading provider of comprehensive data and analysis solutions for the semiconductor ecosystem.
Together, they aim to address challenges for common customers, like increasing complexity, shrinking process nodes, and demand for higher quality control in order to minimize cost and expedite time to market. The collaboration leverages AI and machine learning to enable real-time decisions for quality and reliability with an interoperable open ecosystem that is designed to be secure and allows for scalable semantic models, reusable models, and impactful ML ops.
Advantest, proteanTecs, and PDF Solutions have a history of successful collaboration, with proven solutions already in the market, including Dynamic Parametric Test, which is designed for the early detection of anomalies at wafer acceptance test, and a die-to-die solution that collects and analyzes data on the high-speed interconnections between die. All three companies aim to look forward and develop applications that span the different stages of testing, from scribe-line testing to system-level test.
In this podcast, we discuss how AI and machine learning are being used in the semiconductor industry, particularly in the automotive and high-performance computing sectors. We discuss how these technologies can improve quality control, optimize testing processes, and help alleviate supply chain bottlenecks. The guests also emphasize the importance of understanding device performance through measurement and analytics in order to make better decisions about test flow and yield. Optimizing throughput, quality, and yield is seen as a key factor in addressing supply chain challenges.
Finally, we discuss the potential challenges and opportunities for the semiconductor industry as AI technology advances, emphasizing the need to make advanced AI/ML analytical techniques more accessible and easier to integrate into semiconductor operations, particularly test operations. We also discuss the concept of the "mostly good die": a product that operates robustly even though not all of its transistors work perfectly. This concept highlights the importance of understanding how to repair and optimize chip performance in real time.
Don’t miss out on this exciting opportunity to learn more about our collaboration and the latest trends leaders of the semiconductor industry are working on!
Thanks for tuning in to "Advantest Talks Semi"!
If you enjoyed this episode, we'd love to hear from you! Please take a moment to leave a rating on Apple Podcasts. Your feedback helps us improve and reach new listeners.
Don't forget to subscribe and share with your friends. We appreciate your support!
Keith:
Welcome to Advantest Talks Semi, the podcast that immerses you in the captivating realm of semiconductors and uncovers the innovations driving the industry. In this episode, we'll delve into the game-changing impact of silicon and synergies and the omnipresence of AI as they reshape and propel the semiconductor industry forward.
Advantest, proteanTecs, and PDF Solutions are collaborating to develop comprehensive cutting-edge solutions that elevate semiconductor manufacturing yield, reliability, and quality.
Advantest, a leading automatic test equipment manufacturer for the semiconductor industry, is transforming testing with the first ever real-time data infrastructure, RTDI, called ACS.
proteanTecs is pioneering deep data analytics for advanced electronics monitoring, from production test to the field, while PDF Solutions stands as a leading provider of analytics, data management, and software solutions for the semiconductor industry.
Today, I'm honored to be joined by three esteemed experts in the semiconductor industry. We have Ken Butler, Senior Director of Business Development for ACS at Advantest; Vishnu Rajan, Engagement Director at PDF Solutions; and Marc Hutner, Senior Director of Product Marketing at proteanTecs.
Collectively, they are tackling challenges such as increasing chip complexity, shrinking process nodes, and the demand for higher quality control to minimize cost and expedite time to market. Welcome to Advantest Talks Semi. Thank you, guys, for being here. We're talking about silicon and synergies and how AI is reshaping and propelling the industry. Let's kick it off with this: how does the collaboration between Advantest, proteanTecs, and PDF Solutions leverage AI to revolutionize semiconductor manufacturing and testing? Ken, we can start with you.
Ken:
Well, Keith, we have a unique opportunity here, because we have partnered with two different companies that have an excellent ability to apply analytics to real-world situations, particularly in the AI and ML space. And now we can take those analytics and put them in our real-time data infrastructure, so that we can apply them at virtually any point in the manufacturing and test flows and leverage those capabilities to make far better decisions with our products for quality, for reliability, and the other things that you mentioned at the outset.
Keith:
Thank you, Ken! Marc?
Marc:
ML is at the core of what we do. We've coded and designed silicon features with our ML to kind of extract different insights from the silicon, and it could be things like process, or it could be things like data for libraries or even how the chip is operating in the field.
So, there's lots of things that you can learn with ML and we're gonna talk again today about how we can leverage them into test in new and amazing ways.
Keith:
Thank you, Marc! Vishnu, let's go to you.
Vishnu:
Sure. Thanks, Keith. The primary area of overlap between us is really how we can use the massive scales of data that are collected more intelligently with AI and machine learning. This becomes really important moving forward, especially for multi-chip modules where the number of interactions you have is simply just beyond the human scale. Advantest brings the infrastructure for running these complex algorithms on the testers.
proteanTecs brings on-chip sensor data which is becoming increasingly more valuable for leading-edge devices. From the PDF side, we bring 30-plus years of experience in handling semiconductor data and algorithms. So, the combination of those is really the kind of thing that you need to be able to do machine learning on modern devices.
Keith:
Thank you, Vishnu. And speaking of infrastructure, Ken, what key challenges does the RTDI, the real-time data infrastructure, address? And how does it set itself apart from traditional approaches?
Ken:
Thanks, Keith. There are really four areas that I'd want to bring to your attention in answering that question. One is, as the name implies, real-time. We have a streaming data capability so that we can access information very, very quickly, with millisecond-type latencies, in order to make decisions in situ, while the part is still being tested, and be able to change test content, declare die outliers, do those kinds of things all in real time. The second element is security. What we're finding with a lot of our customers, mutual customers, and partners is that as you develop these leading-edge analytics capabilities, they become competitive advantages that you want to protect. And in a disaggregated manufacturing environment, who knows who has access to those kinds of things?
So, you need to be able to communicate data and IP in a very secure way, and RTDI really addresses that need for security; it's built in from the ground up. The third thing I would point out is the open-ecosystem idea: rather than being tied to any one specific supplier, the idea is that we're trying to create an ecosystem. It's like an open-source type of model where everybody can contribute equally, so you can work with the best suppliers. And it's why the three of us can be here today collaborating.
And the final thing is the idea of reusability. We can take applications that, for example, were developed in the context of validation on a bench setup, and we can immediately take that code and reuse it in a production capacity without having to retarget it at the specific hardware it's gonna run on, because of the containerization strategy we use to build these applications. So, those are the four areas that I would highlight.
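For readers who want to see what an in-situ, millisecond-latency decision might look like in practice, here is a minimal illustrative sketch in Python. It is not the ACS API; the class and callback names are invented, and the robust-statistics outlier rule is just one plausible choice.

```python
import statistics

# Hypothetical sketch only: flags a die as an outlier against a running
# baseline while the part is still on the tester, so test content can be
# changed or the die binned out in real time.
class RealTimeOutlierMonitor:
    def __init__(self, window: int = 200, k: float = 4.0):
        self.window = window          # how many recent die form the baseline
        self.k = k                    # robust z-score threshold
        self.history: list[float] = []

    def on_measurement(self, value: float) -> str:
        """Called once per die, in real time, with one parametric result."""
        if len(self.history) >= 30:   # wait for a minimal baseline first
            med = statistics.median(self.history)
            mad = statistics.median(abs(x - med) for x in self.history) or 1e-9
            if abs(value - med) / (1.4826 * mad) > self.k:
                return "OUTLIER"      # e.g., trigger extra tests or re-bin
        self.history.append(value)
        self.history = self.history[-self.window:]
        return "PASS"

monitor = RealTimeOutlierMonitor()
readings = [1.02, 0.98, 1.01, 0.99] * 10 + [1.75]   # last die is anomalous
for die, iddq in enumerate(readings):
    print(die, monitor.on_measurement(iddq))
```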
Keith:
Thank you, Ken. Marc?
Marc:
Thanks, Keith. That's a really good question, and I'm gonna pick up on a couple of the themes that Ken talked about, in terms of real-time, with proteanTecs and our deep data analytics solutions.
It all starts within the silicon, where we have five agent types, and within those there are many, many more agents. What an agent is, really, is a parametric measurement, taken all across the die. And how do you use that for test? And how do you use that for reliability and efficiency? If you take an application like optimizing the voltage operating point: are you optimizing for the minimum voltage for minimum power? Or are you gonna optimize the combination of frequency with power?
And by embedding the measurement into an ML framework, we can then do those tradeoffs and create those insights, which could then be used in fusing the part, or in the field, or at other test steps. So, there's a lot we can do because we've taken these measurements and done something more with them, with a predictive model, and it's all reusable. You could also use it at different test steps, which is something Ken talked about too: we can reuse the same models in different places, on the infrastructure that Advantest has put forward. We've designed these things around our partnerships and how to use each other's capabilities.
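As a concrete, and entirely hypothetical, illustration of embedding measurements in a predictive model, the sketch below fits a simple least-squares model that maps invented on-die agent readings to a per-die Vmin and adds a guardband; proteanTecs' actual agents and models are, of course, far richer than a linear fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Invented agent readings per die: [ring-osc speed, leakage, timing margin]
agents = rng.normal(loc=[1.0, 0.5, 0.8], scale=0.05, size=(n, 3))
# Synthetic "true" Vmin: slower silicon and tighter margins need more voltage
vmin = (0.60 - 0.20 * (agents[:, 0] - 1.0)
        + 0.15 * (0.8 - agents[:, 2])
        + rng.normal(0, 0.005, n))

# Least-squares fit: Vmin ~= X @ w, with an intercept column
X = np.column_stack([np.ones(n), agents])
w, *_ = np.linalg.lstsq(X, vmin, rcond=None)

def fused_vdd(agent_reading: np.ndarray, guardband: float = 0.02) -> float:
    """Predict per-die Vmin from agent readings, plus a safety guardband."""
    x = np.concatenate([[1.0], agent_reading])
    return float(x @ w) + guardband

# The prediction could drive fusing at test, or be re-checked in the field
print(f"Fused VDD for die 0: {fused_vdd(agents[0]):.3f} V")
```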
Keith:
Thanks, Marc. There's a tremendous amount of new data and new data sources being integrated into the RTDI and feeding these algorithms.
Vishnu, how do the analytics, data management, and software provided by PDF Solutions contribute to elevating yield, reliability, and quality in the semiconductor manufacturing process?
Vishnu:
Thank you for the question, Keith, it's a great question.
A lot of the value in doing machine learning is to ultimately drive an earlier decision.
In other words, don't wait until final test, at package test, to find out something has gone bad; you want to know that earlier, upstream. What that really means, often, is that you need to be able to ingest a lot of different data types from a lot of different places. Exensio has what we call the scalable semantic model, which allows us to ingest and align end-to-end production data at high volumes so that you can then use that data in a machine-learning model, as Ken described. But if you want to have that data available in real time, you need an infrastructure that can do that, and that's what Exensio provides. In addition to that, there's also what we call the ML ops: the back-end infrastructure that allows you to build, deploy, and monitor your models. Do I need to retrain it? All those kinds of things.
Those are all the kinds of things that we know are necessary for production-scale machine learning. And that's what we bring with Exensio.
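One small, generic example of the monitoring half of ML ops (this is not the Exensio API; the drift metric and the 0.2 threshold are common rules of thumb): watch a deployed model's input population for drift and flag when retraining is likely needed.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training-time and live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e, _ = np.histogram(expected, bins=edges)
    a, _ = np.histogram(actual, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)   # avoid log(0)
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
train_feature = rng.normal(0.0, 1.0, 10_000)  # what the model was built on
live_feature = rng.normal(0.8, 1.0, 2_000)    # what production now looks like

score = psi(train_feature, live_feature)
# Common rule of thumb: PSI > 0.2 means the population has shifted enough
# that the model should be retrained on fresh data.
print(f"PSI = {score:.3f} ->", "retrain" if score > 0.2 else "model still OK")
```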
Keith:
Thank you, Vishnu. Let's move over and discuss some of the unique challenges posed by shrinking process nodes. And how does your collaboration address these issues? Marc, let's start with you.
Marc:
That's a very deep question, which we can take in many, many different directions. But let's start with on-chip variation.
And then how to understand it, how to use it to your advantage, and how to compare it even to pre-silicon, where you've done a bunch of simulations. One of the things that we can do is process classification: we've placed monitors across the die, and we can create models that take those readings and compare them to other applications. And it can be used at a very early stage, like Vishnu mentioned. We can understand the variation across a die, and guess what: as new process nodes come online, there's a lot more of that variation. So how do I understand its impact on the applications the chip is used in?
So, there are lots of different things that you need to consider. But I think what the models can tell us, and how the chips can be used, will be more and more important. And in that light, I think there's other test measurement data that can come into play, and this is where PDF Solutions comes in. So, we'll have additional data sources that can be used. And I also expect that, as you compare that to how the tester is operating and how the test flow is operating, all of these pieces of data will become even more important. We'll be able to optimize test time and optimize test applications going forward, because we'll be able to learn a lot more.
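A toy version of the process-classification idea Marc describes might look like the following; the corner signatures, monitor counts, and data are all invented for illustration.

```python
import numpy as np

# Invented corner signatures: expected [NMOS speed, PMOS speed] per corner,
# as pre-silicon corner simulations might predict them
CORNERS = {"SS": np.array([0.90, 0.90]), "TT": np.array([1.00, 1.00]),
           "FF": np.array([1.10, 1.10]), "SF": np.array([0.92, 1.08]),
           "FS": np.array([1.08, 0.92])}

def classify_die(monitor_readings: np.ndarray) -> str:
    """Nearest-corner classification of on-die monitor readings."""
    mean_sig = monitor_readings.mean(axis=0)   # average across die locations
    return min(CORNERS, key=lambda c: float(np.linalg.norm(mean_sig - CORNERS[c])))

rng = np.random.default_rng(4)
# 16 monitors across one die, sitting near the fast-NMOS / slow-PMOS corner
die_monitors = rng.normal([1.07, 0.93], 0.02, size=(16, 2))
print("classified corner:", classify_die(die_monitors))
# The across-die spread itself is informative: it grows at newer nodes
print("on-die variation (std):", die_monitors.std(axis=0).round(3))
```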
Keith:
Thanks, Marc! Ken, what's your Advantest perspective?
Ken:
Well, I really resonated a lot with what Marc said about variation and increasing variation. And I think what I would add to that list (Marc and I are both involved with the heterogeneous integration roadmap) is the idea that as we're integrating chiplets into large, diverse die, the access mechanisms that we have are becoming increasingly limited. All of this boils down to increasingly subtle defect mechanisms that are harder and harder to detect: are they even visible at time zero, or is it more of a reliability mechanism that's gonna fail over time with stress, or is it one of these rare instances, like the silent data corruption we talked about a lot at the International Test Conference last year, that are rare events and hard to find? So, we need all the tricks we can come up with, including embedding agents in the part and really advanced analytics, in order to process the data and find these things. And that's what we're collaboratively doing, pulling in all the right data sources that PDF provides to us. If we put all those things together, we can do a much better job of tracking down and detecting these very subtle defect mechanisms that we might otherwise miss using conventional methods.
Keith:
Thanks, Ken, Vishnu. What are you seeing at PDF?
Vishnu:
Thanks a lot, Ken. I think the observation from our side is that, as you noted, chip complexity and data volumes have all gone up, and the mechanisms you're trying to find are increasingly subtle. The net of that is that the conventional rule-based methods, while they might be necessary for certain industries (say, for automotive, where you have to run DPAT, Dynamic Part Average Testing), may not be sufficient, and you need to look beyond them: "Wait a minute, even though I ran this set of standard quality rules, why am I still getting customer returns?" What is that next level of failures underneath, that I'm not capturing with my conventional methods? I think that's really where the three of us working together can attack those kinds of problems and, as Marc mentioned, not only improve throughput during test but also end up with improved outgoing quality.
Keith:
Thank you, Vishnu. Ken, from these collaborative efforts what are some of the success stories that you can share with the audience?
Ken:
OK, a couple of examples come to mind very quickly. With PDF Solutions, we have collaboratively developed a solution we call Dynamic Parametric Test, or "DPT."
And the idea here is that this is done in the fab at wafer acceptance test (sometimes you might hear it called e-test or parametric test), where you're applying these tests as material is moving through the fab. Maverick material might show up, and you want to catch it right then and there. But oftentimes what happens today is that the material escapes, and then, once you've detected it after the fact, you have to go grab the wafers, put them back on the chuck, and collect additional information. With Dynamic Parametric Test, working with PDF Solutions, we can now instantiate a set of rules, and those can apply to every device in a particular process technology or every set of programs in a particular family of devices.
And then you can apply these rules that say: if I detect a particular anomalous condition, I'm gonna respond in the following way. And the following way could be: I'm gonna collect additional data, I'm gonna shmoo across voltage, I'm gonna hit additional sites around the die. That way, I've got all the information right then and there to send back to the people who are gonna do a root-cause analysis, so you can eliminate the anomaly before it really impacts a lot of material. We've had one customer that's really doing a lot with this particular technology, and I'm sure it's going to take off with others as well. It's a very exciting technology that our customers are really interested in; they're actually asking us to move it not only from parametric test but also to probe, since a similar kind of idea applies at probe. With proteanTecs, Marc and his colleagues have done a lot of work, and he has talked a lot about this die-to-die solution that we've been working on.
And the idea here, and Marc can explain it better than I can (and I'm sure Vishnu can explain his side), is basically that you have these very high-speed interconnections between die, say, in a chiplet-based design, and you want to be able to go and collect a lot of information about the health of that particular link.
And I might use that in terms of computing what I'm gonna do for spare lanes, or diagnosing performance by looking at eye diagrams, those kinds of things. And this is something we can collaborate on, with their capabilities running in our environment, in order to really do a great job of collecting this kind of information, reacting to it, and using it.
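To make the Dynamic Parametric Test idea Ken describes above concrete, here is a hypothetical sketch of the rule mechanism: rules watch parametric results as they stream in, and an anomalous reading immediately triggers extra data collection while the wafer is still on the chuck. The rule names, thresholds, and actions are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DPTRule:
    name: str
    condition: Callable[[dict], bool]   # inspects one parametric result
    actions: list[str]                  # what to do immediately if it fires

RULES = [
    DPTRule(
        name="high_contact_resistance",
        condition=lambda m: m["r_contact_ohm"] > 25.0,
        actions=["shmoo_vdd 0.6V..1.0V", "measure_additional_sites 8"],
    ),
    DPTRule(
        name="leakage_excursion",
        condition=lambda m: m["i_leak_na"] > 100.0,
        actions=["collect_iv_curve", "flag_lot_for_root_cause"],
    ),
]

def on_parametric_result(measurement: dict) -> list[str]:
    """Evaluate every rule against one e-test result; return triggered actions."""
    triggered = []
    for rule in RULES:
        if rule.condition(measurement):
            print(f"anomaly '{rule.name}' on site {measurement['site']}")
            triggered.extend(rule.actions)
    return triggered

# One maverick site: the rules respond while the wafer is still on the prober
print(on_parametric_result({"site": 3, "r_contact_ohm": 31.2, "i_leak_na": 12.0}))
```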
Keith:
Thank you, Ken. Marc, do you want to elaborate on that?
Marc:
Yeah, the die-to-die interface is really an area where there wasn't any access before. You have two chiplets that are talking to each other, and now you have the ability to take data from one side or both sides and then look for things like outliers, or even a test coverage problem where you might not have tested those spare lanes. So, we give you information you didn't have before, and then we're applying these algorithms to look for outliers. And we can also do outlier detection on timing margin and path leakage, where we're combining all of these measurements in wonderful ways.
Another example where our models are running on the tester: we've taken test data from a later step and brought it back into wafer test for things like silicon grading, to get performance information. So, there are lots of ways you can combine it and then use the compute capacity of the tester to get better information. I think we would be remiss not to mention that ease of deployment on the tester is also a really important aspect. What we've noticed with the infrastructure we've built jointly is that it can be up in hours; it's not weeks of coding. So, there's a real benefit to having standardized the infrastructure pieces and how they get deployed.
Keith:
So, Vishnu, let's go over to you! Comments?
Vishnu:
Ken mentioned Dynamic Parametric Test, which is one of the things we've worked on with Advantest, and as Ken noted, it's deployed out in production now. One of the other really interesting things about working with a company like Advantest is the fact that Advantest does scribe-line testing, wafer probe testing, functional testing, and system-level test. So, looking forward, you can begin to look at applications that go from one test insertion to another. And as Ken mentioned about moving DPT forward into wafer probe, this is the kind of activity we want to continue to foster and work toward, and we expect to do more of it in the coming years.
Keith:
So, Marc, can we come back to the outliers? I'm really curious as to what types of outliers or what sort of things you are seeing?
Marc:
So, in terms of the kinds of outliers: if you look at a die-to-die interface, something like an HBM interface where there are, say, 1,000 pins or so, is it just a single pin that's out, or is it a group of pins that fail together because they're grouped? Is it only happening on a subset of the parts that are produced? We can start to see those trends, and because we're looking at it as a per-die, per-interface analysis, we can start to look for those different kinds of groupings.
You know, is it a layout problem that's leading to that as well? So, there are lots of ways that, by taking the measurement, you can come to some level of understanding of what the design problem is and what the manufacturing problem is.
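An illustrative sketch of that per-die, per-interface lane analysis (synthetic data; thresholds invented): flag weak lanes robustly, then group adjacent ones, since a clustered run hints at a layout or manufacturing cause while a lone lane looks more like a point defect.

```python
import numpy as np

rng = np.random.default_rng(2)
lane_margin = rng.normal(0.35, 0.02, 1000)   # per-lane eye margin, synthetic
lane_margin[412] = 0.18                      # a lone weak lane
lane_margin[700:708] -= 0.15                 # a clustered run of weak lanes

# Robust per-lane outlier screen (median/MAD instead of mean/sigma)
med = np.median(lane_margin)
mad = np.median(np.abs(lane_margin - med))
z = (lane_margin - med) / (1.4826 * mad)
weak = np.where(z < -4.0)[0]

# Group adjacent weak lanes: a run of neighbors hints at a systematic cause
groups, current = [], [int(weak[0])]
for lane in weak[1:]:
    lane = int(lane)
    if lane == current[-1] + 1:
        current.append(lane)
    else:
        groups.append(current)
        current = [lane]
groups.append(current)

for g in groups:
    kind = "single-lane outlier" if len(g) == 1 else f"grouped run of {len(g)} lanes"
    print(f"lanes {g[0]}..{g[-1]}: {kind}")
```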
Keith:
Great. Thanks, Marc. The next question I have is around high-performance compute (HPC) and automotive. These are two of the fastest-growing segments in our industry, and also two of the segments that benefit the most from AI and machine learning. I'd like to understand why that is. And can you provide some examples of how it's being used in those segments? Vishnu, can you provide some insights there?
Vishnu:
Absolutely. So let me talk a little bit about automotive and, in my view, the very closely related medical segment. These are devices where outgoing quality is of the utmost importance, so being able to make a real-time decision to ensure that outgoing quality can be enormously valuable. Let me give an example: let's say you are testing some parts and you start to see something that maybe doesn't look so good. Maybe this lot or this wafer looks a little off.
You know what I'd really like to be able to do in real time? I want to turn on more tests and make sure that the rest of that wafer, or the rest of that lot, is okay. Being able to determine what is good and bad is one place where machine learning can come into play, and the other piece is having the infrastructure to make that decision in real time. So, these are the kinds of applications and use cases that we're readying, some of which are available today.
Keith:
Vishnu. That was great. Marc, over to you.
Marc:
I want to pick up on something that Vishnu mentioned. It's also about the different design features that are in the silicon. So, how can I make a better decision about when to run more tests, or about why those tests might be failing?
So, things like incorporating voltage and temperature monitors, comparing that to how the test runs, and then saying, hey, maybe we're on the hairy edge of things working, and then using that to change the test flow, become very interesting. I think there are some more details like this in the Advantest/proteanTecs white paper about how to use the edge computer going forward. So, it's a great resource to go look at as well.
Keith:
Marc, that was great. Ken, over to you.
Ken:
Both these guys said it well. Vishnu covered the automotive space very well: the need for quality in terms of parts-per-billion quality levels, the zero-defect mentality. Earlier in the podcast, he mentioned DPAT, which is the standard sort of algorithm that's been around forever, the one automotive companies tend to want their IC suppliers to run.
Earlier in my career, when we were looking at that question, what we found was that that algorithm, while still well accepted in the industry, is fairly crude, in that it takes a lot of overkill in order to catch all the defects. What you need is a better scalpel, to surgically remove the defects without huge amounts of overkill. So we moved away from DPAT and toward something called location averaging, which was a somewhat more sophisticated statistical capability for processing the data and locating the defects without a huge amount of overkill. And now (that was many years ago) we need to move forward to the next generation, where we're gonna move to even more sophisticated algorithms with AI and ML that get better and better at ferreting out the defects and finding the ones we were missing with simple univariate DPAT and location-averaging type approaches.
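The contrast Ken draws can be shown in a few lines. In the synthetic wafer below, a slow across-wafer drift inflates the global sigma, so DPAT-style dynamic limits (here, mean plus or minus 6 sigma) miss a subtle local defect, while a location-averaging residual against neighboring die catches it; the data and thresholds are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:20, 0:20]
gradient = 0.08 * (xx + yy)                 # slow across-wafer parametric drift
wafer = 10.0 + gradient + rng.normal(0, 0.10, (20, 20))
wafer[10, 10] += 0.8                        # subtle local defect riding the drift

# --- DPAT-style dynamic limits computed over the whole wafer ---
mu, sigma = wafer.mean(), wafer.std()
dpat_fail = np.abs(wafer - mu) > 6 * sigma  # drift inflates sigma, masking it

# --- Location averaging: each die vs. the mean of its neighbors ---
def neighbor_residual(w, r, c):
    block = w[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
    return w[r, c] - (block.sum() - w[r, c]) / (block.size - 1)

resid = np.array([[neighbor_residual(wafer, r, c)
                   for c in range(20)] for r in range(20)])
robust_sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
loc_fail = np.abs(resid - np.median(resid)) > 4 * robust_sigma

print("DPAT flags defect die:       ", bool(dpat_fail[10, 10]))   # False
print("Location averaging flags it: ", bool(loc_fail[10, 10]))    # True
```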
And for high-performance compute, I would say that cost containment is probably their biggest issue. They have more and more test content that they have to apply to the device. No matter how big and beefy we build the testers, you probably cannot fit all the test content that they want to use.
So, then the question comes down to: how do you optimize the test flow, using analytics, to make sure that every die gets exactly the tests it needs? Die "X" needs this list of tests, so it gets this list of tests; die "Y" needs that list of tests, so it gets a different list of tests. We have to customize the testing on the fly in order to simultaneously improve quality, accelerate time to market, and keep costs at a reasonable level. That's how we're collectively trying to help the HPC community.
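A toy sketch of that per-die flow selection (test names, risk scores, and thresholds all invented): a predicted defect-risk score, which in practice might come from upstream data or on-die monitors, maps each die to the test list it actually needs.

```python
BASE_TESTS = ["continuity", "scan_stuck_at", "functional_min"]
EXTENDED_TESTS = BASE_TESTS + ["scan_transition", "memory_bist_full", "hv_stress"]

def test_list_for_die(risk_score: float) -> list[str]:
    """Map a predicted defect-risk score in [0, 1] to a test flow."""
    if risk_score < 0.2:
        return BASE_TESTS                    # healthy: short flow, lower cost
    if risk_score < 0.6:
        return EXTENDED_TESTS                # suspect: full structural content
    return EXTENDED_TESTS + ["shmoo_vdd"]    # high risk: characterize further

# Die "X" and die "Y" get different content, decided on the fly
for die, score in [("X", 0.05), ("Y", 0.45), ("Z", 0.81)]:
    print(f"die {die}: {test_list_for_die(score)}")
```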
Keith:
Thank you, Ken. I heard the word "optimizing" quite often, and I'd like to ask, in the context of the ongoing global chip shortage: how can these collaborative solutions help alleviate supply chain bottlenecks to meet the growing demand for semiconductor chips?
Ken:
Well, Keith, what I would say is that if you have supply chain shortages and you can't build up as much material, then every die lost is a serious outcome. So, you want to maximize the screening capability and minimize the overkill. You have to use more sophisticated test methods to make sure that every die you can find is a good die, one you can actually prove good and sell, and that you don't kill the ones you might possibly be able to sell, even if it's a bin-2 type scenario where it's a diminished device. That takes a lot of intelligence in the test process.
Keith:
Thank you, Ken. Marc, over to you.
Marc:
So, it's all about device learning and understanding your margins, and this is where going into the die and having different features pays off, because then you can break things down in a lot of ways and get higher yield by understanding the different effects. It goes to something Ken was saying about overkill. Normally, you would set your limits so conservatively that you could guarantee operation. But by doing these kinds of measurements and bringing in the analytics, you can actually improve your yield by saying: I understand these various parameters and how the die is working.
And that could then help you decide whether it's better to apply a more stringent limit for a high-reliability application, or whether you can guarantee a performance window by breaking down the margins in different ways. But the only way you can get there is by having these extra features and really understanding how the die is performing.
Now, the other thing about doing it that way is that you can then take it into the field, using those same measurement techniques to understand the health of the silicon there as well. The reason we had higher margins to start with was that we wanted to guarantee a product life. If you can instead deploy applications in the field and change how the part operates over time, you achieve the same goal without the very high margins at the beginning: you understand its operation and do a per-die assessment, versus the stringent statistical yield limits we used to provide.
Keith:
Thanks Marc. An excellent insight. Vishnu?
Vishnu:
The way I would look at this is in terms of making the best use of your available test capacity. Think of the three things you're typically trying to optimize: throughput, quality, and yield. Those are the common three, and along with throughput goes cost. What you're trying to do there is essentially an optimization problem, which is perfect for a multivariate type of analysis where you say: balance these three things; if you're gonna turn certain tests off, this is what I'm optimizing for, this is my available test capacity, and so on. So, this is where the advanced analytics, and being able to make those decisions, really comes into play.
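As a tiny, invented illustration of that tradeoff, the sketch below greedily keeps the tests with the best expected-coverage-per-second until a per-die test-time budget runs out; a production version would be a genuine multivariate optimization, not a greedy pass.

```python
tests = [  # (name, test time in seconds, expected DPPM caught if kept)
    ("scan_stuck_at", 2.0, 900),
    ("scan_transition", 5.0, 650),
    ("memory_bist", 3.0, 400),
    ("functional_hi_volt", 8.0, 300),
    ("iddq", 1.0, 250),
]
budget_s = 9.0  # per-die test-time capacity

# Greedy: keep the tests with the best coverage-per-second until time runs out
kept, time_used = [], 0.0
for name, t, dppm in sorted(tests, key=lambda x: x[2] / x[1], reverse=True):
    if time_used + t <= budget_s:
        kept.append(name)
        time_used += t

print(f"kept {kept} using {time_used:.1f}s of {budget_s}s")
print(f"coverage kept: {sum(d for n, t, d in tests if n in kept)} DPPM")
```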
Keith:
All right, let's move on then and talk about what's coming in the future. As AI continues to advance, what potential challenges do you foresee in the semiconductor industry? And how will your company adapt and innovate to stay ahead of the curve? Vishnu?
Vishnu:
Thanks, Keith. Really great question. We're definitely at a pretty exciting time in semiconductor analytics. Over recent years, data volumes have gone up a lot and chip complexity has gone up a lot, and I think for the most part the data management systems have now mostly caught up to that: we can handle the data volumes and so forth. But what's next is what we do with all of that data. We're now at a really ripe point to drive a lot of advanced AI/ML analytical techniques that can make use of all that data and drive real-time decisions increasingly upstream. That's really where PDF Solutions is focused in terms of test operations.
Keith:
Vishnu, that was great. Ken, over to you.
Ken:
This is a great question. I've read some articles lately whose gist is that, in smart manufacturing, perhaps our colleagues in the wafer fabs have been a bit more aggressive in adopting some of these newer technologies than what we're seeing in semiconductor test; test has been perhaps a little bit more conservative. In my past experience, a lot of people are hesitant to take on a technology when they don't feel they fully understand it. And I think the challenge for us, and not just Advantest but all the companies, is to work through all the difficult integration problems and to simplify the application of these things, to make it relatively easy to incorporate these kinds of ideas into the test operation and break down these barriers a little bit. Vishnu mentioned ML ops, and I think ML ops is a big part of this: building a model, training a model, keeping it trained up. We have to make that simple enough that virtually anybody can do it. If we start to break down these barriers and simplify the mechanisms you use to deploy the technology, then I think we'll see the adoption rate go up. And it needs to, because we really do need these kinds of technologies in order to cope with the technologies coming at us, as we've been talking about for the last 30 minutes or so.
Keith:
Thank you, Ken. Couldn't agree more. Marc?
Marc:
Yes, for sure. This is a topic that Ken and I work on in the HIR roadmap, and it comes back to a theme he's been talking about: traditionally, test has been about known-good die against a set of known parameters, treated as a per-parameter, per-measurement thing. One of the things we discuss in the HIR roadmap is the "mostly good die" going forward. It's gonna be very interesting what "good" means, what good operation means going forward, and being able to combine all of these tests into a better operation metric. I think that's something we're really focused on. We've combined a lot of measurements in interesting ways, and I think there are a lot more ways that would end up optimizing test time or optimizing coverage. So, I think we're gonna have more complex models. And in that is a theme both of them were talking about, which is really the trust in using these models and their impact on your products. There are lots of good debates we're having about the use of ML technology going forward, and I think we're gonna end up breaking down those barriers thanks to our various approaches combining within a tester.
Keith:
Thank you, Marc! And to follow up on that, you mentioned "mostly good die." Can you describe or explain what that means? Mostly good die versus good die?
Marc:
To the customer, it will seem like a good die. But think about it in terms of memories today: how many memories today are perfect? Zero. They all require repair.
So, from a customer perspective, it's really about being resilient enough that the applications running on those die are really robust. It's about being able to operate robustly, as in the case of memory, regardless of whether all the transistors are working. So, it's about understanding those conditions and then being able to repair around them.
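A toy illustration of the "mostly good die" idea, using the memory-repair example Marc gives: a row that fails test is remapped to a spare, so the device behaves as a good die even though not every transistor works. Real repair (for example, efuse-programmed row/column redundancy) is far more involved; this only shows the concept.

```python
class RepairableMemory:
    def __init__(self, rows: int, spare_rows: int):
        self.data = [[0] * 8 for _ in range(rows + spare_rows)]
        self.remap: dict[int, int] = {}      # failing row -> spare row
        self.next_spare = rows
        self.limit = rows + spare_rows

    def repair_row(self, bad_row: int) -> bool:
        """Map a row that failed test to a spare; False if spares exhausted."""
        if self.next_spare >= self.limit:
            return False
        self.remap[bad_row] = self.next_spare
        self.next_spare += 1
        return True

    def write(self, row: int, col: int, v: int):
        self.data[self.remap.get(row, row)][col] = v

    def read(self, row: int, col: int) -> int:
        return self.data[self.remap.get(row, row)][col]

mem = RepairableMemory(rows=1024, spare_rows=8)
assert mem.repair_row(41)        # test found row 41 bad; steer it to a spare
mem.write(41, 3, 7)              # the customer never sees the difference
print(mem.read(41, 3))           # -> 7, served from the spare row
```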
Keith:
Thank you, Marc. And Ken, we can start with you: with Semicon West just around the corner, can you give us a sneak peek at the new innovations that will be on display at the show?
Ken:
Well, Keith, as you know, Advantest always has a booth at Semicon West. We'll be talking about ACS and the real-time data infrastructure there, as we discussed at the beginning of our podcast. There is also the Test Vision event that is co-located with Semicon West, where we'll talk about several new innovations and things that we're working on. So, we're excited at the opportunity to participate in that particular event.
Keith:
Thank you, Ken. Marc?
Marc:
For proteanTecs, we're participating not only in Semicon West and, hopefully, Test Vision; we're also at DAC, where, as in previous years, we'll be demonstrating physical devices as part of our booth, and we're looking forward to demonstrating partner-based solutions at the upcoming event.
Keith:
Thank you, Marc. Vishnu, let's go to you.
Vishnu:
Thanks, Keith. Indeed, we will be at Semicon West. Our plan there is really to meet with customers and discuss our solutions around three areas: test operations, in-fab analytics, and also our Cimetrix products, which go beyond test, out to equipment manufacturers.
Keith:
Thank you, everyone. We're going to have to leave it there. I'd like to thank my esteemed guests: Ken Butler from Advantest, Marc Hutner from proteanTecs, and Vishnu Rajan from PDF Solutions. See everyone next time on Advantest Talks Semi.