Episode 4 features two experts from PDF Solutions – Vishnu Rajan, engagement director, and David Park, vice president of Marketing – who discuss big data’s impact on the semiconductor supply chain and what’s needed to tear down the inefficient data silos and move toward a fully connected data ecosystem.
KEITH SCHAUB: I’m Keith Schaub, Vice President of Technology and Strategy at Advantest, and you're listening to Advantest Talks Semi. The semiconductor supply chain is extremely complex, yet the quality and reliability requirements continue to advance. Luckily, the digital transformation that is occurring as part of Industry 4.0 is generating massive amounts of big data, enabling globally scalable and sustainable supply chain optimizations. However, unlike a continuous streaming river of data, much of the semiconductor data is highly fragmented into disjointed data lakes. Today we have experts from PDF Solutions – Vishnu Rajan, engagement director, and David Park, vice president of Marketing – to talk about what's needed to tear down these data silos. Welcome, Vishnu and David. Supply chain shortages: why do we have them today so badly? What’s different about today’s landscape? David?
DAVID PARK: Hey Keith. Well, thanks for having us on the podcast today. I think everything right now seems to trace back to the COVID-19 pandemic. By forcing everyone to go home, it appears to have throttled a lot of the supply chain in terms of the manufacturing throughput that has historically been available. It made it more difficult, more challenging, for companies to continue manufacturing and packaging chips in the volumes they had historically been able to do. And then there were compounding factors: with everyone moving to work from home, you had a lot of people start to consume electronic products and devices in much greater quantities, and that caused a big change in consumption. It's obviously caused a lot of problems in other market segments like automotive as well. And it will probably take a while for the industry to catch back up to where things are back to a more normal state.
KEITH SCHAUB: I've seen in the news that six months ago or so, most of the predictions were that we'd be coming out of this by the end of the year. But the latest reports I've been reading show that it could be sometime in 2022 before everything really balances out.
DAVID PARK: There's definitely a cascading effect to everything that's been going on and it will take a while to I think stabilize and then get back to more normal levels.
KEITH SCHAUB: Okay, let's talk about some of these electronics. In a recent New York Times article, it was reported that Apple has sold more than one billion iPhones, making the iPhone its most profitable and bestselling product. What's interesting is that more than 200 chip suppliers around the world supply chips and other components into the iPhone. There's a website called iFixit – not affiliated with Advantest or PDF Solutions – that reports all the different chips and manufacturers that supply into it, and I thought it would behoove us to start there and then dive down into the actual manufacturing of the chips. So just to read off some of the chips and suppliers: there's of course the A14 Bionic SoC, which stands for system on a chip. There's memory from Micron and Kioxia; there are LTE transceivers, modems and 5G from Qualcomm; wireless charging from ST Micro. And then there are all the sensors, like Bosch accelerometers, so that when you're standing on the street corner looking at your maps and you move around, the map updates live. The list goes on, and I apologize to any supplier out there if I didn't mention you specifically. But the point is, you can imagine the complexity involved in having to line all of these chips up to merge into a complicated smartphone for the end customer. So I thought what we could do is talk through that process, because that's where Advantest and PDF Solutions start to play a much bigger role in the supply chain. Vishnu, my question is, can you tell us a little bit about front-end processes, from the front end into the test life cycle?
VISHNU RAJAN: Absolutely, thank you, Keith. For many chips these days, the fabrication process, as you noted, is complex. Often it takes several hundred process steps simply to get to the point where the chip can be initially tested. The front-end fabrication process can take several months depending on the complexity of the chip and how many process steps are involved. Then once it reaches the point of being tested, you do your validation: hey, did we get what we expected? Is it meeting the performance that we expect from the part? Is the yield sufficient? Basically, is the chip overall healthy? After that, there is a whole packaging process. So you've tested the chip and said, okay, it meets the criteria I need; now you go into a packaging flow to actually make it into something that can be put into an iPhone, and that's a whole other flow. And after that there is an additional testing step, typically called final test, where the packaged part is tested to make sure that, before it actually ships to the end customer – say Apple, or whoever it might be – the part meets the necessary criteria to be shipped.
KEITH SCHAUB: I understand that on the front-end piece of it, it's got to go through 200 to 300 process steps, and I also understand it gets measured at various steps and a lot of data gets collected before it moves on through the process. The same happens with test: generally, as it comes out, we have to test that wafer, and we generate a bunch of data. Like you said, some of the parts will pass, some will fail; the passing parts will move on and get assembled, typically into a package or multi-chip package. And of course we have to test them again, and this is a recurring theme. We assemble, we test, and this assemble-and-test process has multiple steps – that's what we call the product test life cycle. There's a ton of data that gets generated all the way from the front end throughout the product test life cycle, and I'd like for you to talk some about that data.
VISHNU RAJAN: I think one thing that's definitely different between now and, say, 10 or 15 years back is the sheer volume of data that we’re often managing these days. Ten or 15 years back, if you had say 10 or 20 million rows of data, you quite literally may not have been able to analyze it – the infrastructure may not have existed to handle that volume of data. Nowadays, however, it's a very different scenario. Systems actually exist to handle really large volumes of data, so you don't necessarily need to pick and choose – hey, I only want to collect this data or that data. As an engineer, or as someone managing the product, you can say, hey, I'm going to run a lot of testing to the extent that I have capacity, give me all the data, and I will analyze it to find out what things I may need to go and adjust.
DAVID PARK: To add on to what Vishnu said – what he says about the volume and velocity of the data is absolutely true – one of the other underlying things that allowed us to have these, for lack of a better word, data silos in the past is that we had very discrete steps that were mutually independent. We've gone from monolithic integrated circuit chips, where everything was in these nice hermetically sealed packages, to chiplets that get aggregated together in a larger package or substrate for a system-in-package or a multi-chip module. The other reason a lot of the things Vishnu talked about are so important is that in the past, the typical output was something like a printed circuit board. You could test the printed circuit board, and if a chip was bad, you simply desoldered it, stuck in a new chip, retested it, and you were done. You can't do that in a system-in-package or multi-chip module. So the ability to make adjustments, as Vishnu talked about, is very important, and that means you have to have the ability to move data across silos so that you can get data to the right place at the right time for the people who need it.
KEITH SCHAUB: What I'd like to better understand is how this business model plays a role in the data silos. It would be good if we could explain a little bit about the business model of an IDM versus a Fabless Model and the differences of an OSAT versus a foundry. So maybe Vishnu, if you could help us understand that a little bit better.
VISHNU RAJAN: Sure. What many folks may be more traditionally familiar with, like you mentioned, Keith, is called the IDM model. This is where one manufacturer does everything from the design, through the manufacture, through the testing, through the packaging, and out comes a finished part. An example that many folks will probably be familiar with is Intel. Intel does everything effectively in house: everything from the design to the manufacturing – they have their own manufacturing facilities – they do their own testing, and out comes a chip that you can buy from wherever you purchase a computer chip. A different model is the fabless model, where one company provides a design. So if you think of the entire chip production as design, manufacture, testing, then instead of all of that happening in one company, imagine it now happens in essentially three different places. You have one company that provides a design, which gets manufactured at a foundry, and is then tested at an OSAT. For example, a design is provided by Qualcomm, the manufacturing is done at TSMC, and the testing is done at an OSAT. So you have basically three different companies involved in making a chip and getting it out the door. These are two very different models. The fabless model has, over time and in recent years, become more common: you have a variety of companies that can provide designs, you have companies that are very dominant in the manufacturing process, and then, depending on your testing needs, you have a variety of places to pick from for testing.
KEITH SCHAUB: I can imagine that the fabless model, like you said, allows the players in each particular segment to focus and specialize in that segment, and in doing so, the goal is to be more competitive – whether it's lower cost, increased yield, faster time to market, whatever it is. I can also imagine, though, that this creates difficulty with the data that flows through that process. There are three different companies or entities that are all part of that process, and they're spread all over the world. So the data generated by each is in a different format; it's stored differently, it's accessed differently, it has different permissions and different ownership. I can imagine that's a big challenge when the data is what's necessary to make those improvements, like David was suggesting.
VISHNU RAJAN: Yeah, absolutely, that's a really great point, Keith. When you think of the IDM model, where everything is done, call it, under one roof, you can reach backward and forward across the different sections of what you have under your roof. This is definitely more difficult in the fabless–foundry–OSAT model. Let's say, as an end customer, you get a part and you say, hey, this part seems to be performing a little differently than I expected. You may need to reach back three or four layers deep into how the chip was made in order to find out what actually happened. So, absolutely, in terms of the raw infrastructure needed to aggregate and synthesize and integrate all those different data pieces together, that is definitely more complex. And you also begin to get into some interesting questions around, well, what data do you have the rights to? Can you actually get to the data that you would like to be able to get to?
DAVID PARK: And as Vishnu said, doing that within an IDM is really not an issue, because everything is within your four walls. If there's any data you need, you simply ask for it and you get it, because for the most part you're one big happy company. But in the fabless model, you're talking about at least three, if not more, companies, and some of that information may be considered very company confidential.
KEITH SCHAUB: Yes, with this data feed-forward, feed-backward concept you described, Vishnu, the data is being generated by one entity, yet it's needed by another entity and is in effect highly valuable to that other entity – but the data may not even belong to either of them. So that could be very challenging. Whereas it feels like the IDM should have a distinct advantage in this new era of big data and in utilizing that data for the digital transformation. I don't know if you're seeing that in your customer engagements, but it just seems like that would be a natural outcome.
DAVID PARK: I definitely think a lot of customers are very interested in moving toward a digitization, digital-twin type of infrastructure for their supply chain. It actually gets very nuanced when you start digging into the details. When most people talk about Industry 4.0 or the Industrial Internet of Things, the IIoT, the easy, low-hanging fruit is just improving your operational efficiency and how you do things like preventive maintenance. The worst thing that can happen in any high-volume supply chain is that something breaks down and stalls up the rest of your supply chain while you fix whatever has gone wrong. Having the ability to visualize good operating signals, for lack of a better word – to know when things are going slightly askew so that you can take care of them proactively instead of reactively – is a huge primary goal for a lot of the companies going through these digitization processes. But for semiconductor, which I think is much further along this journey than a lot of other industries, when you talk about things like data feed forward and data feed backward, from my perspective they’re solving two separate types of problems. The data feed backward that Vishnu has already alluded to – that's really about improving the way you do things, or being able to find the root cause of problems if, God forbid, something happens in the field and you have to figure out where the problem actually started, because it's not acceptable to say, oops, bad part, we’ll ship you a new device. So being able to do rapid root-cause analysis and move from the final product, through assembly, through final test, back into the foundry is a huge benefit across the whole supply chain. It's not just uncovering what happened, but also figuring out how to keep it from happening again.
And one of the biggest benefits of a data feed-forward process – specifically for manufacturing test operations and assembly and packaging – is, in the simplest terms, the ability to create two populations. Test more and test less is how I describe it. You can have parts coming out of your supply chain that, as you test them, are so good you know it's going to be a great device. Why waste test resources on something that's never going to fail? But there are other devices where, you know, it's not perfect. It works, but I don't know if I want to send it to my top customer; maybe I want to put it into a supply chain that's not mission critical – consumer devices versus automotive, say. The ability to reclaim some test time that you don't need to spend on near-perfect devices, and spend it doing much more detailed binning to decide how you want to disposition these other devices, is a really efficient way for companies to do a better job for supply chains that are mission critical – ADAS (Advanced Driver Assistance Systems) or the antilock braking system in a car, versus a smartwatch. Being able to leverage data across the supply chain and move it forward – again, out of one company's four walls into another company's four walls so they can do a better job – I think that's a huge benefit of these modern big data infrastructures and analytic tools, which not only help companies collect, clean and prepare the data for analysis, but then make it readily available to query, both interactively and through automated methods, to do these types of complex analysis.
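The test more / test less split David describes can be sketched in a few lines. This is purely illustrative – the field names, the margin metric, and the 0.9 cutoff are assumptions for the sketch, not any real PDF Solutions or Advantest interface: upstream (feed-forward) parametric margin routes near-ideal parts into a reduced test flow and marginal parts into extended binning.

```python
def split_test_populations(devices, margin_key="param_margin", cutoff=0.9):
    """Partition devices into 'test less' and 'test more' populations
    based on a hypothetical upstream parametric-margin score, where
    higher means more margin to spec."""
    test_less, test_more = [], []
    for dev in devices:
        if dev[margin_key] >= cutoff:
            test_less.append(dev)   # healthy part: reduced test flow
        else:
            test_more.append(dev)   # marginal part: extended binning
    return test_less, test_more

# Hypothetical lot with feed-forward margin data attached to each die.
lot = [
    {"id": "D1", "param_margin": 0.97},
    {"id": "D2", "param_margin": 0.72},
    {"id": "D3", "param_margin": 0.91},
]
reduced, extended = split_test_populations(lot)
```

In practice the routing decision would come from far richer data (wafer-level parametrics, burn-in results, outlier models), but the core idea is the same: a simple partition of the population that reallocates test time from known-good parts to marginal ones.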
KEITH SCHAUB: The way you describe that – the test more, the test less – at Advantest we call that test rebalancing. What we mean by that is this: it's no secret that quality requirements for chips, specifically for automotive but really for any chip, are getting more and more stringent. And if you want higher quality, then you have to do the things you just described, because you only have a certain amount of test budget. One can't just arbitrarily test everything to the extreme; at that point, it's unaffordable. So you have to be smart about how you allocate your test budget across the test life cycle – whether you do more testing at wafer and less at system-level test, or vice versa. In some cases you would want to do more testing at system-level test and maybe less at package, et cetera. But having the data infrastructure and the capability to go forward and backward is what really empowers the industry and the supply chain to rebalance and optimize across the entire product life cycle. So I think that's really important for improving quality.
VISHNU RAJAN: Yeah, I completely agree with you, Keith. I think what we see more of is not necessarily folks looking to slash the test time massively, but, like you said – I like the word you used, rebalancing – how can we make more efficient and smarter use of the test time we have, so that we focus it on the right things while preventing escapes? That's your worst-case scenario: you don't want something to escape that shouldn't have escaped. So how can we, for example, reduce testing on the things we know are good and focus it more on escape prevention?
KEITH SCHAUB: And that leads into what is next? What's coming to enable these new ways of improvement? These new ways of optimization that are necessary to be competitive in an industry 4.0 and for that discussion, I'd like to invite both of you, Vishnu and David to come back.
DAVID PARK: I would love to do that.
VISHNU RAJAN: Thank you so much, Keith, this was a pleasure.
KEITH SCHAUB: Now it's time for Junko’s top three takeaways. Junko Nakaya, Advantest Global Marcom Team joins us to talk about Virtual VOICE 2021. Junko, go ahead.
JUNKO NAKAYA: All right, thank you, Keith. As you know, Advantest's annual developer conference, VOICE, took place last month. It was the first time in 15 years that VOICE was virtual, and I'm happy to tell you that overall it was very successful. We are still gathering VOICE feedback and finalizing the data, but I feel pretty confident about my top three list today. My takeaway number three is that among the nearly 70 technical papers delivered live during VOICE, those related to 5G mmWave and the age of convergence were the most popular. These topics represented two of the four important new tracks we added to the program this year. My second takeaway is that virtual VOICE was truly a global event. We had over 300 attendees representing 15 countries and 13 sponsors from five countries. In addition, the call for papers yielded more than 160 abstracts from 10 countries. To wrap it up, my number one takeaway is that even with the virtual format, VOICE delivered great value to our attendees. Post-event feedback surveys indicate that more than 90% of respondents felt that virtual VOICE was valuable and that the technical program met their needs. And speaking of the technical program, I want to congratulate the team from Microchip, and the Advantest team partnering with R&D Altanova, for winning the best paper awards. Congratulations also go to Derek Lee of NVIDIA, who won VOICE's first visionary award for his active involvement with the conference for over a decade. It was a difficult decision to hold VOICE virtually, so we really appreciate all the support the event received, and it was great that it gave us the opportunity to introduce VOICE to more than 20 new companies that had not attended before. We look forward to seeing these new faces and all our longtime supporters in person at VOICE 2022 next May in Scottsdale, Arizona.
KEITH SCHAUB: Thank you, Junko. I look forward to the next Junko’s top three takeaways. To round out this episode, Hira is here with some Advantest updates. Hira?
HIRA HASSAN: Thanks Keith. On June 30, Advantest announced that the results of a clinical study on the identification of COVID-19 viruses using our nanoSCOUTER™ fine particle measurement instrument have been published in Nature Communications, a peer-reviewed scientific journal. Keith, I understand we might cover this in an upcoming podcast?
KEITH SCHAUB: Yes, that’s right. The initial results look highly promising with more than 100 samples identified in less than 5 minutes. This could be game changing viral detection technology. We’ll be telling everyone more in a podcast later this year. Until then, please visit Advantest.com to read the press release.
HIRA HASSAN: Thanks, Keith. Our listeners can also catch up with Advantest at SEMICON Southeast Asia, which will take place August 23-27. Finally, be sure to connect with Advantest on Twitter, Facebook and LinkedIn for all the news and much more. Keith that’s the latest. Back to you.
KEITH SCHAUB: Great, thanks Hira. That does it for this episode. Hope you enjoyed it and see you next time on Advantest Talks Semi.