Advantest Talks Semi

Reinventing Semiconductor Packaging: AI, Physics and Geometry in Action

Keith Schaub, Vice President of Technology and Strategy at Advantest. Season 3, Episode 6.

In this episode of our podcast, “Reinventing Semiconductor Packaging: AI, Physics and Geometry in Action,” we explore how cutting-edge technologies are transforming the way chips are built. From leveraging AI for smarter designs to applying physics and geometry for precision, discover the innovations shaping next-generation semiconductor packaging.

#Semiconductors #ChipDesign #TechInnovation #Electronics #AIinSemiconductors #AdvancedPackaging #GeometryInAction #PhysicsDrivenDesign #FutureOfTech #EngineeringExcellence #SmartManufacturing #NextGenChips

Thanks for tuning in to "Advantest Talks Semi"!

If you enjoyed this episode, we'd love to hear from you! Please take a moment to leave a rating on Apple Podcast. Your feedback helps us improve and reach new listeners.

Don't forget to subscribe and share with your friends. We appreciate your support!

SPEAKER_01:

Hello and welcome to another exciting episode of Advantest Talks Semi. Today's episode is all about how advanced AI models are transforming semiconductor design, especially in the cutting-edge world of 2.5D/3D packaging and chiplets. I'm excited to introduce our two distinguished guests who are at the forefront of innovation in this space. First, we have Hardik Tabara, the visionary co-founder and CEO of Vinci4D. Hardik and his team are revolutionizing advanced semiconductor packaging by seamlessly integrating AI, physics, and geometry. Their breakthrough approach optimizes design simulations for 2.5D/3D packaging, tackling critical challenges like thermal management and precise die alignment, key factors in advancing high-density chip integration. Joining Hardik is a good friend of mine, Ernestine Fumak, the founder of Brave Capital. I've always appreciated my conversations with Ernestine, as she has a unique ability to identify groundbreaking technologies and is a thought leader in how we can harness AI to reshape semiconductor design and manufacturing. Ernestine has a track record of successfully investing in deep tech startups as an angel investor and venture capitalist, and Vinci4D is one of her investments. Together, Hardik and Ernestine offer a powerful combination of technical expertise and strategic vision that's set to redefine the future of semiconductor packaging and the industry at large. Let's dive into our discussion and uncover how their innovations are addressing some of the most pressing challenges in the industry. So to kick things off, let's discuss the current landscape. The semiconductor industry is under tremendous pressure to innovate as manufacturers move to ever smaller nodes, think 3 nanometer and below. With advanced packaging and chiplets becoming the norm, there are significant challenges, from thermal dissipation and die alignment to ensuring overall reliability in high-density integrations.
Hardik, can you share your perspective on the biggest technical challenges facing the semiconductor industry today, particularly in advanced packaging?

SPEAKER_02:

Yeah, so AI is basically transforming how we design and manufacture parts. And that's happening across the board, from semiconductors and electronics all the way to large machinery and equipment. AI demand is driving us to create bigger and better chips. We want to put more memory close to the GPUs. We want to increase the memory bandwidth so we can train bigger models and infer with bigger models, and do it everywhere, not just in the cloud but also on the edge. So what that basically means is we want to create better devices faster. And AI can accelerate the co-optimization, whether it's for power, performance, area, or thermal constraints, using chiplets and dice from various different places. So the challenge for us is: how do we integrate all of these things while at the same time ensuring that we can dissipate all this heat? The devices are getting more powerful, which means we are generating more heat, and now we need to dissipate that heat from bigger and bigger packages. And as packages get bigger, they create more challenges: there is mismatch in materials, there is mismatch in coefficients of thermal expansion. All of these can lead to warpage, which creates reliability issues. So yes, we have a huge opportunity ahead of us in terms of the 3D packaging we can do, and we can bring all these chiplets together to create better devices faster. But with that have also come new types of challenges across the design landscape, spanning all the way from design to understanding the physical performance and reliability of the parts.
So the way we think about it is: yes, we can create better designs, but we also need to empower our designers and engineers with better tools, so they can go through design ideas faster, better understand the actual physical performance (hot spots, thermal conductivity, heat flux, whether the part is going to warp, how reliable the design is), and go through those design iterations faster to come up with a better design. So it's an exciting area. It's an exciting time, with the world looking to the semiconductor industry to power the AI demand. And we can in turn use AI tools to enable designers to create better designs faster.

SPEAKER_01:

Right. So while we're on this topic, what inspired Vinci4D to bring together AI, physics, and geometry as a unified approach to tackling these challenges?

SPEAKER_02:

So, not just in semiconductors; just think about the hardware design process in general. In it, there are three key concepts. A designer creates some type of shape, whether that shape is a heatsink sitting on an MI300 chip, or a specific layout of a DRAM, or a layout of an RDL or a substrate that is important for advanced packaging. So designers create shapes, whether at millimeter scale, centimeter scale, or nanometer scale. Then we all worry about how these digital shapes are going to behave in the physical world: how are they going to dissipate heat, or how are they going to sustain the load coming from the environment? And the last piece that is important is: how is this part going to be manufactured? We want to create a tool that can accelerate the design process, and we realized that the basic thing we have to do is enable designers to understand the physical performance of their parts. Because the creativity of designers comes in the form of creating shapes, but physics is a universal constraint. In other words, how heat moves through an advanced package is the same for Apple, the same for AMD, the same for Nvidia. So if physics is universal, we want to create a model that enables designers to understand the physical performance of a part extremely fast: make that performance analysis accessible, fast, and accurate, and enable them to vary the geometry. That's where the key concepts come together. Can we unite AI, the way transformer models can learn any concept, with the two basic concepts that are important for hardware design: geometry, and physical performance governed by the appropriate physical phenomena? That's where the inspiration for Vinci4D comes from. But then, instead of focusing on all parts at the same time, we decided to focus our attention on the specific problem of dissipating heat within semiconductors.
So that's where we are creating AI models that can handle a really wide variety of geometries, again, be it at the DRAM level, the advanced packaging level, or the PCB level, and understand and answer questions about their physical performance. That's our goal, and that's where we're getting started.

SPEAKER_01:

That is really interesting. So let's dive a little bit deeper into this. Hardik, can you walk us through the foundational AI model at Vinci4D, and how exactly does it integrate AI, physics, and geometry to redefine simulation in semiconductor packaging?

SPEAKER_02:

Yeah, so I'll start with the concept of physics simulation itself, and then we'll talk about how we are applying it to the types of parts that are relevant for the semiconductor industry. We want to teach physics to an AI model. That's our goal. And the reason is that if there existed a magic-wand AI model that could answer questions about physical performance for any given design, accurately and fast, the hardware design process would be better and faster. Innovation, at the end of the day, moves at the speed of iteration, so we want to speed up the iteration. So what we did is we created an AI model by showing it lots of data. I'm talking hundreds of thousands of data points, where each data point is a physics simulation. So there is a geometry, there are material properties like thermal conductivities, there are boundary conditions (whether the temperature is held constant, what the heat flux is), and there is also a power map. We combine those things, solve the equation, and create a temperature field, which is the solution of the partial differential equation. And then we create, like I said, hundreds of thousands of data points like that. And we train an AI model that is similar to an image or vision model, but instead of predicting what is in the image, it predicts the temperature field; it predicts the solution of the partial differential equation. And it's equally capable: it has hundreds of millions of parameters. Once trained, it has learned the concept of the physics equation; it has learned conduction, the heat equation. So at the time of usage, you can bring in an OASIS or a GDS file, which is basically a layout file that could come out of your standard ECAD software. You can also bring a heatsink, a mechanical component. You tell us how that part is loaded: where the material properties are, where the power sources are, similar to what you are used to today.
And the AI model will answer the physics question: it will solve the partial differential equation and give you the temperature field everywhere. Then, as engineers, we can derive important quantities: where are the hot spots, what are the local maximum temperatures, where is the heat flux? And now you can use this information to create a better design. How can I improve my layout? Can we do anything about the RDL? Can we increase via density somewhere so that the heat flux is higher, dissipating the heat faster and reducing the hot spot? So that's the goal, and that's how we created the model.
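The data-generation recipe described here can be sketched as a toy in a few lines of Python: each synthetic "data point" pairs a conductivity map and a power map with the temperature field obtained by solving a steady-state conduction equation on a grid. Everything below (the 2D grid, the fixed-temperature boundary, the pointwise treatment of conductivity, the Jacobi solver) is an illustrative simplification, not Vinci4D's actual pipeline.

```python
import numpy as np

def solve_heat(k, q, h=1.0, iters=5000):
    """Toy steady-state conduction solve of -k * laplacian(T) = q
    with Dirichlet T = 0 on the boundary, using Jacobi iteration."""
    T = np.zeros_like(q)
    for _ in range(iters):
        # NumPy evaluates the whole right-hand side before assigning,
        # so this is a Jacobi sweep over the interior points.
        T[1:-1, 1:-1] = 0.25 * (
            T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
            + h * h * q[1:-1, 1:-1] / k[1:-1, 1:-1]
        )
    return T

def make_sample(n=32, rng=None):
    """One synthetic data point: (conductivity map, power map) -> temperature field."""
    rng = rng or np.random.default_rng(0)
    k = rng.uniform(1.0, 10.0, (n, n))      # per-cell thermal conductivity
    q = np.zeros((n, n))
    i, j = rng.integers(4, n - 4, 2)
    q[i - 2:i + 2, j - 2:j + 2] = 5.0       # a small square heat source (hotspot)
    return k, q, solve_heat(k, q)

k, q, T = make_sample()
print(T.max())  # peak temperature sits near the heat source
```

A surrogate model would then be trained to map (k, q) directly to T, skipping the iterative solve at inference time.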

SPEAKER_01:

Do you have an example of how you have implemented this with some of your customers today?

SPEAKER_02:

We are working with design partners today. Some of them are leading semiconductor companies. We have already published research at the IEEE EPTC advanced packaging conference that was held in Singapore in 2024, where we showcased that for example geometries like an RDL or a silicon interposer, these AI models can solve how heat transmits through extremely complex designs hundreds of times faster. In today's process, a layout or a design is created and then goes through a simulation step: the design is imported into a completely different simulation tool, like Ansys Icepak or Siemens FloTHERM, and then an engineer runs the simulation to provide the answer. So you can see there is a huge back-and-forth between the designer who is creating the design and somebody else who runs the simulation, and often these two tasks happen in completely different tools. What our process can do is serve the designer directly. We can directly digest the files that come out of the design software, run the physics simulation, and give them the answer. If somebody is designing a layout, we can tell them the thermal conductivity of the part. If somebody is designing a substrate and they have the power map, we can tell them where the hotspot is. We are at the stage where we have deployed this software to a handful of design partners. They are running it, giving us feedback, and evaluating it, and we are getting ready for commercial release. And we have validated the answers coming out of our tool against the traditional incumbent finite element solvers that are commercially available, like Ansys. So we've already validated that we can provide high-quality, accurate answers, yet do it hundreds of times faster and make it accessible to a broad range of people within the design process.
So it shouldn't be accessible just to the simulation engineer, but to everybody: layout engineers, chip architects, thermal engineers, advanced packaging engineers, everybody in the field, so they can do all of their work faster while understanding the physical performance of the part.

unknown:

Right.

SPEAKER_01:

So, Hardik, for Vinci4D to stay relevant, data analytics and machine learning are key to refining your simulations. How does this technology drive continuous improvement in your design outcomes?

SPEAKER_02:

That's a great question, because one of the things we value a lot is design IP and privacy. In other words, we have trained the model without using any customer's data. The model is trained by us using synthetic data that we have created. We have validated the accuracy of the output, but we did not need data from customers or design partners to train it. Second, when we deploy the software, it's deployed on the customer's virtual private cloud. The customer can host our solution on their own cloud infrastructure or compute farm. That means at the time of usage, all of the design data stays secure behind their firewall. We never get access to the specific design data of any of our design partners or future customers. That was an important distinction of our product, because we believe that for all the leading-edge companies, the most innovative thing they're doing is the design. If we asked them to give us access to it to create a model, it would be a no-go from the beginning. Now, having said that, how do we keep making our models better? It happens in two ways. One is that we do collect some metrics and logs that give us an idea of where our model did well and where it did not. That tells us: hey, even though the model was trained on these 500,000 data points, we still miss some coverage; we still miss the type of data that was needed to make the model work perfectly for that particular use case. So we do get some metrics, and we are able to derive enough information from them to make our model more and more general. These sorts of ideas have been implemented in the healthcare industry before, where data privacy was equally important. We are applying the same concept here, but in a manner where customers' design data will always be safe behind their secure firewall.
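The telemetry idea, learning where the model falls short without ever seeing the design itself, can be illustrated with a tiny sketch in which only aggregate accuracy metrics leave the customer's environment, never geometry or power maps. The field names, the salted-hash scheme, and the error threshold are all invented for illustration; they are not Vinci4D's actual telemetry format.

```python
import hashlib
import json

def coverage_metrics(design_id: str, rel_error: float, geometry_stats: dict) -> str:
    """Build a telemetry record that is safe to send off-site: the design is
    reduced to a salted hash plus coarse aggregate stats, so the vendor learns
    *where* the model struggled, never *what* the design was."""
    record = {
        # irreversible identifier: lets repeated misses on one design be grouped
        "design_hash": hashlib.sha256(f"salt:{design_id}".encode()).hexdigest()[:16],
        # relative error of the AI guess vs. the physics post-processor's answer
        "rel_error": round(rel_error, 4),
        "out_of_distribution": rel_error > 0.05,  # illustrative threshold
        # coarse aggregates only: no layout, no power map, no coordinates
        "num_layers": geometry_stats["num_layers"],
        "feature_scale": geometry_stats["feature_scale"],  # e.g. "nm", "mm"
    }
    return json.dumps(record)

msg = coverage_metrics("cust-substrate-7", 0.081, {"num_layers": 12, "feature_scale": "mm"})
print(msg)
```

Records like this would indicate which region of design space needs more synthetic training data, without the raw design ever crossing the firewall.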

SPEAKER_01:

That is impressive. So it sounds like Vinci4D is not just enhancing current practices, but fundamentally changing the way simulations are done.

SPEAKER_02:

Yes. We believe that instead of just using AI to provide an approximate answer faster, it's better to use AI to provide a highly accurate answer faster. Our goal is not to provide approximate answers; we want to provide accurate answers, and we want to do it faster using the model. That also means another aspect of our technology is that we provide guaranteed accuracy. What do I mean by that? Whether you look at ChatGPT, Anthropic, or any large language model, they all come with statistical accuracy. Right at the bottom, you will see a note saying the model gives the right answer most of the time, but sometimes it doesn't, so you have to check for yourself. Now, when we spoke to early designers, none of them were happy about that. In other words, say we can solve physics hundreds of times faster, but the burden falls on the designer to check whether the answer is right. That's not really helpful at all, because yes, I would generate answers hundreds of times faster, but they would have to check every time, and if they have to check every time, I've actually increased their work. So the technology we have created is this: the AI model creates a guess, and then an actual physics-driven post-processing tool ensures that the initial guess is correct. If it is not, it fixes it and presents the right answer to the user. We never let an inaccurate answer or a hallucination from the AI model reach the user. So the product experience is: 80% of the time you get the answer hundreds of times faster; 20% of the time, you get the answer 30 times faster. But on average, you get much faster iteration, and you can trust the answers coming from the system.
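The guess-then-verify pattern can be sketched generically: measure how badly the model's fast guess violates the governing equation, and only when the residual is too large fall back to an iterative solver seeded with the guess, so even the slow path converges faster than a cold start. The Poisson-equation stand-in, the residual threshold, and the Jacobi fallback below are placeholders, not Vinci4D's actual implementation.

```python
import numpy as np

def residual(T, q, h=1.0):
    """How badly T violates the discrete equation -laplacian(T) = q (max-norm)."""
    lap = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
           - 4.0 * T[1:-1, 1:-1]) / (h * h)
    return float(np.abs(lap + q[1:-1, 1:-1]).max())

def verified_solve(guess, q, tol=1e-3, h=1.0, max_iters=20000):
    """Accept the AI guess if it already satisfies the physics to tolerance;
    otherwise refine it with Jacobi sweeps seeded from the guess."""
    T = guess.copy()
    if residual(T, q, h) <= tol:
        return T, "fast path: guess accepted as-is"
    for _ in range(max_iters):
        T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                                + T[1:-1, 2:] + T[1:-1, :-2]
                                + h * h * q[1:-1, 1:-1])
        if residual(T, q, h) <= tol:
            break
    return T, "slow path: guess corrected by solver"

n = 16
q = np.zeros((n, n)); q[n // 2, n // 2] = 1.0
bad_guess = np.zeros((n, n))  # stand-in for an off AI prediction
T, path = verified_solve(bad_guess, q)
print(path, residual(T, q))
```

Either way the user sees a physics-consistent field; the only difference between the two paths is latency, which matches the 80/20 experience described above.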

SPEAKER_01:

So are you saying we're going to get the answer correct 100% of the time?

SPEAKER_02:

That's the goal. See, 100% doesn't really exist in my mind, right? Because there is always going to be some distribution. There is always going to be a designer who imagines a geometry that we have never thought of, one that's completely out of the AI model's distribution. So the last 5% of coverage is going to be very hard to get. But we are definitely seeing a wider and wider reach of the model. I'll go as far as to say that we have a conduction model that works on nanometer-level parts, millimeter-level parts, and centimeter-level parts like PCBs, and the same model also works on mechanical components like a gearbox, an electronics enclosure, or a heatsink. So you see, once it has learned the concept of that particular physics equation, it can apply it to any part. There will still be some designs that are out of distribution, but in that case the physics post-processing tool will still present the right answer.

SPEAKER_01:

Sounds like you have a very generic model there. That's impressive.

SPEAKER_02:

Yes. One physics at a time. I would say our approach is: when we go from conduction to a different phenomenon like warpage, which is equally important for the semiconductor industry, we will train a different model. So we build one foundation model per physical phenomenon.

SPEAKER_01:

I'd like to go a little sideways and talk to Ernestine for a bit. So, Ernestine, from a strategic and investment perspective, what drew Brave Capital to Vinci4D?

SPEAKER_00:

Yeah, sure thing. When investors evaluate a startup for investment, two of the most critical factors we consider are the market opportunity and whether the right team is in place to execute on that opportunity. So why don't I talk about those two pieces as they relate to when I first met Hardik, as he was just starting to build Vinci4D. On the market, let's take a step back. Think about how, for the past couple of decades, the tools that mechanical designers use have barely changed. We've been stuck in more or less the same workflows since perhaps the mid-1990s. And there's a huge opportunity to shake things up with new methodologies that can dramatically increase the productivity of design engineers and also shorten the time to market for new and reworked products. Hardik and his team at Vinci4D recognized that this space is ripe for disruption. He talked about how their generative design copilot is able to revolutionize the process and make design work significantly faster. When I first met Hardik, as he was just starting his company, I saw a real opportunity to pave the way for widespread adoption of generative copilots and analytical design tools. Today, as Hardik mentioned, he's starting with thermal design. This is a field that naturally has more degrees of freedom than most engineering processes, and it's also an area where a copilot can provide early optimizations or simplified design trade-offs and, in general, massively cut down on the compute power needed for simulations. By developing pre-trained models that understand first-principles physics and material properties, as he's been describing, he's allowing physical simulations to run in seconds instead of hours. That means design engineers can fine-tune their designs in hours rather than weeks, which is exactly what semiconductor companies need to tackle these critical challenges in thermal management.
As an investor, I'm always looking for founders solving those kinds of big, high-impact problems. Beyond the market opportunity, I'm always also looking at the people behind a startup and their ability to execute, and Hardik ticks all those boxes. So let me just brag on his behalf for a second. He spent eight years or so at Carbon, the digital manufacturing startup, in positions of increasing authority and responsibility, starting as a software engineer and eventually growing into the role of VP of software engineering. He also completed his PhD at Stanford; his thesis focused on building a mesh generation tool for 2D and 3D geometries. He's since been a guest lecturer and mentor for courses at Stanford. He's a domain expert: he deeply understands the industry and the problem, he knows how to build a scalable product, and he's able to attract top talent and build a strong team. So those are some of the reasons why it was a no-brainer to invest.

SPEAKER_01:

So beyond improving design efficiency, what broader impact do you envision for the semiconductor industry as AI, and industrial AI more generally, becomes more deeply integrated into manufacturing processes?

SPEAKER_02:

As AI gives us more and more powerful ways to analyze situations, it will not just transform design. I think it's going to transform monitoring: how parts are behaving in the field. We have all kinds of sensors on chips, thermocouples that are sending data as the workload changes, whether they are in the data center or anywhere else, right? And we are deploying devices that are monitoring highly efficient GPU chips like the H100, the MI300, and so on. All of that data is now also going to provide insight all the way back to design. The same is also going to happen in manufacturing. Some of it, all the way to yield and defect detection, is already in play. We are using vision-driven AI to monitor the system and detect defects, so that bad parts don't get integrated into an actual device and don't end up in our data centers. But I think it's going to get even more interesting. For example, a couple of ideas have already reached our desk: if you can solve physics really fast, can we model the process of reflow? Which is very important, as we know, for how a big chip package gets attached to the PCB. There are a lot of reflow models; it's possible to run a physics simulation of reflow, but it's expensive. If we could do it really, really fast, could we do process parameter optimization? Could we merge that with the sensors that look at a few sporadic places, what is the temperature, what is the pressure, what is the concentration, and convert it into a true digital twin? Then you could really see exactly what's happening in the PCB being manufactured, and you wouldn't have to cut up a sample of boards to see whether the BGAs are detached or have fused together and the part is no longer good enough. A lot of that is what's happening today.
But in the end, like I was saying, all of these are physics-driven phenomena. If we can solve physics really fast on the edge device, I think it's going to enable a completely different opportunity for creating digital twins, whether during monitoring or during test. A company like Advantest is building testing devices for SOCs and advanced packages all the way to PCBs, with specific measurements; we could potentially convert those into true digital twins. And then the designer, or anybody who has access to that information, can do better process parameter optimization, better design, better integration. So I'm actually quite excited about where we will use powerful chips to create more powerful models and bring that back into the system to create better semiconductor devices with higher-yielding processes.

SPEAKER_01:

Thank you. How do you see these AI-driven approaches influencing broader design and analytics tools for the industry?

SPEAKER_00:

Yeah, definitely. The way I think about it is that AI models and agents have already fundamentally transformed how we conceptualize software products and how we engineer and deploy them. What's happening now is that the same revolution is coming for other engineering disciplines, especially hardware engineering, where AI is going to have a huge impact on design and analysis tools. If you think about the process of designing even a single part, it's very complex, and it often requires multiple tools and multiple workflows; engineers have to juggle everything from CAD modeling and simulation software to optimization algorithms, performance testing, and a lot more. This presents a huge opportunity for AI to automate and introduce agentic workflows that can help engineering teams create better designs with fewer resources. We're already seeing the power of AI-assisted creativity in other domains, where tools like Google's Gemini 2.0 allow users to not just generate an image, but also edit and refine it dynamically. Now imagine applying that same level of intelligence and flexibility to complex engineering parts, where you're not just generating a 3D model or a circuit layout, but you're also able to understand how the part will behave when stress-tested under real-life conditions. That's the future I'm excited for.

SPEAKER_01:

So looking ahead, where do you see the industry going, and what are some of the emerging trends that could further complement Vinci4D's approach?

SPEAKER_02:

So, in the end, AI thrives on data, right? And as more and more data becomes available and the industry collaborates, which I imagine is going to happen, we will see collaboration across the supply chain, from manufacturers to fabless companies to OSATs. In the end, we are all under pressure to create amazing devices for consumers. So as data becomes more available, there is going to be more collaboration across organizations, and we will see model capabilities expand beyond our thinking today. I imagine a future where somebody will say: hey, I want to create an integration with a CPU, a GPU, and HBM; these are my requirements; I can work with these vendors; I want to manufacture with these companies; help me create a layout. And an AI will create the end design. All we humans would have to do is check. We will go through rigorous checks and integration, because mistakes in hardware are way more expensive than in software, as we all know. With software you can release a patch, you can fix it live; hardware doesn't really allow us to do that. So we will create rigorous checks, but we'll definitely see tools taking us to 90% of the design. My confidence comes from the fact that this is exactly where we are with software. We are able to get 80/20, 90/10 of the way there for a lot of things in software engineering today with specific high-quality tools. Those tools have been built on large language models, which at this point are becoming a commodity. When I say commodity, I mean they are really good, there are a handful of them available, and you can mix and match which one you use. Large language models becoming mature has enabled software engineers to use agents to create 80% of their features. The hardware equivalent, for me, is that there is going to be a large physics model and a large geometry model that can do these kinds of basic key workloads.
Then will come the agents that will say: okay, help me optimize the design, help me optimize it for this process, for warpage, for thermal cases. Then comes the next evolution: don't just do design optimization; hey, agent, help me create the design. That gets us 80% of the way there. Me, as an engineer, I'm going to make sure it's right, I'm going to do more of the creative thinking, and then we can send the files out for manufacturing checks. So that's the future I imagine we'll get to, which is way beyond what I think we are imagining today. And I don't think it's 10 years out; I think it's just five years out.

SPEAKER_01:

So going in that direction: digital twin simulations are also gaining traction. In your view, how might they revolutionize testing and predictive maintenance in semiconductor manufacturing?

SPEAKER_02:

Of course. I mean, if we can detect any problem before it leads to severe degradation of the device, that's the best thing we can do. If we can design such that it never runs into that faulty scenario, that's even better. A digital twin is our way of saying: I want to know everything about that part. What's happening, what is its physical performance, is any defect about to develop? Is it going to have a reliability issue? Are we going to see so much warpage that we see detachment? If we can have a true understanding of the physical phenomena the part is going through, under the various working loads, whatever the power maps are, that information will enable us to create better designs. What I'm trying to say is that test equipment plus digital twins will create such high-fidelity data that, as designers, we'll be able to use it. Initially it may seem like we are creating a massive amount of data, but we have a tool, the AI model, that actually thrives on more data. The more high-fidelity data we have, the better it can digest it and guide designers: hey, these are the types of things you can do to create more reliable products. So in some ways it's a little bit frightening that we are going to create so much data by attaching test equipment and digital twins. But on the other hand, I think that data is going to come in extremely handy when we couple it with AI models to empower designers.

SPEAKER_01:

So, Ernestine, coming over to you. Looking ahead, from your vantage point in Silicon Valley and the work you're doing, what transformative technology or trend do you see coming in the next couple of years that will be impactful for everybody?

SPEAKER_00:

Yeah, well, beyond what we already talked about, there are a few things I'm constantly thinking about. One is how the shift toward autonomous driving is driving demand for new types of semiconductor chips. Self-driving cars need high-performance AI chips to process real-time inputs and make those split-second driving decisions. There are also, of course, the large touch-screen interfaces and immersive entertainment systems in these cars, which require powerful GPUs and AI chips to handle real-time graphics rendering and other interactive features. The space and defense industries are also pushing the limits here: space travel needs chips that can survive extreme conditions, and AI-powered defense chips are becoming a big deal for secure, military-grade computing. Behind all of this, if you think about the global chip war and how it's reshaping supply chains, the CHIPS Act in the US is already pushing companies to build fabs in the US and Europe to reduce reliance on Taiwan and China. And I expect we'll see these regionalized supply chains, with fabs popping up in India, the US, and Germany as well. Just some trends to think about over the next several years.

SPEAKER_01:

So, with technology advancing so rapidly, how can companies best prepare to adopt these innovations, like Vinci4D, without disrupting their existing operations?

SPEAKER_02:

See, new innovation always comes with some risks because it's not validated enough, it hasn't gone through the process. And hardware design in general is an extremely thorough process. And the reason is very clear: because when we don't do the thorough checks, we run into reliability issues, and those are pretty hard and expensive things to fix. So, how can we think about new technology, specifically like ours? We are bringing a new way to design to the market. What I tell designers, because of course I want to enable them to use this so they can design faster, is: hey, let's run checks and make sure on a handful of examples, not just one, let's run tens of examples through this new AI-powered simulation tool that we are building, to assure yourself it is giving you the right answer. And then once you establish this trust, you can start shifting your process towards it. So that also means there is a burden on entrepreneurs like us: hey, how thoroughly can we check the tools that we are putting out there? And that's also the reason why we really got married to the idea of guaranteed accuracy, or ensuring there is no hallucination in our product. It's impossible to say that my AI model doesn't have hallucination, and our AI model also has hallucination, but we are going the extra mile to say we will not let that get to the user. And that's just one key aspect of it. There will be many other such things that I and everybody else who is creating a new technology will have to do. And the more we do that, the more confidence everybody will have, and then there will be mainstream adoption. And I would also say that's sort of what has happened with software copilots as well. There are a lot of people who are saying, oh, it's writing bugs. It's okay, let's create tests. Let's also create agents that can create tests.
So that means for every piece of the engineering problem, we should never forget the process. It's there, and the new technology in some way has to get integrated into those concepts: hey, we want to create faster, but we still want to test every piece of decision that we are making. We want to make data-driven decisions and then create better products. So we should never retire those concepts as we bring in new technology. And that's sort of my advice to my team, that's what we need to keep in mind. Because, yes, we may show something shiny and say, hey, we are running simulation hundreds of times faster, but if it's not right, it's not really useful at all. So this is a great question: as the new technology comes in, how do we create trust in it? And my short answer to that is by providing a lot of proof points, a lot of proof points that it is headed in the right direction and is actually gonna help.

SPEAKER_01:

So the next question is for both of you, and I'd like to go one by one. So, Hardik, if you had to pinpoint one breakthrough that could redefine semiconductor packaging, what would it be and why?

SPEAKER_02:

It's hard for me to pinpoint one very specific thing about advanced packaging, but one thing that I see is that chiplet assembly has really opened the door to creating not just one but a wide variety of devices for different use cases while using the same basic components. So it's almost like we are playing, you know, with assembly on top of 2.5D or 3D packaging. I'm excited about the possibilities. We can create lots of different parts, very specific SKUs for specific use cases, whether it's for robotics or autonomous driving. I don't see a future where we will be creating a lot of hardware parts that do not have chips. We are gonna have chips and electronics everywhere, because we want to use AI everywhere, and that's the only way to get there. A different way of saying it is that we are physical human beings and we interact with everything through physical products. Even if we want to use an iPhone, the first thing we do is touch it. So chips are gonna be everywhere, and if we can enable faster designs, we'll end up creating specific designs for a lot of specific use cases. That sounds like a great future. So I'm pretty excited about that general direction, and we want to play our part in enabling better designs faster.

SPEAKER_00:

For me, I've been thinking a lot about standardization and how, right now, every company has its own way of doing chiplets. But soon I can imagine an ecosystem or marketplace where companies can buy pre-validated compute and memory, and because we'll have open standards, it'll be easy to plug and play chiplets from different vendors.

SPEAKER_01:

It's exciting to imagine a future where technological breakthroughs not only improve performance, but also unlock entirely new possibilities for semiconductor packaging. We discussed the formidable challenges in semiconductor packaging, explored Vinci4D's groundbreaking approach to integrating AI, physics, and geometry, and examined the strategic implications for the industry with insights from Brave Capital. Hardik and Ernestine, thank you both for sharing your expertise and vision with our audience. Your perspectives have shed light on how industrial AI is not just a buzzword, but a transformative force in semiconductor design. For our listeners, if you are as excited about the future of semiconductor packaging as we are, stay tuned for the next episode as well. Thank you for joining us on Advantest Talks Semi. Until next time, stay innovative and keep pushing the boundaries.