Advantest Talks Semi

Beyond Black Boxes: Meet AI that Justifies Its Choices

Don Ong, Head of Innovation for the Advantest Field Service Business Group. Season 2, Episode 17

Unlock the secrets of AI innovation with our esteemed guest, Jem Davies, non-executive director at Literal Labs. Jem shares his transition from Arm to Literal Labs, revealing how the revolutionary Tsetlin machine sets new benchmarks in efficiency, power usage, and processing speed. 

Jem is a highly experienced business leader and technologist having previously served 18 years at Arm. He is an engineer and was an Arm Fellow holding multiple patents on CPU and GPU design. Latterly, Jem's career moved into business management and he became a General Manager first in Arm's Media Processing Groups, then the founding General Manager of their Machine Learning group. In addition to setting future technology roadmaps, he also worked on several acquisitions leading to building new businesses inside Arm, including the Mali GPU (the world's #1-shipping GPU) and Arm's AI processors. Jem left Arm in 2021 and currently is chair of NAG and a non-executive director of Literal Labs, BOW, CamAI, and Cambridge Future Tech.

Explore the crucial role of explainable AI and why it matters more than ever in today's regulated industries like healthcare and finance. Jem discusses Literal Labs' Tsetlin Machine, which offers an intuitive audit trail of AI decision-making through propositional logic. This approach is breaking new ground by enhancing model efficiency without compromising on performance. We also tackle the challenge of unbiased training data and how tailored levels of explainability can make AI accessible to everyone, from everyday users to industry experts.

As we gaze into the future of AI, we tackle the pressing issues of bias, energy consumption, and the potential impact of quantum computing. Jem provides insight into how Literal Labs is pioneering tools to promote ethical AI development, mitigate biases, and democratize AI innovation. From practical applications like water leak monitoring to the potential for AI to evolve into a tool of unimaginable uses, we reflect on how the intersection of explainability, energy efficiency, and bias shapes a responsible AI future. Join us for an episode that promises to broaden your understanding of AI's profound societal impact.

Thanks for tuning in to "Advantest Talks Semi"!

If you enjoyed this episode, we'd love to hear from you! Please take a moment to leave a rating on Apple Podcasts. Your feedback helps us improve and reach new listeners.

Don't forget to subscribe and share with your friends. We appreciate your support!

Don Ong:

Hello and welcome to another exciting episode of Advantest Talks Semi, where we dive into the future of technology and innovation. I'm your host, Don Ong, and today we are in for a fascinating discussion. Joining us is Jem Davies, a non-executive director at Literal Labs. Literal Labs is a UK startup spun out of the University of Newcastle. It's reshaping the landscape of generative AI. Their groundbreaking work with the Tsetlin machine is pushing the boundaries of AI by enhancing explainability, increasing energy efficiency and delivering faster processing compared to traditional neural networks. Literal Labs' approach directly tackles critical challenges like AI bias, transparency and sustainable AI development, offering edge-based learning that operates independently of the cloud.

Don Ong:

Jem brings an incredible depth of experience to this conversation. With 18 years at Arm, he's a seasoned business leader and technologist. He started as an engineer and an Arm Fellow, holding multiple patents in CPU and GPU design. Over time, he transitioned into business leadership, taking on roles such as general manager of Arm's media processing group, and then founding general manager of its machine learning group. Jem has been instrumental in setting technology roadmaps and guiding Arm's acquisitions, including building the Mali GPU, the world's most-shipped GPU, and advancing Arm's AI processors. Jem, it's a real pleasure to have you on the show. We're excited to hear more about your journey and the cutting-edge innovation at Literal Labs. So, Jem, you have done some amazing work at Arm, especially in the GPU domain. What brings you to Literal Labs?

Jem Davies:

So, as you might imagine, after I left ARM, there were a number of people wanting me to do the same thing again, build another neural network processor and so forth, to compete against Arm or to do something like that, and I just wasn't interested. Arm is a great company. I've left on great terms. I have no intentions of trying to compete with them, but what really interested me about Literal was: it was something different.

Jem Davies:

So, Literal is using the Tsetlin machine approach as opposed to neural networks, and Tsetlin machines are a new way of doing AI. It's based on something called propositional logic and it is less compute-heavy. So, instead of lots and lots of matrix arithmetic that you get in these neural networks, we have ANDs, ORs, logical operations like that, and so by its very design it is so much more efficient, which makes it faster and uses less power. And particularly as people are now starting to talk about the percentage of global power generation that's being used to execute AI workloads, do you know what? I kind of think this is the right sort of innovation at the right time. I fancy helping out, and so that's what I'm trying to do.
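For readers who want to see the idea concretely: below is an illustrative sketch of Tsetlin-machine-style inference, not Literal Labs' actual code or API. A trained Tsetlin machine is a set of conjunctive clauses, ANDs over input bits and their negations, that vote for or against a class; the clauses and input here are made up.

```python
# Illustrative sketch of Tsetlin-machine-style inference (not Literal Labs
# code): a trained model is a set of conjunctive clauses, i.e. ANDs over
# input bits and their negations. Positive clauses vote for the class,
# negative clauses vote against; the sign of the vote sum is the prediction.

def eval_clause(literals, x):
    # literals: (bit_index, negated) pairs; the clause is the AND of them all.
    return all((not x[i]) if neg else x[i] for i, neg in literals)

def predict(pos_clauses, neg_clauses, x):
    votes = sum(eval_clause(c, x) for c in pos_clauses)
    votes -= sum(eval_clause(c, x) for c in neg_clauses)
    return 1 if votes >= 0 else 0

# Hypothetical clauses over a 3-bit input.
pos = [[(0, False), (1, False)]]   # x0 AND x1 votes for the class
neg = [[(2, False)], [(0, True)]]  # x2, and NOT x0, vote against

print(predict(pos, neg, [1, 1, 0]))
```

Note that the prediction involves only boolean tests and a small vote count, the gate-level kind of work Jem contrasts with matrix arithmetic.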

Don Ong:

Nice, nice. But how did you discover Literal Labs? How did you come across it?

Jem Davies:

So, I'm also a non-executive director of a company called Cambridge Future Tech, and Cambridge Future Tech is a venture builder. We co-found and help build predominantly deep-tech-based startup companies. And one of the guys there reached out to me and said, hey, you know about machine learning, what do you think about this? And it was all academic research papers, and we spent about a year going back and forth on these papers, and I'm saying no, no, no, I really need a simple apples-to-apples comparison, and without that, this is just interesting research. There's no value here, there's no business here. And until you can come up with something that says, hey, look, x times faster or cheaper or uses less power or something like that, I'm just not convinced. And this took a long time. But eventually they came back and they said, we've done this comparison. It's 9,998 times more efficient. Now, this is a very early number on a prototype system in a research laboratory of a university.

Jem Davies:

So, do I totally believe that number? No, of course not. There are huge error bars on that. But is it a big number? Yes, you've got my interest. Yes, I will come on to your board of directors. I will help you with this. I will help you get a world-class CEO to run this. I'll see what I can do.

Don Ong:

So, you talk about going from a university lab into the business world, and you mentioned a huge number: 9,998 times more efficient. There are a lot of startups talking about Tsetlin machines and trying to do the same thing as well. What's so special and unique about Literal Labs? Can you share some details about that?

Jem Davies:

Unashamedly, the guys who founded this are university researchers. They're both professors; they've spent their entire lives working in research. Neither of them has run a business before. What I did is, I managed to twist the arm of a friend, an ex-colleague of mine called Noel Hurley, to come in and become CEO. Noel has been a CEO of a startup before. He was previously a general manager running the biggest business inside Arm, the processor business, and so Noel knows what he's doing and, in particular, he knows the difference between a really clever invention and a business.

Jem Davies:

A lot of deep tech startups have this problem where you look at a very clever idea, but that's not a business. A product, which is what a business sells, is something that solves a problem that a customer has and will pay you money to solve. And so you then have to fill in the gaps from there, because you go from a piece of technology, an invention, to something that someone can actually use, and that's a product, and then you build a business around that, which is selling that product.

Don Ong:

So, you talk about business and unique value that you can bring that people will pay you money for. So, what is so unique? What's the value that you guys are selling here for Literal Labs?

Jem Davies:

So, implementing the Tsetlin machine approach gives us faster, more efficient processing. That's one thing. Secondly, it enables us to do training on the actual end device as opposed to on massive data centers somewhere in Seattle. And thirdly, and this is something that is very much of the moment and coming up, there's the concept of explainable AI.

Jem Davies:

So, if you train a neural network to perform the world's most important task, which is obviously identifying pictures of cats, you take a trillion pictures of cats. You train your neural network and then you show it a picture and it says, it's a cat or it's not a cat. You go: How do you do that? Why did you say that was a cat and it's not a cat? And people just look blankly at you. They can't answer it.

Jem Davies:

There is some talk that a neural network works the same way as the human brain. And it's a bluff, because we still don't know how the human brain works. If you talk to a research neuroscientist about how the thing works, they'll talk a bit about some of the biochemistry across the nerve synapse gaps, and this nerve connects to that nerve, and yada, yada. We still don't know. How does a three-year-old child point at something with an incredibly limited vocabulary and go: cat, dog?

Don Ong:

I'd like to dive deeper into the explainable AI that you just mentioned. As we all know, typically with the current generative AI large language models, when we try to explain or validate them, we tend to either just control the input data by giving it weightage or importance, or we validate the output results and try to rationalize the answers from the model. The logic of how the model takes these inputs and arrives at the output is pretty much a black box. Therefore, we need explainable AI. With the Tsetlin machine that Literal Labs is implementing now, we see a potential opportunity to look inside this black box and figure out how it gets from the input to the output.

Don Ong:

So, can you dive a little bit into this explainable AI? It's a growing trend, a growing focus in the tech industry. What is explainability and how is it so critical and how is it different from interpretability?

Jem Davies:

So, the use of neural networks is absolutely fantastic. We have only just scratched the surface of what they are capable of doing. But there's a difference between using a neural network model to improve the pictures that you take on your mobile phone, where you just look at it and go, oh, that's pretty, and something that has real consequences, like medical diagnosis.

Jem Davies:

The decision coming out of the AI is, for example: We need to amputate your leg. That has consequences. You know that'd better be correct. So, you dive into it: why did you reach that decision? And currently, with the state of the art in neural networks, it's very, not impossible, but very, very difficult to get explainable answers out of it, why it came up with that decision. The Tsetlin machine approach is a whole bunch of propositional logic. It's if this, then this, and, as a consequence, there is a trail, an audit trail, of why this answer came up. And so, if you come and query the answer and say, well, why did you reach this decision? The Tsetlin machine approach, as done inside Literal Labs, gives you the possibility of answering that question, and for certain particularly regulated industries, like health and indeed financial services, this is incredibly important.
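As a rough illustration of the audit trail Jem describes (a sketch, not Literal Labs' implementation; the clause names and rules below are invented): because each clause is a readable proposition, the system can report exactly which rules fired for a given decision.

```python
# Illustrative sketch (not Literal Labs code): every clause is a human-readable
# proposition, so the model can report exactly which rules fired for a given
# input. All clause names and rules here are made up for illustration.

clauses = [
    # (description, predicate, vote)
    ("high_temp AND rising_pressure",
     lambda x: x["high_temp"] and x["rising_pressure"], +1),
    ("sensor_fault", lambda x: x["sensor_fault"], -1),
]

def explain(x):
    # Collect every clause that fired, with its vote: this is the audit trail.
    fired = [(desc, vote) for desc, pred, vote in clauses if pred(x)]
    verdict = "anomaly" if sum(v for _, v in fired) > 0 else "normal"
    return verdict, fired

verdict, trail = explain(
    {"high_temp": True, "rising_pressure": True, "sensor_fault": False})
print(verdict)
for desc, vote in trail:
    print(f"  fired: {desc} (vote {vote:+d})")
```

A neural network offers no analogue of `trail`: its decision is spread across millions of weights, which is the contrast being drawn here.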

Don Ong:

So, I'd like to challenge that a little bit. Historically, we all know that a simple model will allow explainability, but it's usually very slow and the performance is bad. And then what we have right now with neural networks, deep learning, machine learning: it's much faster, the performance is good, but we can't explain it. So, is that a trade-off here with what Literal Labs is doing with Tsetlin machines? Is there a trade-off between a highly accurate AI model and making that model explainable? What's your take on this?

Jem Davies:

We are seeing an increase in accuracy all the time. So, if you look at the development of neural networks, initially they weren't as good. The big thing was AlexNet, just over 10 years ago, when it became better, more accurate at identifying objects than people, and since then they have just become more and more accurate, with fewer and fewer errors. And we're seeing a very similar trend with the Tsetlin machines. So, the first versions a few years ago were not as accurate as neural networks, and what we're finding now is that we're understanding how to use these tools much better, and we are now getting absolutely comparable and, in some cases, better accuracy on these tests than you get from neural networks, and the pursuit of that accuracy has not significantly led to a loss of efficiency.

Don Ong:

So, one of the goals of explainable AI is to allow non-experts to understand and trust AI decisions. As you mentioned, if we have to amputate the leg, we better explain why. So how do you ensure that your models at Literal Labs are accessible and interpretable by a wide range of users? How can people easily understand?

Jem Davies:

That's a most excellent question. And the first thing, like all security and trust, is the root of trust. You have to start from a good place, and some of that will be around the example data. So, just like with neural networks, you can end up with bias in your model if you train on the wrong data. You know, if you only ever trained on pictures of ginger cats, you're never going to recognize tabby cats, and it's really that simple.

Jem Davies:

You have to ensure that your example data, the training data that you use, spans the range of things that you wish to cover, and that's not just the variables but also the labeling. So, for example, if we look at identifying from x-rays, yes, you want a large number of x-rays, but they've also got to be labeled correctly. You know, this is a good leg, that's a bad leg, this is a problem. And so accuracy has to be guaranteed throughout the chain of working: going back all the way to the data, all the way to the labeling, and then to the process of creating the model based on that example data.

Don Ong:

So, diving into that a little bit more: in the AI community we very often debate how much explainability is enough. How do you determine the right level of explainability for your models at Literal Labs? How do you measure that?

Jem Davies:

What we provide is the ability for you to dive into really as much explainability as you want. My personal view is most people won't care. Most people will work from a sort of brand or company trust, like it's built by IBM, it must be good, that sort of thing. I would expect most people in the public domain to work on that sort of basis. But experts will have the tools to dive in and backtrack through the model and say, why did you reach that decision? And we can provide them with as much fine-grained detail as they will need.

Don Ong:

So, I think this is answering my next question, which is that some experts will argue that AI explainability could lead to information overload, especially for the end user. So that would be how you handle it: everyday users go by branding, it has to be good, and then the experts will decide how much information they're going to take.

Jem Davies:

Yeah, that would be my guess. I mean, we're predicting really how it gets used in the marketplace in years to come.

Don Ong:

So, you mentioned earlier about the regulatory bodies, regulators they're looking into this, especially in the healthcare, in the financial services and in law enforcement. So how is Literal Labs preparing or influencing these evolving AI regulations?

Jem Davies:

Well, particularly in the UK, there is a consultation process going on right now with the government departments about how AI should be regulated and we're contributing to that process.

Don Ong:

Cool, so you're part of the conversation with them.

Jem Davies:

Well, me personally no. But yeah, we are, we and the company are, yes.

Don Ong:

Fantastic. So, I'd like to lean into the next part, which is about energy efficiency. As we all know, AI models, particularly those powering modern data centers right now, are consuming massive amounts of energy, and it costs millions of dollars to run these data centers. And as AI models grow in complexity and size, the energy requirement gets larger and larger, so it's becoming a major concern in AI development. So, can you explain why AI, especially large language models, are consuming so much energy?

Jem Davies:

The sort of state-of-the-art large language model is a huge bunch of matrix arithmetic, and when I say huge, they're talking about 10 billion parameters. So, what that means is we are trying to multiply and add matrices containing 10 billion entries, repeatedly, because you go through deep layers of these matrices to get through. And that is an awful lot of arithmetic, that is an awful lot of CPU cycles to execute to do that multiplying and that adding. And so what we are finding now is that with the training costs of these large language models, where you run through the process, get an answer, then go back and modify it to do better, and you keep iterating, going through and through, they're talking about $100 million to $1 billion just to train these models. And then if they are models that get used a lot in actual inferencing later, the energy usage for these things is absolutely massive.
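As a back-of-envelope illustration (editor's arithmetic, not a figure from the episode): a dense layer performs roughly one multiply and one add per parameter per input, so the 10-billion-parameter figure translates directly into arithmetic operations per inference step.

```python
# Back-of-envelope sketch (not figures from the episode): a dense layer does
# roughly one multiply and one add per parameter per input, so parameter
# count directly sets the arithmetic cost of each inference step.

params = 10_000_000_000       # the 10 billion parameters mentioned above
ops_per_step = 2 * params     # one multiply + one add per parameter
print(f"{ops_per_step:.1e} operations per step")
```

Training repeats steps like this across huge data sets and many iterations, which is where the cost figures come from.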

Don Ong:

So, you mentioned earlier that Literal Labs' model, and the hardware that you're designing, is energy efficient. So, can you talk about how the Tsetlin machines that Literal Labs is designing are helping to address the energy efficiency issue?

Jem Davies:

So, the Tsetlin machine approach is fundamentally different to a neural network. A neural network is a lot of matrix arithmetic, large matrices multiplied by large matrices, adding in other matrices, whereas the Tsetlin machine approach, based on propositional logic, is composed of simple logical operations like ORs and ANDs, and these, of course, are the lowest-level components of silicon-based semiconductors. A multiplier is constructed out of thousands of these individual gates, so you can immediately see that it's thousands of times more expensive to use, and so the Tsetlin machine approach is inherently simpler. And then combine that with our experience and some of the technical experts that we've employed in designing low-power electronics, and you can see that the hardware approach will be the ultimate in low power. But we have a software approach first of all. We can run a Tsetlin machine on existing hardware today, and we get big efficiency gains out of that.

Don Ong:

So, you're saying that Literal Labs is building both: the Tsetlin machine solution is both software and hardware, so potentially you could be replacing what we have today in the data center.

Jem Davies:

It's a stretch goal, but yeah, why not? I mean, I think it's a phased approach. Firstly, we will produce software to enable people to try it easily and cheaply, to build it into their workflows, and to see the advantages of this approach. In parallel with that, we are also designing the hardware processor, which will be even more efficient.

Don Ong:

So, what you're essentially saying about the energy efficiency of using Tsetlin machines is that because your logic is vastly simpler, when you're processing the same amount of data, you use less energy.

Jem Davies:

Yeah, that's exactly it.

Don Ong:

Great. So, I also understand that Literal Labs, with Tsetlin machines, is also moving towards edge-based learning that reduces the dependency on cloud infrastructure. So how does this approach contribute to energy saving, and why is it important?

Jem Davies:

So, at the moment, most neural network training is done using data centers. It's using a massive, massive amount of compute, and the only place that power is available is in a data center. But as well as our inference process being simpler, our training process is simpler as well. We can actually do the training and the inference on the same device, i.e. your edge device, and as a consequence of that, much, much less energy is consumed.

Don Ong:

So, as AI scales across industries, the demand for energy-efficient models is likely to grow. What innovations or breakthroughs in energy-efficient AI do you foresee in the near future?

Jem Davies:

I won't make specific predictions, but what I will say is that history is on our side. So if we go back to AlexNet, when that was the breakthrough and it suddenly was better at recognizing things than humans: in the 10 years since then, we now have neural network models that are 100 times more efficient for the same accuracy. So, we're using much more efficient methods even in neural networks. So, I think I've got a following wind behind me.

Jem Davies:

I can't predict exactly what will happen, but there are an awful lot of clever people who will be focusing on making all AI methods more efficient. Certainly we will. We're not going to rest on our laurels. We want to make it more accurate, more efficient, use less power, go faster and retain the explainability, and so, yeah, I see no reason why we should not expect similar gains.

Don Ong:

Yeah, and with your background at Arm and what Arm has done over the years with low-power CPUs and GPUs, I think we can trust you on that.

Jem Davies:

I hope so. It's a culture thing. For everyone at Arm, and of course by now there are a lot of people who've been through Arm engineering, it's all about power. You think about power at every stage of the design. How can we use less power? How can we be more efficient? It's a cultural thing, and not everybody thinks like that.

Don Ong:

You mentioned that neural networks are becoming more energy efficient. But with Literal Labs' Tsetlin machines, you guys are ahead. So how far ahead are you?

Jem Davies:

Well, of course, until we get a real product out there in the marketplace, nobody's going to listen or believe what I say anyway, but it's a big number. I mean, we are seeing, you know, 250 times faster inferencing using Tsetlin machine models, up to 1,000 times faster using the new dedicated hardware and much, much, much less energy consumption. As I say, people will go yeah, yeah, yeah, until they see a real product, and so they should. The gains that we're making, the claims that we are making, are so big. People should be cautious, and this is one of the reasons why we're taking this parallel approach with software and with hardware, so that people can actually try and believe us.

Don Ong:

So, energy efficiency is such a big thing right now, and you know it's a key requirement for our mobile phones, for our wearables. With Literal Labs addressing this, do you think Literal Labs is going to get an edge over the neural network applications that are going to go on our mobile phones and our wearables?

Jem Davies:

I think the answer is a simple yes. We are working, as I say, on producing lower power, more efficient software methods and also lower power, more efficient hardware acceleration of those methods, and I think once people see how efficient this can be, it's a no-brainer.

Don Ong:

They just have to use it, right? That's exciting. But do you have any specific case study where Literal Labs' energy-efficient solutions and models have been applied, and how much has it helped to reduce energy? Any client projects that are ongoing?

Jem Davies:

Yeah, we have some client projects which I can't name. But anomaly detection is a really big field, where you want to fit sensors out there in the world, in industry, in production lines, in pipelines, and you want low maintenance. You don't want to be changing the battery every six months; you want to just fit it and forget it. And the ability to detect something unusual happening, either in pipeline flow, which could be a leak, or in some cases with audio detection, where using microphones you can hear the sound of a leak: that sort of operation we're having great success with. Because the energy consumption, in those cases of devices that might be deployed for 10-year lifetimes, is absolutely vital. If you have to keep replacing these devices in a pipeline every six months, it's just not feasible, it's not useful, nobody's ever going to fit it. If they can say maybe 10 years, I could conceive of that.

Don Ong:

So, you're touching on sustainability. Looking forward, how do you envision all this energy saving that you just touched on playing into a sustainable AI ecosystem? What else can you tell us about that? What is Literal Labs doing, and how is it impacting and enabling a sustainable AI ecosystem?

Jem Davies:

Well, for those of us who've been working in low power for a number of years now, longer than I care to speak about, we've been at this for longer than the word sustainable has been used. Sustainable is a relatively modern word in this context, and actually we've been all about using less power for decades, literally decades, and it is always the right answer. It enables you to get more processing in a smaller space. It enables you to build things that are physically possible to build, because at some point energy consumption just becomes too much; it breaks the laws of physics. And so it is a way of doing things, and it's the gift that keeps on giving.

Jem Davies:

You use less power, so your device gets less hot, so you have to use less air conditioning to cool it, or you can build it in a portable device that doesn't make your hand hot or, worse, your ear hot, and the battery life lasts longer. And, you can do things faster because you can get more compute into the same space. It's a virtuous circle, always has been. It's what a number of us in this space have been working on, as I say, for decades, and now, of course, as people start realizing what a percentage of global electricity consumption is going on compute and compute-related issues: Suddenly this is fashionable. But we were doing it, but it wasn't fashionable.

Don Ong:

Yeah, and in the whole broader AI landscape, like you just mentioned, we try to balance low power against better performance. That's always a trade-off. So how do you ensure that energy efficiency is always prioritized without sacrificing other crucial aspects of AI development? Is that specifically what you're doing at Literal Labs?

Jem Davies:

Very rarely in engineering do you, as an engineer, get given one constraint. You're always given multiple constraints. You can only spend this much power. You can only have this many engineers to design it. You have to get it out by Christmas. Whatever it is, there are multiple dimensions of constraint and, to be honest, this is just what we do. So, we will balance accuracy against trust, and explainability against performance. It's just what engineers do.

Don Ong:

One of the biggest challenges in AI today is the issue of bias, so AI models are trained on historical data, which can often reflect societal biases, and, as a result, AI systems might unintentionally discriminate based on factors like race, gender, and socioeconomic status. It has been a significant issue across many industries, so can you explain what AI bias is and why is it such a critical issue to address?

Jem Davies:

Well, if we go to my silly example of cats before, you know, if I only choose pictures of ginger cats, then I'm only ever going to recognize ginger cats, and it's like that. The choice of the data set that you train your models on is vital, and there's a lot of talk about bad actors, you know, creating bias. I actually think that's a relatively small problem. I think the accidental or stupid bias is much more of a problem, and so really what we need to be encouraging people to do is to think about that training set mindfully, thoughtfully. You know, it's not: here's a data set I just downloaded off the web, or here's this bit of web page I stole. You've got to think about this stuff, and if what you are trying to do matters, if it has consequences, then the training data and your training methods are important, because you can end up with bias. And I'm sure some people are doing this for various ends, but most people are just lazy.

Don Ong:

But we also know bias can occur at multiple stages of AI development, like you just mentioned: from the data to the algorithms, to algorithm design. So, at what stage? You believe data is the most probable place where bias is going to happen, versus the algorithm. What's your take on this?

Jem Davies:

Yeah, what we're seeing is that the example data is by far and away the longest pole. It's the one that needs to be addressed. We are hampered a little in terms of understanding the effects of bias in the algorithms, in neural networks, because, as I say, we still kind of don't really understand how they work. They work really as a black box. So most of the effort that has been put into checking the effects of those neural networks is to put boundary checks on the output: if you should have a statistically even distribution between here and here, then check what you actually get, and if your answers are skewed in one direction, we know there's something wrong. We don't even actually know what's wrong, but we know something is wrong. The advantage of the Tsetlin machine approach taken by Literal Labs is that we can actually see what's going on inside the models.

Don Ong:

Moving into that, how does Literal Labs ensure that the AI models being built with the Tsetlin machine remain fair and unbiased when handling very diverse data sets, particularly those that might be incomplete or imbalanced?

Jem Davies:

So, I actually need to slightly challenge the question. What we will do is produce software, and indeed hardware processors, that other people can use. So, if it's garbage in, it's garbage out. It's always been this way. If people use bad models or they use bad example data, they're going to get bad results, and ultimately we can't really control that. Of course, we will do our best and we will encourage with best-practice examples and with training of the people, not just of the model, and we will help to the best of our ability, but in the end, there's only so much we can do.

Don Ong:

That is true. And earlier you mentioned that the hardest biases to find are those that you put in unintentionally.

Jem Davies:

Unfortunately, the lower-level capability that we're providing, those models and indeed the hardware, is really below that level. So we can provide the tools to enable people to see what's happening inside the model, but if they don't care or they don't look, and they're quite happy with the answers, we can't stop them doing that.

Jem Davies:

Like most important things in our industry, it takes a multi-layered approach. We're doing what we can, and we're doing it in the right way, pushing in the right directions, but we need others to make sure they're doing the right thing as well.

Don Ong:

Yes, so you're basically enabling people with explainable AI so they can try to tackle all these biases.

Don Ong:

We have covered some fascinating ground today, from the innovative work at Literal Labs to the way you're tackling challenges like explainable AI and bias. As we look to the future, it's clear that the AI landscape is rapidly evolving, and I'd like to hear your thoughts on what lies ahead. What exciting advancements or trends do you foresee in AI, and how do you see Literal Labs playing a role in shaping that future?

Jem Davies:

I started when we called it machine learning, which I was always rather more comfortable with than AI. In essence, what we have here is a method of extracting useful information from enormous quantities of data. At the moment, people are focusing on enormous quantities of data from things like pictures and videos, questions like: is somebody trying to force my front door? That's a real compression of a vast amount of pixel data into a very simple piece of information. We're only scratching the surface of that. There are many machine learning techniques, not just neural networks and Tsetlin machines; there are the Gaussian methods as well, a whole bunch of things still being worked on and researched. People are applying these techniques to new areas, and I'm really excited to see what comes next, because when a new tool like AI comes along, what tends to happen is people first do something fairly obvious, like identifying pictures of cats, and then along comes the other stuff, the stuff you haven't thought of. It's like: oh, wow, you can do that? That's amazing. And this happens time and time again with these enabling technologies.

Jem Davies:

When mobile phones came along, it was like: well, if we really miniaturize this and get this radio and this digital stuff to be sufficiently low power, then we can put a battery-powered telephone in everyone's hand. Nobody at that stage foresaw handheld GPS, handheld maps, or putting a really low-power, high-quality camera in it. Nobody guessed: we've got these really low-power accelerometers, we could put those in as well and do gait analysis of people's running styles. Again, literally nobody saw that coming. And indeed, a smartphone of today versus, let's say, the iPhone of 2007, or indeed a Symbian phone from three years before that: they're unrecognizable. Yes, we call it a smartphone, we use the same word, but they're unrecognizable. And so I get really excited about that enabling technology.

Jem Davies:

So, if you put this technology in the hands of some smart people, and there are a lot of smart people in the world, you go: here, we can do this, what can you do with it? That gets me really excited. And time and time again, it's things I wouldn't have thought of; it's things the company I've been working in wouldn't have thought of. You get this layered approach: we can do this, we're really good at this, okay, here you are, knock yourself out, what can you do? And oftentimes the answer comes back: wow, I would just never have thought of that, but isn't that amazing?

Don Ong:

Yes, I totally agree. So, now looking to smart people like yourself and the smart people at Literal Labs. Looking back on your career, you played a very significant role in major technology advancements at Arm, and now at Literal Labs. So what excites you most about the work you're doing at Literal Labs, and how is it shaping your vision for the future of AI?

Jem Davies:

I think we will see a transition to using Tsetlin machines, much like what happened with neural networks. If you think about pre- and post-neural networks, the world changed: the use of neural network methods to do AI processing is really quite radically different. And I think we will see a similar change from the use of Tsetlin machines from Literal Labs, and we will look back and go: yeah, it all came from there, it all came from those smart guys.

Don Ong:

And so, with what Literal Labs is doing with edge-based AI, decentralizing a little from the cloud data center: how do you see edge AI shaping the future? Do you think this is going to drive a major transformation away from the data center?

Jem Davies:

I think it will in some cases. The great advantage of the data centers is that they usually have massive amounts of data right next door. If you think about Instagram's data center, it has a massive number of pictures of cats. So, for some applications, geographical proximity to that training data will always be an advantage, but there's no reason why a Literal Labs processor couldn't be present in that data center as well. But for other cases, where you can train your handheld, battery-powered device using the data around you: that, I think, is going to be transformational, because at the moment you can't really do that with neural networks. There's been some work done on things like reinforcement learning, which sort of helps tweak and optimize the networks, but actual ground-up training on the device is really quite hard at the moment, and it looks like we can enable that.

Don Ong:

That's cool. And we mentioned earlier that explainability is going to help with regulation, which has become a very hot topic worldwide. What do you think are the biggest challenges and opportunities as governments start to regulate AI technology more closely?

Jem Davies:

The problem with any government regulating any fast-moving technological field is that they're always going to be shooting behind the puck, because if it takes you two years to produce a law, which is actually pretty quick in some places, then the technology has just moved on. So, I think some regulation is required, but it is a difficult problem. It is very hard to regulate correctly, and it's very hard to regulate something that is in the process of being invented. I don't have the answers, sorry.

Jem Davies:

If I knew what the answer was, I would tell you.

Don Ong:

No problem. So let's move away from regulation and all that. What do you think about quantum computing? What impact will it have on AI research and development in the next decade, and is it going to affect what Literal Labs is doing?

Jem Davies:

So, quantum computing is personally fascinating, but it's still very early days. I mean, I can go out and buy a quantum computer, but I still can't really use it. There's an awful lot of work to go to make it accessible and usable, and it's not until you have that capability that researchers can try to build something on top of it, as we were talking about.

Jem Davies:

So, the big focus of quantum computing is cracking cryptography, cracking codes by factorizing numbers into primes, and people say: well, if you could do that, then we could also do protein folding, we could model proteins and see which ones work. But it's still very, very early days for that sort of thing. The effect of quantum computing on AI is very hard to envision, because it's a computing capability beyond what we had with, quote-unquote, ordinary computing. It might enable things to be done that we can't currently do in an acceptable timeframe or power budget. But we don't know. And having to have vats of liquid helium to power our computers: that's not really a portable handheld device just yet. There's a lot to be done on that one, but I'm excited, as everybody is, because this is frontier stuff. I can't see the other side of that frontier. Once we get that computing capability into the hands of clever people, I guarantee something interesting will happen.

Don Ong:

Yes, I believe that too. And talking about getting technology into the hands of clever people, and going back to the roots of Literal Labs, which came out of a university research lab: AI has a very strong presence in both research and academia. How do you think partnerships between industry and universities will evolve to accelerate AI innovation?

Jem Davies:

As ever, the big problem to be overcome is understanding the other. The university doesn't understand industry; industry doesn't understand the university. I exaggerate wildly for effect, but the best examples I can see are where those two institutions have worked hand in hand, and each has respected, not just understood but respected, what the other does; quite often there's been a lack of respect across that divide. It is as much a skill, and as much of a hard problem, to raise a lot of money from investors, build an engineering team, set some boundaries on product design, actually get a product out there, and sell it successfully as it is to invent something in the first place. So what we need, as I say, for the best results, is for those two to go hand in hand.

Jem Davies:

And the whole process of technology transfer from universities into startups or spin-outs still has a lot of friction in the system, and I wish there was less, because we have great research, great researchers, innovators, inventors. Utilizing that for business, for products that will do good, things we can look back on and say: yeah, I was proud of that. That just becomes hard when you can't get the stuff out of the university in the first place. And I'm not solely blaming the universities; there's blame on both sides. I'm not picking sides in that fight.

Don Ong:

Right. So, what do you see as some of the future risks or challenges associated with AI that aren't getting enough attention today? And how should we prepare for them?

Jem Davies:

I think one of the risks at the moment, with the costs of training these models being so high, is that only certain people can afford to do it. That frightens me. I don't like that. What is going to democratize that? So I think that's a risk, and one that Literal Labs will help ameliorate.

Don Ong:

Great! I'm looking forward to that. If we can make everything much cheaper and get it, in your words, into the hands of more smart people, we will see a lot more innovation and a lot more exciting things happen. One last question as you look to the future: what are some of the most exciting projects or partnerships on the horizon for Literal Labs, and how do they align with your vision for the next generation of AI?

Jem Davies:

Well, we have a number of pilot projects underway at the moment. I talked a bit about monitoring of processes and pipelines; I think those are really exciting. Ultimately, stopping water leaks probably isn't going to get you many headlines in the newspapers, but it is of critical importance: even in the developed world, anywhere from 10% to 60% of the water that comes out of the treatment works is lost to leaks before it reaches households. And I don't expect many headlines from "Literal Labs helps clean water arrive at town"; it's not really a big screamer of a headline. But it would excite me, and I would look back on it and go: yeah, I was proud to be part of doing that. There are lots of other people doing the real hard work.

Don Ong:

I really appreciate what Literal Labs is doing. The intersection of explainability, energy efficiency, and bias in AI is where some of the most exciting developments are taking place today, and startups like Literal Labs are leading the charge, addressing these pressing challenges with innovative solutions that promise to reshape the AI landscape. And as AI continues to transform industries, these three topics will become critical to ensuring that AI is not only powerful, but also ethical, transparent, and sustainable.

Jem Davies:

We couldn't agree more.

Don Ong:

Do you have any last piece of advice for our audience, for our viewers?

Jem Davies:

Watch this space! And it's been an absolute pleasure, Don. Thank you very much for inviting me on.

Don Ong:

Thank you so much, Jem, it was a great pleasure having you on our show! And that, ladies and gentlemen, brings us to the end of this fascinating conversation with Jem Davies. Today, we explored the cutting-edge work happening at Literal Labs, diving deep into the realm of explainable AI and the importance of transparency in machine learning models. We also touched on the critical need for energy efficiency as AI continues to scale, and how addressing bias in AI is vital for building more equitable and reliable systems. As always, the journey of innovation is ongoing, and it's conversations like this that remind us of the immense potential and responsibility in shaping the future of AI in the semiconductor industry and beyond. Thank you for joining us on Advantest Talks Semi. Stay tuned for more insightful discussions on the technology and ideas shaping our world. Until next time, stay curious and keep innovating.