BrainChip Holdings (ASX:BRN) Scheduled Market Update Presentation, March 2020

Company Presentations

BrainChip Holdings Limited (ASX:BRN) CEO Louis DiNardo, Founder and CTO Peter van der Made, and Founder and CDO Anil Mankar provide an update on the company.

Agenda:
  • Health, Safety, Communication and Productivity Update
  • Sales and Market Update
  • Financial Update
  • Product Development Update
  • Research Update

Full transcript:

Lou: Okay everyone, I'm going to jump in now. It's six o'clock here in California. Welcome to everybody in Australia and Asia, I know it's your morning. Unfortunately, we can't have folks participate from Europe. It's the middle of the night for those folks. Today I have with us Peter and Anil, our two founders, CTO and chief development officer. I'm sure you know each of them. They are going to make some comments on product development as well as research. Clive, if there's any problems with audio just let me know.

Clive: Will do.

Lou: It's always a little bit uncomfortable because I'm sitting here looking at a screen and there's the better part of 200-plus people listening. So, if there are any problems please let me know. Firstly, I would like to say that throughout the presentation I have tried, we have tried, to address all of the questions that have come in through email and the update. If we miss anything, I will certainly try at the end of the call to make sure we haven't missed anything, but if we have missed anything, certainly just reach out to me directly and I'm sure most investors have my email address. You can reach out to me, you can reach out to Roger, Peter, or Anil. [inaudible 00:01:17] help fill in the blanks. Also, I'd like to thank the folks at Financial News Network, they do a great job helping us orchestrate this. This is not an easy task, I'm in northern California, Anil is in southern California, Peter is in Western Australia. So, I'm hoping this goes well. We have Peter live on the phone as well as Anil. And for certain slides that we go through I'll turn the dialogue over to them.

I am going to move forward. This is the standard disclaimer, you folks have seen it time and time again. But it is a prerequisite for us as a public company. Read it at your convenience. Many of you know exactly what it says, but I'm not going to spend a lot of time on it.

A quick agenda. We are going to talk a little bit about health, safety, communications, productivity as an update, because there have been many, many questions about what's going on with Corona. Frankly, you'll see we will talk about it in detail; it's really not been disruptive to our business, it's not been disruptive to communication, it's not been disruptive to productivity. A little bit about sales and market update. The financial update is going to be very cursory because, as you probably know, we pulled this call in. We would normally not update until after the 4C comes out, which would be late April; I think April 30th is the 4C. And then we would do an update appended to the 4C in a conference call. But there has been so much curiosity, and I think rightfully so, about where we are with the development of Akida, where we are with sales and market traction, and the implications of what's going on with the global issue with the pandemic or coronavirus, that we decided that we would pull this in and provide an update now. Not much in the way of a financial update; we still have to close the books on the quarter, the quarter's not even over. But we will be closing the books very shortly.

 Anil will address the product development update. And Peter will address the research update. Again, questions, we've tried to cover the questions here, if we miss anything, absolutely feel free, as I know you all do, to send us an email.

Let's go forward. On this issue of Corona, I'm not going to dwell on this a lot because, frankly, it has not had a great impact on us. On a personal level, of course, I've got an 88 year old mother, I've got a stepson with diabetes. But with respect to how we run our business, because we are a relatively distributed company to start with, northern California, southern California, Toulouse in France, our partners in Shin-Yokohama and Tokyo, our communication in some respects, I feel, has actually accelerated. People are working from home. With what's going on in California, New York, New Jersey, Illinois, and with international travel restrictions, we instructed our employees to work from home several weeks ago. It has not been a disruption in communication. In fact my email lights up at two o'clock in the morning, three o'clock in the morning, all throughout the day. People are stuck at home, and I think we are actually getting a little bit more productivity, a little bit more communication than people in the office taking off for lunch and doing what you do in a normal office day.

Our communication with partners has been relatively unaffected. In fact, Socionext is not in lockdown. They are actually going to work every day, they are in the office in Shin-Yokohama. So, that partnership has not been affected at all. Certainly, there have been some challenges with respect to customers, it varies by region. I was supposed to be in Shanghai and greater China for the month of February, Seoul, Korea, as well. We have lots of activity going on. But with engineering dialogue, conference calls, email traffic, I would say there has been little impact. Certainly, a different kind of communication. But nothing I think really affects how we're making progress with target customers.

A question that was asked by several was, "What's the implication for the supply chain?" As we move into wafer fabrication, as we move into back end assembly and test operations, I just literally got off the phone with our partner, there is really no supply chain interruption. This is TSMC's 28 nanometer fab in mainland China, frankly probably the safest place to be; it's a class 10 facility, so you know you can't get sick in that place. But there's no indication of supply chain interruption, either through wafer fabrication or back end assembly. So, nothing to be concerned about on that front.

In summary, I think productivity is really unaffected. Ironically, in some respects, with communication, because we are all working 24 hours a day around the globe, maybe productivity is up rather than down. The lack of ability to travel, certainly there are some meetings that should be done face to face, but at this juncture it's really not affecting our performance.

Move on to the next slide. So, a little bit of an update on sales and markets. I'm keeping this relatively cursory. Certainly AI Edge, as I think you all understand, is our target market. That includes automobile manufacturers, module suppliers to automobile manufacturers, and tier one global manufacturers for smart home and smart city.

Tier one automobile manufacturers: we've got Detroit, we can't name names, but we've got Detroit where we are close to a proof of concept agreement. That is the first step. We'll get a little bit of money, but more importantly it's validation that tier one automobile manufacturers are interested in developing a solution which includes an Akida device in a module or some part of the infrastructure in the automobile. Automotive module suppliers are probably the most target rich environment for us. Most automobile manufacturers don't develop their own modules. They push that down to tier one module suppliers. We have a proof of concept, it's been frustrating, I've talked about this before, but it's been frustrating. It's a large European tier one module supplier. The contract's taken four months, almost five months, now. But it's moving forward.

In the module arena, you could think about LiDAR, you could think about ultrasound, you could think about radar, you could think about standard pixel based cameras. In our case what we are finding is a sweet spot in the LiDAR environment. So that's moving forward very nicely. Unfortunately, when you are dealing with large multinational conglomerates the legal issues sometimes take longer than you would like. But those are being resolved quite effectively now.
Lou: Smart home, largely in this case South Korea for us, but smart home is a big deal for us. But Anil will touch on some of the aspects of what we can do with keyword spotting, what we can do with incremental learning. These are things that the smart home manufacturers, the global manufacturers in smart home as well as smart city, we should probably put those two together, are very excited about.

Okay, so now let's talk about Socionext. We've put a press release out and Socionext actually is releasing that same press release. They translated it into Chinese and it'll go out on the Chinese news wire, I think today or tomorrow. This is a reference design. Socionext has a very high performance, 24-core Arm processor. You can think about it as an Intel or AMD processor, but in this case it's a home grown Arm core, 24 cores. What they do currently in this reference design is they offload the multiply-accumulates, or the mathematics that are required, to a DLA, a deep learning accelerator. All it does is the math. What they have found very attractive, and what they are offering [inaudible 00:11:04] in concert, is to move the entire network onto the Akida device so that the SynQuacer doesn't have to run the network; it can do what it is expected to do, which is the analytics, the middleware, the user interface. Let Akida do all of the hard work of the neural network. In our case we don't have to do all of the multiply-accumulates, but it basically offloads the SynQuacer from having to run the neural network.

It's a great application. It happens to be an edge server, a video analytics application. They will market directly to OEMs and they will market, in concert, to ODMs, which are the subcontractors that build for the OEMs. The revenue will come directly to us. The reference design will show our chip, it will show the SynQuacer, and whoever builds it, whether it's the OEM directly or the ODM, will come to us and buy the chip directly. So it's not a reseller situation. They will come directly to us. But we're also exploring further commercial opportunities in cooperation. As we've talked about previously, Socionext is the second largest ASIC supplier in the world. They know our IP very well. The opportunity to offer our IP in their portfolio as they approach their customers, that's another step in the process, and potentially they could be a reseller of our IC. We're not going to be able to cover the globe, whether it's Japan, mainland China, Southeast Asia, we're not going to be able to cover the globe ourselves. Having a partner such as Socionext would be a big benefit.

I think this has been a very powerful validation that the Akida device in concert with this 24-core Arm processor can be a very, very powerful, it's interesting to use the word powerful because we are trying to use low power, but a very powerful solution for the OEM and ODM requirements for edge servers, in particular in this case, the video analytics arena. I think I have touched on several of the questions that were asked about the announcement. Again, if I missed anything or we missed anything, please feel free to send me a note.

Again, financial update, I'm not going to do a lot here because we are, we have a 4C and that's scheduled for the 30th of April. And we will talk about cash then, and we will talk about expenses, we will talk about forecasted expenses. Rest assured that we are maintaining continuous expense control. We're controlling our head count. That doesn't mean that we are reducing our head count, it means we are controlling our head count not adding new heads. One of the interesting things that I think will resonate with shareholders that are interested in kind of the technical nuances of the manufacturing process is that we have decided that the wafer fabrication will be done on a multi project wafer. That is, historically I've called it a pizza mask, take a wafer, we get a slice, somebody else gets a slice, somebody else gets a slice. It expedites our turn time, our lead time through fab and it reduces our costs. I'll touch on that in a moment.

Travel. Travel expenses are going to be down dramatically because there is no place we can go. You know maybe that's good, maybe that's bad. But it certainly does help the cash flow, and we have certainly taken a great deal of effort in reducing discretionary legal advisory expenses.

Okay, so here's my slide on which there were lots of questions: what do we do with respect to capital? We will require revenue and/or investment to fulfill our mission of commercializing what is really groundbreaking technology. We live in a world, because we are a public company, where if we were private and venture capital backed, people would be throwing money at us. In this case, we want to be very, very careful and very, very circumspect about how we raise money so that we can reduce the cost of capital as best as possible and increase shareholder value at the same time.

So we are looking at strategic investment. We are looking at debt. We are looking at convertible debt, we are looking at structured finance, we are looking at equity, and certainly we are driving for as much revenue as possible. But I think it would be disingenuous for us to not be upfront and say, "Yes, we will probably have to do a capital raise." There will be some offset to that, to minimize it, with whatever revenue we can bring in. But we want to reduce that cost of capital as much as possible and let our shareholders enjoy as much of that benefit as possible, as well.

Financial. Okay, this is just giving everyone a sense of the diversity, where we're located and what we're doing. We've got Silicon Valley, we've got sales, marketing and the executive office here. That's the yellow up in the top left hand corner. Southern California is where Anil runs, fundamentally, all of the hardware, software and research. Toulouse, France, again software and research. Hyderabad, India, software development; that's a contract services organization right now. Perth, Western Australia, which Peter will speak to in a moment, a new innovation center. That is for advanced research as well as applied research to support customers. We are evaluating Shanghai, China, and probably would have moved more quickly had China not shut down on us in the last 60 days. Head count: 34 full time, about six contractors, one in Western Australia, and the others in Hyderabad, India.

A bunch of questions about competition. Specifically there was an article about Intel and what they are doing with the second generation of [inaudible 00:17:54]. Look, a CPU is slow, it's inefficient and it's an inefficient use of critical resources. A CPU should be doing other things. That's why you see DLAs, deep learning accelerators. That's why you see GPUs being used to offload the matrix multiplication that is required by standard deep learning architectures. With SoCs, or system on chip, ASICs from other reported competitors, what we're seeing is inefficient use case coverage. There are companies that are focused exclusively on voice, and those are small networks, it's not scalable, it can't be used in any general purpose sense. There are some that are more targeted at video applications. There are some that use esoteric processes. They are using analog multiply-accumulate functions or they are using subthreshold logic. These are not portable to different fabs, they are not scalable to different geometries.

Akida, on the other hand, is highly efficient, it's ultra low power. And these are the things that matter when we talk to the major league customers, potential customers in Korea or China or even in Europe or the US: highly efficient, ultra low power. And we are talking about hundreds of microwatts for certain applications, hundreds of milliwatts for most applications. And even when you get to the very, very large networks, a couple of watts.

If you use a CPU you are burning critical resources; if you use a GPU you're burning tens, if not hundreds, of watts; and a DLA really is not scalable across multiple applications. So Akida is flexible, it runs the complete network, and it does not require a host processor, does not require external memory. It runs on its own, and it offloads the CPU to do what it is supposed to do, without the inefficient use of power of a GPU, the inefficient system design of a DLA, or the inefficient use case coverage of an SoC.

Here, I'm going to turn it over. I'm hoping that we've got this technology down. Anil, I'd like you to jump in here if we've got you live.

Anil: Yeah, you can hear me okay, right?

Lou: Yes, perfect.

Anil: Yeah. So, product development update. We are actually done with all the work that we were doing with Socionext and they are ready to hand over what is called the GDSII data, so that they, essentially, can make the mask. That will happen next Tuesday. We'll be on an MPW, that's the multi-project wafer that Lou talked about; it gives us faster turnaround time. We will actually also bring up the back end process for assembly and all that with Socionext [inaudible 00:20:56]. We will also have a couple of our board designs and all of the software ready when the chip comes back. All of those things are on track now, and you see the schedule that we are running to right now.

Lou: Excellent. Let me interject. Again, here we touched on the multi-project wafer, which, by default, is a hot lot. So it goes through [inaudible 00:21:29] faster than a normal wafer. And we talk about engineering samples in Q3. Our target is very early Q3. Q3 is kind of a round number, but given the cycle time through fab and the cycle time through back end, this should be really early Q3.

Anil: Yeah. That's correct. We are on track and I think we feel very good about where we are with the project.

Lou: Okay, so I jumped to the next slide. We put a big circle around the center section. Maybe you could describe that middle ground between traditional data-based convolution at one extreme and, at the far extreme, where the world attaches itself to a native spiking neural network, and our opportunity and advantages in event-based convolution?

Anil: Yes. So there are multiple ways of solving the inference neural network. Either you can do a data-based convolution, where people do mathematics, multiplication, systolic multiply-accumulates; it depends on how many multipliers you have. So people have a thousand multiplier engines, or sixty-four thousand like the Google TPU has. But those are just accelerators. The neural network actually runs on the CPU. And on the other extreme it's like Intel Loihi or IBM TrueNorth, a pure spiking neural network. You can actually do a spiking neural network, but you need spikes to be coming in. Most of the time you are getting data.

What we have done with Akida is actually a spiking neural network, but we take the middle ground where we can port a standard CNN and bring it into an event domain, which is called the spiking domain. So what we are able to do today is take a neural network, a DNN, convert it into the spiking domain and run it very efficiently, and at the same time, in the future, we can run spiking neural networks on the same hardware. We have actually talked about the conversion flow, so we can take a standard neural network and map it to our Akida.

We can take any network right now with our flow. We take advantage of the sparsity of events, both in activations and weights. Most of the neural network hardware out there cannot take advantage of sparsity. And because of that, we can run the full network on our Akida. We can go from 500 microwatts to a few watts. So what we have done is, we have taken our neuromorphic computing elements, and what we are marketing solves the problems that people are having, which is doing convolution on sparse events very efficiently in our hardware. So our IP is small, high performance, very low power, and of course we have learning, which I will go into detail on in my demo.
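
To make the event-based convolution point concrete, here is a minimal NumPy sketch of the idea: only non-zero activations ("events") trigger multiply-accumulates, so sparsity in the input directly reduces the work. This is an illustration of the concept only, not BrainChip's hardware or software; the input, kernel, and sparsity level are made up.

```python
# Illustrative sketch: dense vs. event-driven 2D convolution.
# Only non-zero activations ("events") trigger MACs in the event-driven path.
import numpy as np

def dense_conv2d(x, w):
    """Ordinary 'valid' convolution: every output pixel costs a full kernel of MACs."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

def event_conv2d(x, w):
    """Event-driven convolution: iterate only over non-zero activations and
    scatter each one's contribution into the output map."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for r, c in np.argwhere(x != 0):          # each "spike"
        for di in range(kH):
            for dj in range(kW):
                oi, oj = r - di, c - dj       # outputs whose receptive field covers (r, c)
                if 0 <= oi < out.shape[0] and 0 <= oj < out.shape[1]:
                    out[oi, oj] += x[r, c] * w[di, dj]
    return out

rng = np.random.default_rng(0)
x = rng.random((32, 32)) * (rng.random((32, 32)) < 0.1)   # ~90% of pixels are zero
w = rng.standard_normal((3, 3))

assert np.allclose(dense_conv2d(x, w), event_conv2d(x, w))
print("events:", np.count_nonzero(x), "of", x.size, "pixels")
print("MACs dense:", 9 * 30 * 30, "  MACs event-driven (approx.):", 9 * np.count_nonzero(x))
```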

So we are targeting a market where today people want to do inference on the edge device, in the power budget that they have. Not all the spiking technology can do that, so we actually are bridging the gap between what DNNs do and what spiking neural networks do. We solve this all with event-based convolution on Akida very efficiently. Of course, the same hardware also does... I'll also show you a spiking neural network that learns on the device.

Lou: Okay. So, moving on to the next slide.

Anil: Yeah.

Lou: Maybe describe a little bit more about the workflow, because the ADE, the development environment, the field-programmable gate array which provides emulation, and soon-to-be engineering samples-

Anil: Yes.

Lou: Are really the flow the customers are looking for.

Anil: Exactly. So we are quite unique in the industry in that we have the complete software development environment already, which also has a chip simulator that actually emulates what the chip can do. And this actually uses a complete industry standard flow using [inaudible 00:25:58] and Python, so people who are using standard [inaudible 00:26:02]. We can actually map any of the networks that people have onto Akida in the software simulation and evaluate how many Akida cores we'll need, what the performance will be, what the power will be. So the customers that Lou talked about earlier, the automotive customers, they don't have to wait for our chip to arrive; we can completely analyze it now in our software. Actually, I'll show you simulation results from our software.

Now, the reason for doing that is, since we are flexible, we can scale our IP to whatever size is required. And our customers can run the different types of networks on our software and evaluate how big an Akida IP they will need, how many cores are required. That can be analyzed, and that helps them to really decide the size, power, and everything else for the IP that they will use.
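
As a rough illustration of that sizing exercise, the sketch below runs a hypothetical three-layer network through a per-layer core and power estimate. The per-core throughput, the energy-per-MAC figure, and the layer numbers are all placeholder assumptions for illustration; the real estimates come from the ADE simulator, not from this code.

```python
# Hypothetical sizing pass: estimate cores and power for a candidate network.
# All capacity and energy figures below are made-up placeholders.
from dataclasses import dataclass
from math import ceil

@dataclass
class Layer:
    name: str
    macs_per_event: int        # work triggered by a single input event
    events_per_frame: int      # measured activation sparsity for this layer

CORE_MACS_PER_SECOND = 1e9     # assumed throughput of one neural processing core
ENERGY_PER_MAC_J = 2e-12       # assumed 2 pJ per multiply-accumulate

def size_network(layers, frames_per_second):
    total_cores, total_power_w = 0, 0.0
    for layer in layers:
        macs_per_s = layer.macs_per_event * layer.events_per_frame * frames_per_second
        cores = max(1, ceil(macs_per_s / CORE_MACS_PER_SECOND))
        power_w = macs_per_s * ENERGY_PER_MAC_J
        total_cores += cores
        total_power_w += power_w
        print(f"{layer.name:>6}: {cores} core(s), {power_w * 1e3:.3f} mW")
    return total_cores, total_power_w

layers = [
    Layer("conv1", macs_per_event=9 * 16, events_per_frame=20_000),
    Layer("conv2", macs_per_event=9 * 32, events_per_frame=8_000),
    Layer("dense", macs_per_event=256,    events_per_frame=1_000),
]
cores, power_w = size_network(layers, frames_per_second=30)
print(f"total: {cores} cores, ~{power_w * 1e3:.2f} mW")
```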

We are unique in this. The complete [inaudible 00:27:13] is available right now. It's free, actually, from the website, and quite a few of our customers are using it, and we also use it internally to evaluate all of the network analysis that we have done.

Of course, all of this Akida IP has been tested completely in an FPGA for the last six months. We actually are going further, and have actually connected multilayer networks that fit on the FPGA, like six or seven layer networks, like keyword spotting. They're running on the FPGA, which gives a lot of confidence that the Akida chip that we're doing is verified, it works in the FPGA. We have tested all the learning and everything else on the FPGA platform.

Some of our customers, when they license the IP, might want to use our FPGA for verification until our chip is there, and of course they'll be using all the development platforms and evaluation boards that are currently being readied.

Lou: Okay. I'm going to move forward.

Anil: Yeah.

Lou: Folks, I hope this works, because we've got a live demo here that Anil's going to try to speak to. I may stop and start it so that he can catch up.

Anil: Before you start, Lou, let me explain what we have done. So, I talked about the ADE. We have actually taken a standard MobileNet [inaudible 00:28:35], a network that has been trained on a GPU or CPU, that is actually used to classify images. After converting it onto Akida and running it on the simulator, of course it does the classification from the original network. But I'm going to show you how the learning feature of Akida, which is our unique feature, is actually used to show that [inaudible 00:28:59] start learning.

So, what we have done is, the network does classification, but we have removed the last layer, the last classification layer, and replaced it with the Akida learning layer.
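
A minimal sketch of that workflow follows, assuming the removed classifier is replaced by a simple nearest-prototype layer over the feature extractor's embeddings. Akida's actual on-chip learning is a spiking rule inside the final layer, so this only approximates the behaviour shown in the demo, and the feature extractor here is a stand-in, not the converted MobileNet.

```python
# Sketch: pretrained feature extractor + a final layer that learns a new
# class from a single labeled example (nearest-prototype approximation of
# the on-chip learning layer; illustrative only).
import numpy as np

class OneShotHead:
    """Stores one prototype embedding per label and predicts the nearest one."""
    def __init__(self):
        self.labels, self.prototypes = [], []

    def learn(self, embedding, label):
        # one example is enough to add a class; a repeat example refines it
        if label in self.labels:
            i = self.labels.index(label)
            self.prototypes[i] = 0.5 * (self.prototypes[i] + embedding)
        else:
            self.labels.append(label)
            self.prototypes.append(embedding)

    def predict(self, embedding):
        dists = [np.linalg.norm(embedding - p) for p in self.prototypes]
        return self.labels[int(np.argmin(dists))]

def extract_features(image):
    # stand-in for the MobileNet body with its classification layer removed;
    # in the real demo this part runs converted on the Akida simulator
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    return rng.standard_normal(128)

head = OneShotHead()
head.learn(extract_features("background"), "background")
head.learn(extract_features("moose"), "moose")
head.learn(extract_features("police car"), "police car")
print(head.predict(extract_features("moose")))   # -> moose
```

Adding another object later is just one more learn() call with a single example, which is the retraining-free customization Anil returns to after the demo.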

Okay, Lou, please start the demo.

Lou: Okay, here you go.

Anil: So, object classification with incremental learning. Imagine this is a standard normal [inaudible 00:29:30]. And what they're showing is, you just give one object to the camera, and it will learn that this is the background. So we are giving it a label of "Background." Then we're putting different objects in, and we label them. The network is learning from this one single object. Of course, the network has to know what object we gave it. We are labeling it in the bottom row.

A moose, then we're actually showing a panda. And it is able to classify between different things. Then there's a police car, and we actually label it as a police car. And a cop.

So we are actually showing one example of each of these objects. Now we are testing it with the same objects, and Akida's simulator is classifying them from a different angle. It learned from only one side, and it can classify even upside down.

Lou: Sorry, my fault.

Anil: Okay. So we label the red car. Different objects. Police car. Continue.

Here it was having difficulty distinguishing between the cop and police car, so we added one more sample for police car. And now it can distinguish between cop and police car properly.

This is all learning on the device itself. Now we want to make it a little bit more difficult. See if we can actually show it a picture. It learned from the small toys. We show it the elephant picture from a calendar, and it's able to detect the elephant. Similarly, a tiger picture with a different background, but it detects that. Different scale, so the network is invariant to scale and other things, so that's what this already shows.

So think about why the Akida learning is important. You take a network, map it to Akida, it's running on the [inaudible 00:31:49], let's say a factory floor, and you want to classify different things; different factory floors will have different sets of objects that they want to classify. So once the network has been loaded onto Akida, you can show it, say, 10 objects on one factory floor, and some different objects on a different factory floor; they might have a different set of objects. That is the unique thing: you have actually completely avoided retraining. In a typical normal neural network, if you want to add additional objects then you have to take multiple shots, multiple data of that object, go back to the cloud, retrain it, and bring back a complete new set of [inaudible 00:32:32], and now you can have two more objects, but you had to go and retrain it.

What we have done with the on-chip learning in Akida is we don't have to retrain. The complete cycle of going back to retrain, we have avoided it, and you can just customize, personalize your network, personalize the device for different factory floors or different objects. You can do that.

Let's set up the next demo, which is effectively learning where we're taking spikes.

Yes.

Lou: Directly, and not having to do the conversion from a-

Anil: Exactly. So for the last demo that I showed you, we were taking a standard frame-based camera. For an HD camera you'll get 2 million pixels per frame, and you can do 30 frames per second. That's lots of data you have to analyze. But we were taking that data, and on the one-chip Akida, [inaudible 00:33:24] events, and we process it.

Now we have this demo that shows a different type of camera that doesn't send the full frame, but only sends the pixels with intensity changes, which are spikes.
 
Start the demo. This is a Samsung camera. We thank Samsung for forwarding this to us. It actually does send events directly, so we don't need to convert from frames to events in this case.
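
To give a concrete picture of what such a sensor delivers, here is a toy sketch that derives events from two synthetic frames by thresholding the per-pixel intensity change. It is an illustration of the event idea only, not Samsung's sensor interface or the demo pipeline.

```python
# Toy illustration: turn the difference between two frames into DVS-style events.
import numpy as np

def frames_to_events(prev, curr, threshold=0.15):
    """Return (row, col, polarity) for pixels whose log-intensity changed enough."""
    delta = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[rows, cols]).astype(int)   # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

rng = np.random.default_rng(1)
prev = rng.integers(0, 255, size=(480, 640))
curr = prev.copy()
curr[200:240, 300:360] = 255                   # a small bright region "moves" into view

events = frames_to_events(prev, curr)
print("pixels per full frame:", prev.size)     # 307,200 values every frame
print("events for this change:", len(events))  # only the pixels that changed
```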

Lou, can you start the demo?

Lou: Yes.

Anil: So, look at what it's doing here. We actually take a Samsung camera and we're doing hand gestures. Here, what we did, we took nine gestures and trained the network from the actual spikes that were coming from... Stop it a little bit. Can you pause?

So, what we did is, we actually trained the network with nine different gestures directly coming from spikes. Once it learned, now it is showing us a gesture it did not know, it's got [inaudible 00:34:31].

Now you see here, what's really happening is when a hand gesture like this comes in, only where the bright spots are, you'll get spikes for just those pixels in the camera. Sorry, now go back. Continue running.

Okay, so now, yes, he's now training a different gesture that the network did not know, and it will be able to... Is it running, Lou? Yeah.

Okay, leave it now. Don't pause it, just let it run, yeah.

Lou: Okay.

Anil: So we are showing a different gesture in front of the camera, and it's learning the first gesture just from the input [inaudible 00:35:22] coming in.

Now this is the second gesture it's learning. It's able to distinguish between those two gestures. Remember, these gestures were not pre-trained; the network was seeing them for the first time.
So, directly on-device learning. You can use this, basically. Actually, we work with [Tar Tar 00:35:55], they are using these gestures for controlling robots, the movement of the robot. Similarly, you can use different gestures for TV channels, channel flipping or increasing volume. You can interpret the gestures that are coming in.

These DVS cameras are actually very high performance because they send such a low amount of information, only the pixels that change. They kind of work like a 1,000 frames per second equivalent [inaudible 00:36:25] frames per second in information. They have good dynamic range. Very low power. The whole network that we are running for this takes about 600 microwatts. So it's a very good example of what we can do with Akida, and this is where we are working with our partners to really [inaudible 00:36:44] show them how much IP they can use and how the IP can be customized to their application.
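
As a back-of-envelope check on the data-rate point, using the 2-million-pixel, 30-frames-per-second figures quoted above and an assumed event rate for a sparse gesture scene (the event rate is an assumption, not a measurement from the demo):

```python
# Frame camera vs. event camera data rates (illustrative arithmetic).
frame_pixels = 2_000_000            # "2 million pixels per frame"
frames_per_second = 30
frame_values_per_s = frame_pixels * frames_per_second     # 60,000,000 values/s

assumed_event_rate = 200_000        # events/s for a sparse gesture scene (assumption)

print(f"frame camera: {frame_values_per_s:,} values/s")
print(f"event camera: {assumed_event_rate:,} events/s "
      f"(~{frame_values_per_s // assumed_event_rate}x less data)")
# The whole gesture network on Akida draws about 600 microwatts, per the talk.
```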

Lou: Let's move on. Peter, I hope you're live. Can you speak?

Peter: Yes, yes. I'm here.

Lou: Okay. Just a quick update for everybody. There's a lot of curiosity about what we're doing in Western Australia. So maybe you can speak to this slide and give some sense of what our intentions are and what our goals are.

Peter: Sure. At the Western Australia research center, we're looking at the future of artificial intelligence. We're looking at where the market will be, say, two years, 10 years from now.
So, that means that we are looking at building devices that are more like the brain. Chips that can make decisions and that can learn like human beings. As you just saw, in the current device we're already doing that to a large extent. We will expand on what we have at the moment, which we built as Akida one.

So in the WA research center we will perform advanced research into artificial intelligence, which goes well beyond recognizing an image or avoiding an obstacle. For instance, when something is encountered that the chip has never seen, it needs to take the right response.

So the next generation of AI is known as AGI. AGI stands for artificial general intelligence, which means learning more like a human being.
So Akida two will be an advancement of Akida one, with more capabilities, and each generation of Akida, so Akida three, Akida four, etc., will be an advancement on the previous generation. And each is developed with the vision of where the AI market is going. But we don't develop these things in isolation; we develop this in dialogue with the eventual users.

To do that, we will be employing a number of computational neuroscientists. The BrainChip research center will also work with companies and institutions in Perth. We are in discussion with several companies here now. And the aim is to make the BrainChip research center self-funded, through joint development projects, government grants, tax incentives, that sort of thing.

We will have four PhDs in computational neuroscience, and four PhDs with a background in AI applications. These people will be recruited from universities around Perth like Curtin, the University of Western Australia, Murdoch, and Edith Cowan University, which all have excellent PhD programs in neuroscience, robotics, neuromorphic engineering and neural networks. There's quite a lot of talent here in Perth, and I have already interviewed two excellent candidates whom I would like to have join the research center on very short notice, once we have secured some projects.
 
So, the facilities that we're looking at are not going to be in isolation. We will work with universities, things like the [inaudible 00:40:42] neuroscience center, and other institutions like E-Zone around Perth. We're in discussion with these people and we're looking at whether we can use some of their space. We'll need about 150 square meters.
 
We already touched on communications. BrainChip is a very distributed company and we have offices in the United States, Australia, and France. And we have always been well connected; we can connect through a virtual private network. I can connect as if I'm in the office in America. So, yeah?

Lou: I think we should wrap up. This has gone on a little longer than expected, but I hope it was helpful for everyone. For interested investors, we did put this together, and it will be posted on our website or lodged with the ASX. You can see there's been a great deal of activity and writing about neuromorphic computing generally and BrainChip specifically. So we just included this to make it easy for anybody to get to what's been published since, I think, going back to January.

But the Akida integrated circuit is scheduled for tape-out. Tape-out's kind of a misnomer, no one's used tape for the last 20 years, but the GDSII file transfer as well as wafer fabrication starts on April 8th. Intellectual property activity is currently very heavy; we've got lots of activity in Asia, the US, and Europe. And Peter just touched on advanced research.

I think we've covered all the questions. I actually have them all printed out here in front of me. I don't think we've missed anything. But if we have, I'm sure you folks are not shy, and you'll send us a note.
So with that, let's say, this was a one-off. I think it was important for us to have Peter and Anil participate, for us to have an update, not waiting for the 4C in April. But there will be a 4C in April, there will be an update, and we've got an AGM coming up. So there'll be lots of communication over the next weeks and months.

Thank you all for joining us and we'll talk to you soon.


