BrainChip Holdings (ASX:BRN) Year End Update 2019 Presentation

Company Presentations

BrainChip Holdings Limited (ASX:BRN) President and CEO, Louis DiNardo gives a year-end update.

Good afternoon, everyone. This is Lou DiNardo. I hope everyone can hear me. If anybody has a problem, they can send me a note or send Roger Levinson a note. Thank you for joining us on the 2019 Year End Update. We put a press release out yesterday. The update was lodged with the ASX yesterday morning. I hope you all had a chance to review it, and I think what you'll notice here is I'm using exactly that as the foundation for today's webinar, but we'll provide a bit more colour commentary as well as address a bunch of questions that were sent in, which I thank you for.

Before I start, I wanted to reach out and give a special thank you to Peter van der Made. Peter has dedicated a great deal of his professional career to developing Akida, our neuromorphic solution. I'm sure he takes great pride in the work that's been accomplished and the team's effort. I'd also like to thank Anil Mankar, our Chief Development Officer, who has worked tirelessly, literally eight days a week. He and his team have done an exceptional job of reducing the design to practice and getting the logic design done, and are about ready to hand off.

On a more sombre note, just reaching out to all of our friends there in Sydney, I hope you're coping as well as possible with those rampant fires. Ironically, in San Francisco -- the Bay Area, and California more generally -- we've suffered very similar challenges over the last several months, so I hope everyone is well and that this soon passes.

I'm going to jump into the presentation now. I'll try and keep you posted on what page I'm on if you're following along with a hard copy.

As we end our financial year of 2019, there are a couple of highlights that I'm going to touch on here; I'll also touch on some low lights, because we can't ignore those. The introduction of the Akida intellectual property for licensing to ASIC suppliers was a very big achievement, one that's got great momentum and steam behind it. Then there's the introduction of the neural network converter -- and this is important to note -- which converts CNNs to event-based CNNs, which is kind of a middle ground, and also supports native SNN translation. There's a great deal of activity going on with the conversion to event-based CNNs, and similar activity going on in the development of native SNN networks.
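
For readers curious what "converting" a CNN into the event domain means at the most basic level, the toy sketch below shows the core idea: continuous activations are thresholded into discrete events, so downstream layers only see the active positions. This is a generic illustration with hypothetical names and thresholds, not BrainChip's actual converter.

```python
# Illustrative only: the general idea behind CNN-to-event conversion.
# Continuous activations become sparse binary events; everything here
# (names, threshold, shapes) is a hypothetical example, not BrainChip's tool.
import numpy as np

def activations_to_events(activations, threshold=0.5):
    """Quantise a layer's activation map into binary spike events."""
    return (activations >= threshold).astype(np.uint8)

rng = np.random.default_rng(0)
acts = rng.random((8, 8, 16))           # hypothetical 8x8x16 activation map
events = activations_to_events(acts)
print(f"event density: {events.mean():.2f}")  # fraction of positions that spike
```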

Of course, I think you're all aware of the definitive agreement that we signed with Socionext for the Akida development and manufacturing. That process has gone very well; I'll touch on that relationship and where we are in that cycle in just a few minutes. A couple of very important patents were filed for Akida inventions. I'll touch on this in a bit as well, but these were provisional patents, which give us our priority date -- first and foremost the important thing -- and then we have one year from that provisional filing to reduce them to practice as utility patents. Each will spawn a number of utility patents.

Answering one of the questions that I got: having the patents filed provisionally does not impede the Akida device in any way when it comes out. Having those priority dates set was basically the most important thing, and I know Peter and our legal team are working on the utility patents. As you know, we raised about US$2.85 million through a convertible note that was issued, and an entitlement offer that in Aussie dollars was A$10.7 million.

I'll touch a bit on the expansion of our sales and marketing team. We've brought in a group that is really well-skilled in intellectual property licensing and sales to ASIC suppliers as well as design services companies.

Again, addressing a question that was asked about the size of the market: what we're really going after with Akida -- whether it's IP going into someone else's system on chip, whether it's a device, or whether it's a card-level product -- is the entire chip set market for AI. You can see that between now and 2025, that market's expected to grow from roughly 5 billion to a bit over 70 billion. What's more important is when you look at the edge, which is coloured orange at the bottom of these bars. That market's going to grow from something slightly less than 5 billion to something over 50 billion, close to 55 billion.

When we talk about the edge, this is where we are targeting Akida. It is an ultra-low-power, high-performance, complete neural network on a chip. When you think about putting things into a module for ADAS -- whether it's ultrasound, LIDAR or radar -- or when you think about smart cameras where you want to do analytics at the point of the camera rather than sending all of the data back across a bus or across a wireless network, we can do the analytics at the camera: determine if it's a person, if it's a face, if it's a known face or an unknown face. Similarly in ADAS, we can determine -- based on the LIDAR data, for example, which we're working hard on -- what's in front of the car, and not have to send all of the data back to a big GPU that's sucking up lots of power.

I think it's important to note this is a large market. It's not going to be a winner-take-all market; there'll be lots of different types of solutions. But we'll talk about Akida and its features and benefits, which have been exceptionally well received.

Lots of questions about the development and where we are. Putting it in its most simple terms, the logic and layout are essentially complete. We're tying down loose ends, getting things ready to hit the button and go to what we call tape out, when we generate the file which will then generate the wafer masks.

The Akida logic design has been well wrung out, both in simulation in the Akida development environment and internally on an FPGA that emulates the actual hardware design. That is an important part of the process: you do simulation, you do emulation. All has gone quite well.

The next big part of the process is what's called design for test. These are large digital logic chips, and you want to make sure that when they come out, they have been designed so that the test capability is inherent in the device itself; there are a couple of mechanisms by which that gets done. Once we complete design for test, we do the final design review, we push the button and generate the file -- it's called the GDSII -- and send it off through Socionext to Taiwan Semiconductor Manufacturing Company (TSMC). I think you're all aware they're going to be our foundry partner. This product will be built on a 28 nanometer flat logic process.

One of the big benefits of staying in flat logic, with no esoteric processing required, is that the device -- or the IP -- is completely scalable. We're at 28 nanometer. Some SoC companies may be at 14: just divide everything by two. Your power will go down, and all of your performance will simply scale with whatever node you're going into. That could be a smaller node or a larger one; many companies are still operating in 40 nanometer technology.
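
As a purely illustrative back-of-envelope sketch of that "divide everything by two" comment: a simple first-order model treats area and dynamic power as scaling roughly with the square of the feature-size ratio. Real node-to-node scaling depends on the foundry process, so the quadratic model and the numbers below are assumptions for illustration only.

```python
# First-order, back-of-envelope scaling model; purely illustrative numbers.
# Real process scaling is more nuanced than this quadratic approximation.
def scale_estimate(area_mm2, power_w, node_from_nm, node_to_nm):
    ratio = node_to_nm / node_from_nm
    return area_mm2 * ratio ** 2, power_w * ratio ** 2

# Hypothetical 25 mm^2, 1 W design moved from 28 nm to 14 nm:
area, power = scale_estimate(area_mm2=25.0, power_w=1.0,
                             node_from_nm=28, node_to_nm=14)
print(f"~{area:.1f} mm^2, ~{power:.2f} W at 14 nm")  # roughly one quarter
```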

While wafers are in the fab, we'll be working on test programs, test hardware and evaluation boards, and continuing software development and collateral materials. These are all critical parts of the launch process for the device, though not necessarily for IP -- there you don't need your own hardware, and the test programs will be done by the SoC developer. But with respect to the Akida device launch, having the software in place, having the collateral materials done, having test programs ready when the chips come out, and having the test hardware so that you can test them -- those are all critical parts of our process.

Just a little bit more about the chip. This is just a picture to give you some sense of where packaging has developed over the last 10 or 15 years. Again, this is a complete network on a chip, so there are really no external devices required. We'll look at the block diagram in a minute. That means it's got on-chip training and it does on-chip inference. And maybe most importantly, frankly, we've seen no other solution that can benefit from incremental learning. That is, once you've trained the network and you've got 100 classifiers or 1,000 classifiers -- these are the things you want to identify in a video stream, or in the point cloud from a LIDAR feed -- if you want to add face 101, or object 101 or 1,001, you don't have to go back and retrain the entire network. With incremental learning you can add new classifiers in the field, at the edge. That's a tremendous benefit in smart camera and smart home applications, and even in ADAS, when you've got things you want to add to that host of classifiers you're trying to identify.
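
To illustrate why adding a class needn't mean retraining, here is a generic nearest-prototype sketch: the feature extractor stays fixed, and a new class is just one more stored prototype learned from a few examples. This is a standard textbook scheme used for illustration only, not Akida's actual on-chip mechanism.

```python
# Generic nearest-prototype classifier: adding "face 101" appends one
# prototype; classes 1..100 are untouched. Illustrative, not Akida's method.
import numpy as np

class IncrementalClassifier:
    def __init__(self):
        self.prototypes = {}                      # class label -> feature vector

    def add_class(self, label, feature_vectors):
        # One new class = one averaged prototype; no retraining of old classes.
        self.prototypes[label] = np.mean(feature_vectors, axis=0)

    def predict(self, feature_vector):
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(self.prototypes[c] - feature_vector))

rng = np.random.default_rng(1)
clf = IncrementalClassifier()
clf.add_class("face_1", rng.normal(0, 1, (5, 64)))     # trained up front
clf.add_class("face_101", rng.normal(3, 1, (5, 64)))   # added in the field
print(clf.predict(rng.normal(3, 1, 64)))               # -> "face_101"
```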

The device will be available in a flip-chip ball grid array. This is a packaging technology that's been around for quite a while now: basically you put bumps on the die itself, you flip the die over, you put it on a substrate, and then you bring out what are called balls that get soldered to the PC board -- tried and true technology. The device will be 15mm by 15mm, so it's really quite compact given all of what's included here. Again, no external CPU required, no external memory required; everything is on chip.

This is the device itself. You've seen this picture before, but I'm going through it because this is all of the work that was done during this past year. The device, as I said, has an M-class CPU on it. That's not running the network; that is basically doing housekeeping. When you look at the big blue box at the bottom with the checkerboard pattern, that is the neural fabric. That's 80 cores organised as 20 nodes, all mesh networked.

If you look to the left, you see the green box. That is a great deal of the innovation, and the intellectual property which we need to protect. That's the ability to take regular data -- whether it's from a camera, internet traffic for cybersecurity, LIDAR, radar, ultrasound, or flat audio data for keyword spotting -- and turn that data into spikes, so that we, and the customer, can benefit from what an event-based or spiking neural network really does, which is play off of sparsity. Much of the data that comes through is not information -- why process it? That is one of the mechanisms by which Akida accomplishes extremely low power.
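
As an illustration of how turning data into spikes exposes sparsity, here is a minimal frame-differencing sketch: only pixels that change enough emit events, so static, uninformative regions generate no work at all. The thresholding scheme below is a generic assumption for illustration, not BrainChip's proprietary converter.

```python
# Illustrative data-to-spike conversion via frame differencing.
# Generic scheme; BrainChip's actual converters are proprietary.
import numpy as np

def frame_delta_to_spikes(prev_frame, frame, threshold=10):
    """Emit +1/-1 events only where the pixel change exceeds the threshold."""
    delta = frame.astype(np.int16) - prev_frame.astype(np.int16)
    spikes = np.zeros_like(delta, dtype=np.int8)
    spikes[delta >= threshold] = 1
    spikes[delta <= -threshold] = -1
    return spikes

rng = np.random.default_rng(2)
prev = rng.integers(0, 200, (64, 64), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] += 50                       # only a small patch changed
spikes = frame_delta_to_spikes(prev, curr)
print(f"{(spikes != 0).mean():.1%} of pixels produced events")  # ~2.4%
```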

On the right-hand side we do have an external interface for external memory, low-power DDR4. If you have an extremely large network that would not necessarily fit on 80 cores, you've got two routes: you can use external memory to augment what is on chip, or -- the lower blue box -- there's a high-speed chip-to-chip interface where you can use multiple Akida chips. We've currently tagged this at 64. There was a question about this: we had talked about 1,024 previously, but in practical terms 64 seems to be the right number for this architecture. You can gang these chips together with no additional overhead; they basically look like one large neural fabric. We'll touch in a moment on some of the benchmarks of performance that we've recorded, shared with customers, and customers have validated.

All the interfaces at the top are industry standard: USB 3.0, PCIe 2.1, I2S for audio, and I3C for sensor inputs. That could be pressure, temperature, flow, vibration -- any real-world phenomenon that you want to acquire and provide analytics on at the edge. That's what the device looks like.

This will give you some sense of what is really impressing customers -- potential customers, I should say. To the left, you've got your standard... I mean, these are very sophisticated, but I'll call them standard data-based convolutional neural networks, CNNs. They've been around a long time and they kind of dominate the landscape now. They tend to be big players in the hyperscale or data centre arena, but they are very, very difficult to do at the edge. If you look down the list, you'll see you need an external CPU, you need external memory, and you probably need a math accelerator to keep up with all the matrix multiplication that's necessary. These things can be 20 layers deep, 50 layers deep; some are very, very complex networks. I'll show you some benchmarks in just a moment. It's very math-intensive -- MACs, or multiplier-accumulators -- basically very, very high speed math, millions and millions of calculations. They tend to be relatively inefficient: if you're using a GPU, you could be in a category of 40 to 100 watts. That's far too much power to put in an edge device.

The middle section is really where we're seeing a lot of activity right now. This is event-based convolution. We can do convolution -- convolutional networks -- on Akida. We do turn the data into spikes and operate in the event-based domain, which gives us that benefit of playing off of sparsity and all the other things we can accomplish in, eventually, a native spiking neural network. But you can see it's fully integrated: no CPU, no external memory, nor does it require an external accelerator, because we're not doing all of that matrix multiplication. Again, we're implementing the same convolutional neural networks here, but in the event domain, so these could be 20 to 50 layers deep, yet we still get to play off the sparsity of data, which means fewer operations and therefore less power.
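
To make the "fewer operations" point concrete, here is a rough illustrative count, assuming a hypothetical 64x64 input, a 3x3 kernel, and 90% of inputs silent. The sparsity figure is an assumption for illustration, not a BrainChip specification.

```python
# Rough worked example: a dense 3x3 convolution touches every input
# position, while an event-driven version only does work for non-zero
# (spiking) inputs. All figures here are illustrative assumptions.
height, width, k = 64, 64, 3
dense_ops = height * width * k * k             # one MAC per kernel tap per pixel
sparsity = 0.9                                 # assume 90% of inputs are silent
event_ops = int(dense_ops * (1 - sparsity))    # work scales with active events
print(f"dense: {dense_ops} ops, event-driven: {event_ops} ops "
      f"(~{dense_ops / event_ops:.0f}x fewer)")
```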

You can see the last bullet is maybe the most impressive: the efficient power. That's 50 microwatts -- 50 millionths of a watt -- up to maybe 4 watts. Compared to what you see in the data-based convolutional neural network, this can go in an edge device. This can go in a battery-operated device.

Then the third category is when you are truly native spiking neural network oriented. Similar attributes: no CPU, no external memory or accelerator, but the networks are shallow -- two to five layers deep -- so you get better latency, because you don't have to go through as many layers to get your answer. We play off of sparsity here as well. And you can see 50 microwatts to 2 watts -- again, that's 50 millionths of a watt to 2 watts.

The diagrams below just show you what it would take to do a standard CNN: several devices that you need, which takes up space and sucks up a lot of power. Then you can see that with Akida, once we get the pre-processed data, the device stands alone and needs no external support.

These are some benchmarks that we share with potential customers, and potential customers running the ADE are doing their own validation of them. These are ranked from the lowest power application up to some of the larger networks. Look at the first network configuration: keyword spotting -- on, off, up, down -- the ability to provide personalised keyword spotting so that you can personalise the device. It's 38,000 parameters, on the Google dataset of commands. We call the throughput frames per second, because that's been the industry-standard term; a frame implies that you're looking at an image, which is actually how the keyword spotting here is done, but nonetheless that's also inferences per second -- frames per second or inferences per second. You can do seven inferences -- seven keyword classifications -- per second.

The centre block -- and this is for the more technical folks who also sent in questions -- is the input data size. You'll have 10 by 10, and the last dimension is really what would be colour; in this case, of course, it's one. Then when you move over, you can see that it's 150 microwatts to do keyword spotting. That's a very impressive number. Object detection -- this is not classification, just detecting that an object is there -- on a proprietary dataset, we're running five inferences per second. You can see what the input data size is and the number of classes that you're trying to identify, and you're running accuracy at 90% at 200 microwatts.
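
For anyone who wants to sanity-check those figures, a quick back-of-envelope calculation of energy per inference follows, using only the numbers quoted above (150 microwatts at seven inferences per second). This is a minimal sketch, not an official specification.

```python
# Back-of-envelope energy per inference: average power / inference rate,
# using the keyword-spotting figures quoted in the chart as stated.
power_w = 150e-6                # 150 microwatts
inferences_per_s = 7
energy_per_inference = power_w / inferences_per_s
print(f"~{energy_per_inference * 1e6:.0f} microjoules per inference")  # ~21 uJ
```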

I'm not going to go through the whole chart here, but if you look at the last row, this is a very, very large network. It's called Complex-YOLO -- "you only look once" is what YOLO stands for. It's 50 million parameters. Compared to the first line, which was 38,000 parameters, we can run 50 million parameters at 133 inferences or frames per second, with a relatively large input data size, and accuracy at 65%, which is about the best you're going to get with any convolutional neural network that's been implemented. And we can do that with 4 watts -- not 40 watts and not 100 watts.

These are the things that are exciting for us to be introducing to customers and editors. Customers -- potential customers, I should say -- are responding very well.

A little about licensing intellectual property. Again, this is the device on the left; the neural fabric and the data-to-spike converters are really what gets licensed. Builders of SoCs don't need our CPU complex -- they're going to handle all of their own housekeeping -- and they'll pick what interfaces their device has. Really, what they license are the cores and the data-to-spike converter or converters. They can take all 80 cores, which is 20 nodes, or, for keyword spotting or other similar applications with fewer parameters, they might only want 4 nodes, which is 16 cores. That is up to the customer, and we will work with them to determine the size of the neural fabric needed to complete whatever task they have.

Additionally, what customers -- potential customers -- see as valuable is that you can run multiple networks. If you took all 80 cores, or thinking of them as nodes, all 20 nodes, you could take 3 or 4 or 5 of them and run one network to do object detection, then take the other cores and have them do keyword spotting or some other network. Again, it's complete, it's on the chip -- you're not running the network on a host CPU -- so you can basically run multiple networks on a device simultaneously, as the sketch below illustrates.
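
Here is a minimal sketch of that partitioning idea, assuming a simple pool of 20 nodes (4 cores each) assigned in disjoint slices to independently running networks. The allocation function and network sizes are hypothetical illustrations, not BrainChip's actual tooling.

```python
# Illustrative fabric partitioning: disjoint node subsets per network.
# The API and allocation sizes are hypothetical, for illustration only.
TOTAL_NODES = 20                                 # 20 nodes x 4 cores = 80 cores

def partition_fabric(requests):
    """Assign contiguous node ranges to each named network, if they fit."""
    assert sum(requests.values()) <= TOTAL_NODES, "fabric oversubscribed"
    layout, next_node = {}, 0
    for name, n_nodes in requests.items():
        layout[name] = list(range(next_node, next_node + n_nodes))
        next_node += n_nodes
    return layout

layout = partition_fabric({"object_detection": 5, "keyword_spotting": 4})
for net, nodes in layout.items():
    print(f"{net}: nodes {nodes[0]}-{nodes[-1]} ({len(nodes) * 4} cores)")
```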

Let me talk about intellectual property licensing a bit more. There were a lot of questions about it, and I think in part that's because we've voiced a strong opinion that it can come in advance of actual device sales. There's no manufacturing process involved, there's no inventory, and there's no lengthy package qualification by the customer. We released the IP offering in 2019 and have received a strong response from prospective customers. The ADE, in the hands of one major South Korean company, is being exercised almost as much as we exercise it. They have really dug in, validated some of the benchmark results that we've provided, and are now moving on to some of their own proprietary networks to do validation.

We've targeted vision and acoustic systems specifically. Those are two places where the requirements at the edge are dominant. We also have cybersecurity working in the background; that is a native SNN, not an event-based CNN. In vision and acoustic we can do event-based CNN, and in collaboration with customers -- potential customers -- we can work toward moving them into a native SNN environment.

Certainly there's a lot of activity in the automotive industry. I got asked questions about what's going on with companies like Volkswagen and Bosch. Really, I think the shortest path to success is certainly working with the major automobile manufacturers so they put some top-down pressure on, but when you look at companies like Bosch out of Germany, Valeo out of France, Continental, Aptiv here in the US -- which was formerly Delphi -- ZF and several others, they build modules that they sell themselves to the automotive market.

Some of these are big companies. Valeo is a 20 billion euro a year revenue company, and Continental and Aptiv have to be in the same kind of category. The modules are primarily going to be radar, LIDAR and cameras -- maybe some ultrasound -- for ADAS. At this juncture that means levels one, two and three, which is ADAS, and certainly with the target being autonomous vehicles at levels four and five.

Then there are other vision and acoustic applications: smart cameras, smart home systems, a plethora of edge use cases. Here again you have the OEM manufacturers -- the guys that build the cameras themselves, and sometimes entire systems -- or you can work with tier one sensor manufacturers which want to incorporate incremental intellectual property into their device so they get more of the gross margin dollars. Here the leaders in the space are Sony, ON Semiconductor and OmniVision.

Amongst those you probably have all the cell phones in the world, or at least the vast majority of them, as well as smart cameras. In partnering with the image sensor guys as well as module manufacturers in the automotive industry, we've taken a very diligent course, I think, in identifying in each of these marketplaces who is most likely to be successful, who has market share, what their future roadmaps look like, and how our existing relationships with those customers stand. Just as important is intersecting their design cycle at the right time. For us to generate near-term revenue, it can't be a company that has just released its latest-generation module and is on a two- or three-year path to defining and building the next one. So, intersecting at the right time -- I think we've been fortunate in that regard as well.

Acoustic applications for smart homes include a lot of tier one suppliers in the US, in Europe and in China. I'll touch on China a bit more here; in China it's cameras, hubs of all kinds and peripherals. I just came back from Shanghai -- Roger and I were in Shanghai, I guess it was a week and a half ago. There's an incredible amount of energy and an incredible amount of financing going into AI generally, and more specifically AI at the edge. We'll touch on what our plans are in China in just a few moments.

In order to really attack the IP sale -- it's a different selling process than selling chips -- we have retained a group called SurroundHD, three guys we know very well: very, very seasoned executives in the IP sales process, with relationships with all of the tier one suppliers. They, with myself, Roger and Anil -- and, when we can have his attention, Peter -- are really the face to the customer for IP sales at this point. As we release the device itself, we'll have a more traditional semiconductor sales force, with manufacturers' reps in the US, distributors overseas, and in some cases global distributors as well.

In most of the markets in Europe, there are regional distributors that are very technically competent. Someone asked a question about our relationship with a company in Israel called Eastronics. Eastronics is a very, very technical outfit; they call themselves a distributor because they do inventory product, but they really act as your manufacturer's rep on the ground. We're just finalising a contract to get it done, but they've already started to work with us in the field and introduce customers. Somewhere along the line, one of you folks with all this great diligence stumbled upon the fact that they've already started to promote our product. That's moving well; it's in Roger's hands.

In Europe, with respect to IP, we've got an existing relationship with T2M, and they're bringing us great opportunities. They are a major independent supplier of IP. They don't build SoCs; they basically market blocks of IP to their account base, which tends to be people building ASICs or systems on chips.

We're also in discussions in Japan. There was a question about our relationship with Socionext, and we'll touch a little more on that. The manufacturing and development process has gone exceptionally well -- a very, very strong team, both in San Jose here in California and in Shin-Yokohama in Japan. We couldn't be happier, and our team works very, very well with them. But we have had significant discussions about how to broaden that relationship now that we're about to go to fab. They build ASICs -- they're the second largest ASIC supplier in the world, behind only Broadcom -- and they would be a phenomenal channel for us, having our IP block in the menu of alternatives they can present to their customers for neural networks embedded in SoCs or ASICs.

We're having similar discussions in China, more with design services houses than ASIC houses; design services in China is a very big and well-worn path. Again, they would license the IP and market it to their customers. Some of those customers would ask them to build the ASIC, and some would have their own in-house SoC capabilities; either way, they would market the IP into China for us. I already talked about Israel. China is going to be an interesting place. We've applied for an export licence -- we may or may not need one. That path, as with any government agency, means they control the ball. We're certainly on the field trying to get this done, and Roger's playing point person on it. We've had interrogatories go back and forth, but we will certainly need clearance that we don't need a licence or, if we do need an export licence, what type it is and how we deal with it.

But China does represent a very large component of AI edge applications. When you think about smart cameras, there are a couple of companies in Hangzhou that really dominate the space, with extremely large market share. China also has its own national ADAS and autonomous vehicle developments, as well as vision-guided robotics. It's a market that we certainly can't ignore, and we've taken the first major steps to figure out how to develop that presence in China.

Let's talk about a couple of the low lights, because I don't think we should ignore them. As you know, BrainChip Studio, our end-user effort, proved to be far too expensive and wasn't really scalable. We've pulled back to dealing only with OEM engagements, and we've got a few that are continuing to move along. Again, these will be cloud-based BrainChip Studio opportunities for facial recognition, object detection and object classification. We haven't given up on BrainChip Studio, but as you're probably aware, fundamentally most of our resources, if not all of them, are focused on the Akida development, introduction, sales and marketing. That is what this company from inception has had as its core mission.

BrainChip Gaming. I'm sure everybody is aware that GPI was acquired by Angel Playing Cards. We have signed a distribution agreement with them in case we ever choose to adapt what we have built to meet their system requirements, but that's not on the front burner right now. As I said, we're dedicating virtually all of our resources to the Akida development and introduction. They're going through their pains of integration: when you put two large companies together and you've got thousands of employees with overlapping duties, there's a lot of rationalisation to do on their part as to where they want to play in this kind of vision-system approach to the gaming industry, as well as what they do with their own teams and where their manufacturing is going to be. I just wanted to note that we have a relationship -- I was in Kyoto with their management about a month ago -- but again, all of our resources are really behind getting Akida into the marketplace.

A quick financial update. We haven't published December numbers yet; we're working that out now, so this is a dated number: we finished September with US$9.5 million in cash. We did provide a forecast as to what we thought our steady-state operating expenses would be for the quarter, and we have initiated significant reductions in planned expenses. What I mean by planned expenses: what we have in the way of headcount and expenses to support the development and introduction of Akida is all baked into our financial forecast, but we did have plans at the beginning of 2020, maybe January through June, for a rather large expansion of our sales presence -- feet on the street -- as well as some other areas.

We have now restricted all of our hiring to personnel essential to getting Akida into the marketplace and to promoting it more actively. I'll touch on this in a moment: we're far more active in promotion, with more standard press releases going out over wire services to reach customers and editors. It is becoming more and more difficult to have anything that looks like a market release get lodged with the ASX -- they've put some significant restrictions there -- so I have appended to this presentation the last six months of press releases, which really highlight the activity we've had over that period. I know it seems a little anaemic on the ASX, but there has been a great deal of activity in promoting Akida and the intellectual property, getting the message out, and attending shows. We did the workshop in Western Australia a few months back, which was very successful. Again, there are significant reductions in planned future expenses until such time as we get traction -- licensing deals that could help offset costs as we go forward -- and then we'll make prudent decisions about headcount expansion and other supporting functions.

The financial outlook. There's not much that we can say publicly, but certainly intellectual property licensing revenue is expected, based on current and future customer engagements. There's been a great deal of activity. As I've said, we've got our target list and we're in the field all the time -- I think I spent probably six of the last eight weeks out of the country or travelling across the country here. IP licensing can precede, and likely will precede, Akida device sales, because there is no manufacturing, and licences are prepaid. Particularly if you're going through an ASIC supplier, or a design services house that wants to go out and market the IP, they prepay. It's all well documented.

Maybe as important as anything, intellectual property represents significant gross margin, because there's fundamentally no cost of goods. Most of it falls straight through to operating margin; there aren't a lot of operating expenses -- maybe sales commission, and maybe some overhead attributed in arriving at gross margin. I think the IP business, the IP licensing, can certainly add support to the company's cash requirements in 2020.

Over the course of the next couple of months, we'll be able to size that far more effectively and we'll comment on it. I do intend to do these webinars on a quarterly basis, as we had done in 2018, just so we can keep everybody up to date, particularly with some of the restrictions that we have with the ASX. I want to make sure that anyone who wants access to the press release content knows you can register for our direct email service, so you won't have to hunt and peck to find things -- although I think this community keeps a pretty good eye on the ball. Register for the direct email and you'll be sent all of that information in real time.

There was a question about design wins, and whether Akida can be used and handled by customers based on engineering samples. There is a pretty arduous process on the back end -- doing split lots, temperature testing, all of those things -- but design wins certainly can be accomplished on engineering samples. As for those engineering samples: as I said, we go into fab some time here late in January, and we'll see what lead time TSMC comes back with. It's quite variable, but the mechanism we're using should put it on hot lot status and move it as quickly as possible through the fab.

For the OEM customers themselves, once we have the device in their hands, their design cycles will vary. If you work with Cisco and they're building a new router -- again, you have to intersect at the right time -- that can be quite a long process. If you intersect with a smart camera manufacturer in China, it could be a matter of months before they've got production units in the field. It does vary: somewhere from six to nine months on the short side, while automotive applications and large networking applications certainly can be far longer and can actually stretch into several years.

We're evaluating opening two innovation centres and a software design centre. These are expenses that we will be prudent about; some are baked into the current forecast, and some we'll absorb over time as we get these centres open. The software development centre will be in Hyderabad, India. We've already got five contractors working there, doing a great job. We control the infrastructure, so we control our intellectual property exposure. We're also looking at an innovation centre in Western Australia, where we've had a significant historical presence and there's a talented engineering community. That would really be focused on advanced research and development. We have research that we call applied research -- things that Peter and company have been working on, which are reasonably well proven and are now being applied to the Akida design. Then you have advanced research, where Peter's already got a stack of paperwork that he's generated, and ideas about what Akida 2.0 would and should look like.

Certainly, if we're going to have a presence in China, we will have to have some local engineering support, so a small innovation centre in China is something we're also evaluating. We need to have local language capability in Chinese, and we need to have collateral materials in Chinese, probably simplified Chinese. There is such a large and growing AI marketplace there that it's certainly one we can't ignore.

I'm not going to go through the company announcements; I've put them here for your convenience, and I think you can see there's been a great deal of activity. We've certainly taken a bigger presence in utilising LinkedIn, and a bigger presence on Twitter. You can follow us on LinkedIn, you can follow us on Twitter, and you can register for the direct email campaign.

I am going to touch on the first one, which just came out in December, because someone asked the question: Tata Consultancy Services. I'm sure most of you know that Tata is a very large Indian-based group -- I think it's a bit over $100 billion in annual sales. This was an interesting application because it was a native spiking neural network, for Tata's robotics group. There is -- and we've talked about this in the past -- a new technology for image sensing called DVS, the Dynamic Vision Sensor. It is a superb application to demonstrate how direct spikes into Akida can be processed very, very quickly. It could be hand gestures, it could be the motion in a robotic system. What the future holds for us and Tata, we really don't know, but we know it was a successful demonstration, and we'll continue to work that.

Then there's the October 31st release. If you haven't read it, I think it's a good report: the Linley Group Microprocessor Report is very, very well respected, and if you read this particular piece, I think it speaks very highly of Akida and its capabilities. That Linley report is read by a very, very large audience of engineers and editors. On the patent that was awarded: we talked about patents earlier -- again, the provisional patents will be reduced to practice as utility patents beginning this year, and those applications will be made.

I think I touched on the development workshop. We will do others. There were, I don't know, 25 people or so who attended, and it was really well done. I think we got high marks from the audience.

Frankly, I think that covers everything I wanted to. I believe I caught most of the questions; I tried to group them together because there was some overlap. But I guess that's it for now. We will speak again after the quarter -- we'll do a quarterly update and keep this process going. Thank you all for joining us, and I hope you have a nice day.


Ends
