BrainChip Holdings (ASX:BRN) September 2018 Quarter Update Presentation

Company Presentations

BrainChip Holdings Limited (ASX:BRN) CEO, Louis DiNardo, provides an update on the company.

Good afternoon everyone, this is Louis DiNardo, CEO of BrainChip Holdings (ASX:BRN). Thank you for joining the September 2018 quarter update; I hope everyone can hear me, I'm flying a little bit blind here. It's about 2 o'clock in the afternoon and I'm calling in today from beautiful downtown Sydney on a nice sunny day. I'm going to step through an update from the quarter, basically covering what was printed in the update that went out before market this morning, as well as a few slides that give a little bit of the technical background on where we are with Akida. Here in Sydney, Robert Beachler, our VP of sales and marketing, Peter van der Made and I attended the TechKnow conference yesterday. It went very well. Bob did the presentation and I hope all of you can see that presentation sometime shortly; we'll put it up on the website.

I spent most of the day with institutional investors, both prospective investors as well as existing investors. Then we move through the deck, hopefully I can get this thing to forward, there we go. The standard disclaimer, I'm sure you've all read it before, either with us or with other companies that you've invested in. You can read it at your leisure and at your convenience.

The September 2018 quarter update: a few of the highlights from the quarter, which we covered in the update that went out this morning following the 4C announcement. We did announce the Akida neuromorphic system-on-chip architecture. I'll cover that in quite a bit of detail in just a moment, but it was exceptionally well received. We had a lot of inbound traffic from potential customers and a lot of press coverage; I put a final slide in this deck with a bunch of links to articles that were published in some of the top technical magazines throughout the world.
We did establish a partnership in Scandinavia with Telesikring, if I said that properly. They're a video analytics supplier to police and security customers across Scandinavia. It's a really good market for us to participate in and very sensitive to security concerns. We will take that same approach in many other markets, throughout Europe as well as in the United States and Australia.

We did license the cybersecurity technology from the University of Thrace in Greece. I'll talk about that when we talk about Akida a bit more. With Akida and its neuron fabric, you want to build what's called a model zoo. It really is a repository of end applications to exercise the learning rules and the neuron fabric that we've built into Akida. We are very, very well schooled in vision applications; this licence brings us the cybersecurity technology, and we're working with other customers in other markets, which we'll talk about in a bit as well.

Gaming Partners International: we did a demonstration of the fully integrated vision system at the G2E gaming conference in Las Vegas, Nevada. It was very, very well received and the system is really working quite well. I'll touch on that a little bit more as we move towards the final development stage and commercialisation. Las Vegas followed a May showing of the prototype in Macau, which was also well received. Many of the clients that GPI has, major powerhouses in the gaming industry, saw it in Macau and then came to the booth in Las Vegas where they got private demonstrations.

During the quarter we made a lot of strategic advancement. I'm going to move forward and talk about that; this is really just hitting the highlights and then I'll get into some of the details. We finished the quarter with basically US$10 million in the bank, $9.95 million, excuse me, $9.995 million. That gives us plenty of runway. We're still burning about $700,000 a month at a gross level, excluding cash receipts, which gives us a little upside on that basically $10 million.

Now, cash receipts for the quarter were US$246,000, which is a bit disappointing. The summer is typically a seasonally slow time in Europe, and I think this particular summer that was exacerbated by a tremendous heatwave. I was in Paris with the guys from GPI in early September; we were sitting in 105 degrees in Paris, and I think Barcelona hit 115 degrees. It's typically the slow season, the summer season, in Europe, where the bulk of our existing customers are, and I think it was exacerbated by extra time off. It's tough enough to get paid on time out of Europe, but when people disappear for four, five, six weeks, in some cases even longer, I think the holiday season this year stretched into the September timeframe.

Our headcount at the end of the quarter was 35 full-time employees and two full-time contractors in our sales and marketing group, one in the US and one in Europe. I know there have been a lot of questions, and I'm going to try and touch on all of them; I have a raft of very good questions about the Akida development as well as where we are with BrainChip Studio. There are certainly still open questions about what's going on with SNTech. When we did this call last quarter, we indicated that we had invoiced SNTech for roughly $609,000 based on our existing licensing and development agreement. We still believe strongly that they owe us that money. As a step to confirm their receipt of cash, under the agreement we have the right to do a full audit of SNTech's books, records and source code, and we've started that process. We've got dates on the calendar and we've received some information. The results of that audit are expected in November of this year, so we're moving quickly to collect on our $609,000. It's a significant amount of money and we've all talked about it a lot over the last year, year and a half. There's a whole lot going on there, a whole lot of dialogue going back and forth, but the audit will reveal quite a bit and, as I said, I believe it will be concluded in the November timeframe.

As we look at the coming quarter, we do expect cash receipts to increase from OEM system integrators and other partners; the December quarter should be up from the September quarter. The sales pipeline continues to grow. It now includes in excess of 5,000 contacts and well over 500 leads. We have instituted a CRM, a customer relationship management system, so there are certain qualifying events, one of which is called an active account. Out of the 500 leads, there are 105 active accounts where we've engaged and had dialogue on qualifying opportunities. You can see 55 qualified opportunities where we've met the customer at their location and qualified the opportunity, knowing that our product is a good fit, both with respect to volume and pricing.

We've got about 17 design wins. There have been a number of questions about design wins and what the conversion rate on design wins would be. That is something we'll learn over time, but we expect the conversion rate will be very high. Once a customer goes through the qualification process and a trial, by the time we call it a design win we've got pretty high confidence that that design win will go into production. We don't control whether our customers will be successful or not. They give us volumes, they tell us what their timelines are, but again, they have to sell their products, not us. 17 design wins is quite healthy for BrainChip Studio; it really is the flagship product that we're selling right now, as many have commented in their questions. That revenue will carry us until we get Akida out in the marketplace and start to generate revenue off our own silicon.

Currently we're supporting about 21 committed or active trials. Many people are going into trial or are in the midst of a trial. A trial period can be anywhere from a couple of weeks to a couple of months. Once the trial is complete, people come back with positive results, which is what we've seen time and time again, and then you go through the process of winning the design. Once you have the design win, as I said, conversion can take some time. It is awkward that most of what we do is under non-disclosure. The first line in most non-disclosure agreements is that the very existence of the agreement should not be disclosed, nor should the customer's name, so it's a little bit difficult.
Law enforcement agencies don't want the general public knowing that they're putting systems such as this in place. When you talk to automobile manufacturers, they don't want their competitors to know what kinds of technologies they're evaluating. When we talk to cellphone manufacturers, the non-disclosure gets even more strict. If you think about the top three cellphone manufacturers in the world, Apple, Huawei and Samsung, none of those companies would like the other two to know what technologies they're evaluating.
It's a little bit awkward, but when we have a relationship that has blossomed as well as the one with GPI has, they're quite in favor of us continuing to publicly disclose what we're up to. They are a public company, so we can't front-run them. They have investors, customers and competitors. They'll announce their product and go into deployment, and as those things become prominent in the public domain we can comment on them, but we can't front-run them with news or information about who their customers are, what their pricing strategies are, or what their technology benefits are, until they make it a public event.
The outlook for the December quarter is good. Cash receipts should be up and, maybe more importantly, a lot of strategic progress will be made; I'll talk about that as we go through the rest of the presentation. That includes intellectual property protection and the advancement of an FPGA which emulates what we're doing in the Akida development environment, primarily for internal use at first. I'll talk about that again in a few slides.

I want to separate applications from marketplaces a bit. Applications in the automotive industry will go to major automobile manufacturers around the world. I can't discuss who it is we're engaged with, but advanced driver assistance systems as well as autonomous vehicle developments are going on everywhere, from GM and Ford to Renault and Volkswagen. I could give you a list of 10 or 15 automobile manufacturers that are focused on this area. For us, at this juncture, it's primarily ADAS.

That's Level 2, 3 and 4, where you're building in lidar, radar and ultrasound, putting transducers in the bumpers, and putting in cameras looking at the driver to make sure people aren't falling asleep and that you don't have drunk drivers on the road. Advanced systems where hand gestures and voice recognition interaction will be features that come out in the near future from automobile manufacturers. That's not to discount at all that the automobile manufacturers look to third parties for specific modules. When you look at a transducer in a bumper, and again it could be lidar, it could be ultrasound, it could be radar, in many cases they go to third-party vendors. These are not small vendors; these are major league companies.

In France you have a company called Valeo; in the US you have Continental. It's not Continental the car, it's Continental the vendor. There was a company called Delphi which has, I believe, recently renamed itself Aptiv. These are multi-billion dollar corporations that supply transducer modules and other modules under the hood for engine control systems and telematics; in our case it would be sensors for autonomous vehicles or ADAS Levels 2, 3 and 4.

We have engagements in both categories, engagements meaning we've got open dialogue in the automobile manufacturing arena. Our first goal is to get out of the research lab with the automobile manufacturers and get a project. Frankly, that can be a long course; the research guys like to do research. We find it to be a faster path with the third-party vendors, who move much more quickly. They're willing to define specific use cases and a scope of work, and to look very closely at moving a project along quickly in order to support their automobile manufacturers. In both cases we've got active engagements.

Again, I need to hearken back to the announcement we put out that we had shipped an accelerator card to a major European automobile manufacturer. Let's recognise that BrainChip Studio and the accelerator for BrainChip Studio are not the end game in the automotive industry. The end game in the automotive industry is Akida. Akida is a low-powered device that can be embedded with a transducer. I'll go through a little bit more of why that's important in a moment, but the effort to understand our technology, play with BrainChip Studio and watch it work on an accelerator card was really a preamble to us now showing the automobile manufacturers, as well as the third-party vendors, the real power of Akida as an edge device. In a car you've got things at the edge, where the transducers are, and then you've got things that are still going to be run by GPUs and CPUs as the central processor, the data centre within the car.

Civil surveillance: again, we're focused on original equipment manufacturers. Those are the people that build surveillance cameras, the people that build VMS systems, the storage system providers. I think I mentioned on the last call, once we put the Quantum Corporation release out, that there are really five top vendors in the storage market. It includes IBM, Hitachi, Dell EMC and NetApp; I forget who the fifth largest is. We had inbound traffic from one of those majors, and that has moved along very, very nicely, in that it's gone from potentially one project to two or three projects across a wide spectrum of end use cases. That original equipment manufacturer strategy is very similar to what we do with GPI.
Once you get embedded in a storage unit, you're leveraging the sales force of IBM, Hitachi, NetApp, Dell or whomever, and their installed customer base. They have a large sales team and a large service organisation. With one relationship you can reach thousands of customers, rather than playing only the end-user game, which is the second category of our sales effort, where we're calling directly on law enforcement agencies, police forces around the world, homeland security departments, schools, school districts and hospitals. We do have a robust end-user base which we're engaged with, and it's very active, but we really are playing both ends of this.

Tom Stengel is focused on original equipment manufacturers; that's really been his life's work and his career. Early in the year we added Luis Coello in Barcelona. Luis came out of Teleste, which is the largest pan-European surveillance integrator, and he's opened up the entire pan-European market for us. Think back a year or so ago, when our sales executive in Europe was strictly focused on France. Luis has now got us into the UK, with several departments of the London police force where we've got dialogue, engagements or trials. We've got Scandinavia, we've got Denmark, we've got Spain, we've got Italy, and of course we're still focused on France. The aperture of our opportunities in Europe has just gone very, very wide, outside of Paris and outside of France into the pan-European space.
Similarly, in the US we've got Tom again working with original equipment manufacturers, but also calling on local police forces, homeland security and FBI field offices. We've added James Roe, who's in Southern California; James is the end-user sales executive for the Americas, primarily North America. James comes out of law enforcement. He still carries a badge which says POST on it, which is police officer surveillance training; he's a certified trainer for surveillance. Police departments don't like sales guys knocking on their door, but James has a calling card, and he has generated more activity in his first few months on the job than we could have in a year without someone who came out of law enforcement and had those relationships.

Again, in the application space we have gaming, which you all know about. That's Gaming Partners International; I'll touch on where we are on that development, deployment and commercialisation in a few moments. Most importantly, I think, is to focus a bit on the neuromorphic computing market, the multiple markets that we will serve with Akida. Vision systems, clearly; it's an expertise that we have. The folks in Toulouse, the SpikeNet team, have been working in the video analytics and vision systems environment with a spiking neural network for a couple of decades now.

Cybersecurity: recognising repeating patterns, being able to do deep packet inspection, looking for malicious code, looking for malware. That is a very vibrant and very critical industry development, and all companies, I should say, are looking internally at their own cybersecurity protections. We have a great deal of intellectual property, and while we're focused on entering the cybersecurity market with the Akida launch, we also need to look internally and make sure that our own cybersecurity systems are in place and robust enough to protect our intellectual property.
Financial technology is similar in that we look at repeating patterns, so we can look at options and commodities trading, high-frequency trading, and a variety of attributes. There are algorithms, there are specialties within the hedge-fund markets and within the trading market generally, where we can look and see if we can lower power. We believe we can do a far more efficient and effective job of bringing this technology to the financial industry.
Agricultural technology as well; it's very much a burgeoning industry. Agricultural technology takes the attributes of what it takes to grow a plant productively and optimise yield: what kind of fertiliser, what kind of water. I've been surprised to learn that the quality of the water has a great impact on the uptake of nutrients, both from the fertiliser and from the soil itself. If you've got hard water from a well with a high iron content, you need a lot more water to get the nutrients out of the soil into the plant, optimise growth and improve yield.

In one of the early dialogues we've had in the last few months, there were 312 attributes, I think, something over 300, which really make up the profile of what makes a plant grow efficiently, lowers the requirement for water, and improves the uptake of the fertiliser and the nutrients in the soil. Correlating all of those attributes and providing actionable data for the grower is really becoming a very large industry. We're reaching out both to small startups that are very innovative and to large, multi-billion dollar companies which play in this space and have their own research and development teams.

That's kind of the application space; now let me separate that a little bit from markets. Take it up a notch: as Akida starts to generate more and more activity, and this will be far in advance of us actually having silicon, we've already got a lot of inbound traffic, and we're reaching out to the peripheral or tangential customers that we think really reflect that inbound traffic. We will probably use these categories to report to you regularly on what goes on in the industrial space. When you're selling integrated circuits, the industrial space is a great market in that you get the design win and it lasts for five, 10 or 15 years. Once you get into a system, let's say a transducer application in a tractor out of Caterpillar as a good example.

I heard recently that Caterpillar tractors have something over 700 transducers which send data up to the cloud, basically an IoT-like device. If you get designed into a Caterpillar tractor, you're going to have a very nice average selling price and a very nice long life cycle. Similarly in automotive, as I've just mentioned, with ADAS, advanced driver assistance systems, and autonomous vehicles: you get designed in and then you've got years of stickiness to that design, very much like the industrial space, but it has its nuances. If you're under the hood, again for engine control systems or telematics, it's a very long and arduous process to get qualified, but if you're in a module that comes from a third party, or in a module that's developed internally for ADAS, and again transducers such as lidar, radar and ultrasound, those kinds of applications, it's a much faster path to revenue. We'll talk about industrial in our penetration, we'll talk about automotive in our penetration. Then there's the consumer market.

The consumer market is nice. Again, there's an internet of things in the consumer market: a lot of people are putting IoT-connected thermostats in their homes, vision systems in their homes. The number I heard today was 200 billion connected IoT devices over the next 15 years. It's a very, very large market, a very price-sensitive market, but a very nice market for a device like Akida, which is targeted at less than a watt. It can bring artificial intelligence to the device, rather than the device having to speak to the cloud and decisions being made at a remote cloud location or a data centre enterprise location.

Consumer is heavily driven by cellphones. I don't know what it is now; it's got to be a billion and a half or more cellphones every year, probably significantly more than that. Cellphones is an interesting business. Frankly, it can be a very quick time to large revenue, but it does take a lot of research and development, a lot of engineering I should say, to stay in. Once you win in a cellphone, the cellphone goes to market very quickly; all of the majors are introducing four or five models each year, but you've got to re-win it the next year.

Fortunately for us, the contact that we've had in the cellphone space is not about buying an Akida chip. If you tear open your cellphone, you're only going to find three or four ICs, or integrated circuits, in there. Each of the majors, and again you can think about everyone from Apple to Huawei, Samsung and ZTE, has its own internal IC development capability, and in many cases they build their own applications processors. What they would look to us for is the Akida IP block.

You basically sell your design as an IP block, only those pieces that they need. When we look at the architectural design of Akida, they wouldn't need a lot of the things that are on there. They'd need the neuron fabric and the data-to-spike converter for whatever the specific use case is, but they would then take that design and incorporate it in their own silicon, in the processor that they're building.

The very nice part about this is it could lead to revenue for Akida long before we even have our own IC. If you intersect with one of these companies and you have a compelling case: a smaller footprint, meaning less die area consumed with a spiking neural network than with whatever they may be doing with a convolutional neural network now, and much lower power, which again means increased battery life. When you think about power, it's basically all about battery life.

If you've got a compelling set of features and benefits, you can end up inside their silicon, and if you intersect at the right time, that could very well happen well in advance of us having our own Akida chip for edge devices, or ganged on a PCI card for enterprise applications. We'll see the same thing that's going on in cellphones happening in personal computing and in home security: facial recognition, object recognition, a whole bunch of features and things that we haven't even thought about as consumers. These are really the three target markets.

The fourth market that the semiconductor industry typically has on a slide like this, and this is how you'd see breakouts from pretty much any of the major semiconductor companies, is the communications space. We haven't delved into the communications space; there you're talking about switches and routers and all that goes into the back-end infrastructure, so we're not putting it on the slide yet. That doesn't mean there are no opportunities there; it just doesn't rise to the top of the list. We'll continue, or start, to give you insight on our design wins. How many are going into the industrial space? How many into automotive? How many into consumer? And as revenue starts to accelerate, what does the pie chart look like? What's the content in industrial, in automotive, in consumer? It's very typical reporting for a semiconductor company.

I'm going to move forward; some of these slides I'm going to move through quickly. The intent here is that you can take the deck and read it at your leisure. This is something that Bob presented at the TechKnow conference yesterday. We'll be on our way to Melbourne tonight and presenting at TechKnow in Melbourne tomorrow. Again, I'll be visiting investors at the same time.

You can look back to 1957 when you think about artificial intelligence; the perceptron development in 1957 really started the wave of moving away from the Von Neumann architecture, which is the serial, programmed architecture that all of our computers use. Whether it's your cellphone, your laptop or your desktop, the Von Neumann architecture has been around since before the 1957 perceptron.

There were a couple of what have been called AI winters between 1957 and 2012, where people at large thought we were going to have a boom, a tidal wave of activity in artificial intelligence, only to see it roll over. The technology wasn't there, and semiconductor technology was probably not advanced enough at that point either. But when you get towards 2008, you can see that is when Peter van der Made's first foundational and very fundamental patent for a spiking neural network was awarded. Others were working on convolutional neural networks, which are very math intensive. We've talked about GPUs and CPUs and how much power they consume, because they are doing matrix multiplication or floating-point math, and they jack up the clock speeds and suck a lot of power.
In 2008 that patent was awarded to Peter. In 2013 he founded BrainChip (ASX:BRN) and also published his book, Higher Intelligence, which I think some of you have read. As you move through 2014 and '15, that was when Peter was taking what was in his head and putting it on paper, really defining his vision for how you could, as we say, be biologically inspired and design digitally. How you could take the function of a biological neuron and synapse and mirror that in silicon. 2015 was the IPO, which was really an RTO, a reverse takeover, on the ASX. We raised a bit of money, and that allowed Peter to put a team together. He was back in the US with that team; they started to work on the Snap 64 architecture, and we raised a little bit more money in a second offering.

In 2016, Snap 64 was a test chip. It was live, and it demonstrated that the synaptic structure and the neural architecture worked. It was not a production or commercial product; it was a test chip that demonstrated that what Peter believed could actually be implemented in silicon. The company then reached out and acquired SpikeNet in September of 2016, which brought in a great deal of vision and video analytics capability. We had to reconfigure that in an FPGA rather than the Snap 64 test chip, and that FPGA is the chip you all now know on the accelerator card. That allowed us to refactor the software, so you have an off-the-shelf BrainChip Studio product and an accelerator which has the learnings out of Snap 64 and the learnings out of the SpikeNet team.

As we look to 2018, this is really the mission of the company, and has always been the mission of the company: to get to a piece of silicon, a chip, an integrated circuit, a semiconductor, whatever you want to call it. To get to Akida, which is a fully integrated system on a chip as a spiking neural network. That's the sense of the timeline of the company. Some of the investors on the call have been with the company for several years; sometimes we have new investors on the call. It's a slide that we'll keep in the deck and cover every time, but I thought it was worthwhile; it was a good slide that Bob put together for the presentation that he did yesterday.

Why do neuromorphic silicon? The real reason is that there are things that cannot be done with standard convolutional neural networks doing calculations, mathematics, matrix multiplication. A spiking neural network can be far lower power, far lower latency, and far smaller die size. This is the overall neuromorphic semiconductor forecast. You can see that between 2018 and 2025 we're going to go over $60 billion of content which is neuromorphic accelerators and semiconductors. Some of that will be FPGAs, some will be running on GPUs, some will be running on application-specific processors. Some of it will take advantage of the features and benefits of a device like Akida: very small size, very little latency, very little power, particularly for edge devices. Some piece of that 60-plus billion dollars will be the target market for Akida. Even if you just back all the way up to 2021, which is right around the corner, you're looking at something on the order of $27 billion. It's a very large market, it's a very high-growth market, and we've cut out a niche with this spiking neural network, the biologically inspired neuron, which other technologies really can't address.

Put them side by side, convolutional neural networks versus spiking neural networks. Computational functions are done in math in CNNs, and that takes a lot of silicon real estate and a lot of power; again, it's doing matrix multiplication and floating-point math. When you look at a spiking neural network, we're using standard digital threshold logic with connection reinforcement, which I won't go into; it's math light. It's very low power and, again, it's standard logic. It's not an esoteric process; we buy off-the-shelf silicon, and in our case Akida is likely to be in a 28-nanometer node, that's a geometry of 28 nanometers. It's not bleeding edge; node geometries have now got down to seven nanometers, but that's not necessary for what we do. We can get a small die size and low power out of the mainstream 28-nanometer process.

Training is important in convolutional neural networks; it requires large pre-labeled datasets, and it's very expensive, with long training periods. It uses back-propagation, which is the training mechanism, and that's what takes so much time in addition to all the mathematics that has to be calculated. The spiking neural network is a feed-forward architecture; training is done on chip, with very short training cycles, and it continuously learns. That continuous learning, the ability to do autonomous learning as well as an inference mode, as we'll talk about with Akida, is really what makes Akida very, very special.
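To make the contrast concrete, here is a purely illustrative toy sketch in Python. This is emphatically not BrainChip's actual Akida design; the function names (`lif_step`, `reinforce`) and all parameter values are invented for illustration. It shows the general idea behind threshold logic with connection reinforcement: a neuron accumulates weighted input spikes with simple additions (no matrix multiplication), fires when it crosses a threshold, and strengthens the synapses whose inputs coincided with an output spike, so learning happens in the forward pass with no back-propagation.

```python
# Illustrative toy only: a leaky integrate-and-fire neuron with a simple
# Hebbian-style reinforcement rule. Invented for this sketch; it is NOT
# BrainChip's implementation, just a contrast with CNN-style matrix math.

def lif_step(potential, weights, spikes_in, threshold=1.0, leak=0.9):
    """One time step of a leaky integrate-and-fire neuron.
    Returns (new_potential, output_spike)."""
    # Accumulate weighted binary input spikes -- additions only.
    potential = leak * potential + sum(w * s for w, s in zip(weights, spikes_in))
    if potential >= threshold:  # threshold logic: fire and reset
        return 0.0, 1
    return potential, 0

def reinforce(weights, spikes_in, spike_out, rate=0.1):
    """Toy on-chip learning rule: strengthen synapses whose input spike
    coincided with an output spike (connection reinforcement)."""
    if spike_out:
        return [w + rate * s for w, s in zip(weights, spikes_in)]
    return weights

# Feed a repeating spike pattern; the neuron adapts without back-propagation.
weights, potential, spikes = [0.3, 0.3, 0.3], 0.0, []
for _ in range(5):
    potential, out = lif_step(potential, weights, [1, 0, 1])
    spikes.append(out)
    weights = reinforce(weights, [1, 0, 1], out)
# The synapses on the active inputs grow, so the neuron fires more readily
# on its learned pattern, while the silent synapse is left unchanged.
```

The point of the sketch is the shape of the computation: a handful of additions and one comparison per neuron per time step, versus the dense multiply-accumulate arrays a convolutional layer needs, which is the "math light" argument made above.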

Now I'm going to do a quick update on Studio; there are a lot of questions in here about what's going on with Studio since it was released in July of last year. What's the design cycle like? What's the sales cycle like? What progress is being made? When can we expect meaningful revenue, or cash receipts if you're looking at a 4C, revenue if you're looking at a half year or a full year? We took the targets of gaming, civil surveillance and machine vision. You've heard us talk about companies like Safran, which continues to use the product through incremental licenses. We've talked about French homeland security and the French police department. These were all existing SpikeNet customers with what were really custom pieces of software. You couldn't mix and match; they were done in different languages.

We took the first half of 2017, refactored that software, did the shrink wrap and called it BrainChip Studio. Pretty much all of the design wins that we talk about, all of the trials that we talk about, are BrainChip Studio. There's really no activity in evaluating Akida yet at this point, although with the development environment that will start very soon. All of the design wins we talk about, all of the leads, the qualified leads and the engagements, are really based on BrainChip Studio.

When you think about the level of activity that we have, it really comes down, in my mind, not to if but to when. How long does it take? It looks like the sales cycle is about a year; it takes two or three months to go through the initial trial. When it comes out of that initial trial, people get together at whoever the customer is, the LA Police Department, the Boston Police Department, some place in London, any of the locations that I talked about when we were talking about end users.

Then it goes through a decision-making process. We think we're going to have a very high conversion rate from trial to design win. Then you're dealing with budgeting cycles, and we found it is extending a little bit longer than I would certainly like. There's nothing disheartening about it; it's really a matter of time and persistence. Again, if you go through the data about who we're engaged with, where the trials are going on, and what the end applications and use cases are, BrainChip Studio is doing well with respect to customer engagements. I think the language in one of the questions was about uptake. Uptake is a result of getting the design win, moving that design win through to an order and getting paid. I think we're seeing some incremental traction here in what is the fourth quarter, but I think in the first half of 2019, with all the smoke that's been created by this level of design activity, there are going to be fires, and purchase orders that come out of what I hope is a large majority of these opportunities.

Again, from the top level: why is it that something like BrainChip Studio is so important? When you think about 500 or 600 million surveillance cameras around the world recording video 24 hours a day, seven days a week, 365 days a year, that's an enormous amount of storage that's required. We call it exabytes of previously recorded video, which many times will need to be searched for specific suspects of interest or specific objects, whether it's a backpack that was left behind after a bomb goes off some place. Our ability to quickly move through vast amounts of video is really what makes this attractive to law enforcement and homeland security.

We can do that, as we talked about previously, in low resolution, noisy environments where it is prohibitively expensive to try to do this in a deep learning environment, with long training cycles and great expense either in the cloud or even at the enterprise. That's just another example of how things work: one-shot learning. You highlight a face, you highlight a shirt, you highlight a particular object. With one-shot learning we can recognise that object and then follow it across multiple cameras, multiple areas of interest. This is really for forensic search; this is really what law enforcement and homeland security love about the product. Then face detection and classification: once you've detected a face, you can classify it and compare it to other faces, bad guys, good guys, whatever the use case is.
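The one-shot idea can be sketched as: store a single spike signature from the highlighted object, then flag any camera whose frame signature is similar enough. The binary-signature format, overlap metric and threshold below are illustrative assumptions for the sketch, not BrainChip Studio's actual matching scheme.

```python
# Illustrative one-shot matching: learn a spike signature from one example,
# then search frames from multiple cameras for a close-enough match.
def similarity(target, candidate):
    """Fraction of the target's spike positions that the candidate shares."""
    shared = sum(1 for a, b in zip(target, candidate) if a == b == 1)
    return shared / max(1, sum(target))

target = [1, 0, 1, 1, 0, 1]          # learned from a single highlighted object
frames = {"cam1": [1, 0, 1, 1, 0, 0],
          "cam2": [0, 1, 0, 0, 1, 0]}
hits = [cam for cam, sig in frames.items() if similarity(target, sig) >= 0.7]
```

No retraining is needed to add a new suspect or object: a single example produces the signature, which is what makes forensic search across hours of footage practical.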

This is really basic, and I'm sure many of you have seen this before, but we don't look at an image. We look at a recognition model, a spike model of whatever that image is, so you can see the Mona Lisa turns into a spike model. If I showed you that particular spike model, you would not say that's the Mona Lisa. If you look to the right, whether you've got high contrast or low contrast, in a variety of noisy, bright or low resolution images, we're able to extract the same spike map out of those images and compare it to the recognition model. You can see down at the bottom an airplane out on an airfield. The spike model, again, I wouldn't look at that and call it an airplane, but at this point it's been classified as an airplane. Then you can see off to your right that with very low resolution, very grainy images you can spot airplanes. Of course the military would love this, drones being a clear example of an application. This is the BrainChip Studio learning mode, where you're learning from an image. You're extracting the recognition model and then you can compare that to patterns with similar features.

The accelerator, again, accelerates what BrainChip Studio does. BrainChip Studio, as we've talked about before, can run five simultaneous video channels. That can be cost prohibitive if you've got a large deployment. Even at 50, 60 or 80 channels it adds up, and if you go into thousands of channels you would need tens or hundreds of servers. You plug in the card and the card can run 16 channels, maybe more if you take the frames per second down. We call it 16 channels, so you can see the increase in speed. We've talked about this before, but I thought it would be a benefit to have a full deck for you.
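As a back-of-envelope check on the server counts: Studio alone runs about five channels per server, while one accelerator card handles about 16 (figures from the talk). The cards-per-server count below is an illustrative assumption.

```python
# Rough sizing sketch: how many servers does a large camera deployment need
# with Studio alone versus with accelerator cards installed?
import math

def servers_needed(channels, channels_per_server):
    return math.ceil(channels / channels_per_server)

software_only = servers_needed(1000, 5)       # Studio alone: 5 channels/server
with_cards = servers_needed(1000, 16 * 4)     # assume 4 cards x 16 channels
```

Going from 200 servers to 16 for the same thousand channels is the economics behind the accelerator at scale.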

Now we talk about Akida. This is the world's first neuromorphic system on a chip. It's important to call it a system on a chip and not simply a processor; I don't mean simple in that regard, but it does more than simply the processing. It's got far more content on it. Peter and the research team have taken what was learned in SNAP64, combined it with what we acquired, a learning rule called STDP, spike timing dependent plasticity, and added a great deal of new invention over the course of the past year. I'll touch on intellectual property here, in addition to the 2008 patent, which is a fundamental patent around spiking technology.

Many things have gone into making Akida what it is, and we'll touch on the architecture in a few minutes: the ability to put on as many neurons as we have, as many synapses as we have, the data-to-spike converters to get into the spiking domain, and a whole bunch of things for which patents are being written and provisionals will be filed. We'll put fences around this intellectual property that take the fundamental 2008 patent to a much higher level.

It is a very efficient neuron model, and that is why we can get so many neurons on a small piece of silicon. We've got innovative training methodologies. We will be able to train on chip. We'll be able to do inference, which looks like deep learning. We'll be able to do autonomous learning, which we've all talked about with respect to how BrainChip Studio functions. It will have an on-chip processor, it will have data-to-spike converters, and it will be scalable for server and cloud applications. That's to say you can take a small chip which runs at one watt or less, and we can put many of them on a PCIe card, a card that looks like the BrainChip Accelerator, that might have 10 or 20 chips on it. You can gang them together and you just increase your number of neurons, you increase your whole neuron fabric. You increase neurons, you increase the number of synapses, and you can put that in a real enterprise application and compete with a GPU or CPU in an enterprise or cloud environment.

There was one question about when you do that, and we've talked about the way the Akida device is architected: you could gang a thousand of these things together and you'd have an incredibly powerful spiking neural network. Whether that's practical or not, it really does work. You'd gang them together and you just pick up all the new neurons and all the new synapses, and the architecture would work as one chip. Let's move forward.
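The scaling claim is simply that ganged devices behave as one larger fabric, so neurons and synapses add up linearly. The per-chip figures are the targets quoted in this talk; the helper itself is an illustrative sketch.

```python
# Sketch of linear fabric scaling: ganging N chips multiplies the neuron
# and synapse counts, and the architecture presents it as one fabric.
NEURONS_PER_CHIP = 1_200_000          # Akida target quoted in the talk
SYNAPSES_PER_CHIP = 10_000_000_000    # 10 billion synapses per chip

def ganged_fabric(chips):
    return {"neurons": chips * NEURONS_PER_CHIP,
            "synapses": chips * SYNAPSES_PER_CHIP}

card = ganged_fabric(20)              # e.g. a PCIe card carrying 20 chips
```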

This is the architecture. Many of you saw this, I'm sure, in the architectural announcement, but it's worth putting in here. If you look at that centre box, the blue box, you've got your sensor interfaces. Data is coming in; it can be pixel data, it could be audio data, it could be a dynamic vision sensor. It could be analog data from things like pressure, temperature, flow and vibration, all those analog transducers. Many of the analog transducers have matured and actually have digital outputs; they have analog-to-digital converters on them. When it comes in, whether it's pixels, audio, DVS, or data from FinTech, AgTech or cyber, we turn it into spikes and then it hits the neuron fabric, where the learning rules set up the architecture: how many layers, and how the neurons are connected to the synapses. It has an on-chip processor complex. The processor complex basically functions so that we can do on-chip training rather than have to go off board to do training on a CPU and then download it to the processor. It's a fully contained system on a chip. You can see the inputs and the outputs. We've got the memory interface and a high speed chip-to-chip interface, which really allows us to gang them together, again putting them on a PCIe card with multiples on a card.
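A minimal sketch of the data-to-spike conversion idea: ordinary sampled sensor data (pixels, audio, pressure, temperature) is turned into spike events before it reaches the neuron fabric. The simple magnitude-to-spike-count coding below is an assumption for illustration; Akida's actual converters are not publicly specified.

```python
# Illustrative data-to-spike converter: map each digital sensor sample to a
# spike count proportional to its magnitude, so downstream processing can
# stay entirely in the spiking domain.
def to_spikes(samples, levels=4, max_value=255):
    """Return a spike count (0..levels) per sample."""
    return [min(levels, sample * levels // max_value) for sample in samples]

pixel_row = [0, 64, 128, 255]       # e.g. one row of 8-bit pixel data
spike_counts = to_spikes(pixel_row)
```

The same front-end pattern applies whether the samples are pixels, audio or digitised analog transducer readings, which is why one fabric can serve vision, FinTech, AgTech and cyber inputs.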

Standalone, this chip will be less than one watt; it will be appropriate for edge applications where you want to add intelligence at the edge. Think about an automobile, I use that as a good example. If you've got lidar, radar and ultrasound, today all of that data has to go across the internal bus in the car to get to some central processing unit. It basically clogs up the bus and it sucks up processor power, I mean both compute power as well as power with respect to energy. If you can move the AI to the transducer in a cost effective way, and that's why we're targeting a small die and a price point which allows us to get into the transducer, the AI gets done at the transducer. The only thing that you have to send across the bus is the actionable data. Rather than send the entire data stream across the bus, clogging up the bus and sucking up compute and energy, you basically just send what has already been determined by the AI in the transducer to be the actionable information.
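The "actionable data only" point can be sketched as an edge filter: the device classifies locally and forwards only events worth acting on, instead of streaming raw frames. The event structure, labels and confidence threshold below are illustrative assumptions.

```python
# Illustrative edge filtering: classify at the transducer, then put only
# high-confidence, actionable events on the vehicle bus instead of raw data.
def actionable(events, min_confidence=0.9):
    """Keep only classifications confident enough to be worth forwarding."""
    return [e for e in events if e["confidence"] >= min_confidence]

raw = [{"label": "pedestrian", "confidence": 0.97},
       {"label": "noise",      "confidence": 0.31},
       {"label": "vehicle",    "confidence": 0.92}]
to_bus = actionable(raw)   # two small events instead of a raw sensor stream
```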

This is maybe too much technical stuff here, but 1.2 million neurons is the target for Akida, with 10 billion synapses. That's huge, and nothing like it has been produced; even the test chips out of Intel and IBM fall well short. The Intel test chip, Loihi, had 130,000 neurons and a million synapses. We're talking about 1.2 million neurons and 10 billion synapses on a die that is a tiny fraction of the size of what Intel introduced as a lab project or test chip.

In order for their chip to run, they had to have basically what looks like an x86 processor sitting outside; it's really a two-die solution. We can replicate CNN functionality; we can do convolution, pooling and a fully connected neuron fabric. That's important because it allows us to attack CNNs head on and have them converted into SNNs. Rather than evangelising SNN as a technology in every case, we can simply take the CNN and help our customers, in the development environment, move it to an SNN. It may not be as optimal as doing it in a native SNN, but it will show benchmarks in performance. It will certainly show the lower power, the smaller size and the lower cost.

Again, this thing is right-sized for embedded applications or edge applications. We have run CIFAR, which is an industry standard benchmark for image processing. Online here I can't do the demos, but there's a very nice demo of us running CIFAR and running MNIST, a couple of industry standards. Ten classifiers, meaning you label them, you classify them, looking at whether it's a dog, a cat, a plane, a car or a boat, and then you run tens of thousands of images against those classifiers and you look for your accuracy.

To do CIFAR, an 11-layer network, we only used half of the neurons to get there: 517,000 of the 1.2 million, and 616 million synapses against the 10 billion synapses that are available. We only use a very small portion of the power of Akida to come up with accuracy that's comparable to the benchmark standard. On CIFAR we're able to process 1100 frames per second with 82 per cent accuracy. Those are outstanding numbers.

The next slide is a little bit about the computing fabric; the slide can be a little bit misleading here. This is efficiency, and you can see we're an order of magnitude better than Intel. The size of the Akida image there is a little misleading; it's not proportional. If we made it proportional to the image of the Intel chip, you would barely be able to see the Akida device. It's a very small fraction. I'll move through this stuff quickly. This is an interesting slide, because it really speaks to what Akida can accomplish. If you look at the far left, that's an Intel Myriad 2. It's actually a chip Intel got when it acquired a company called Movidius. This thing is price comparable and it's targeted at similar applications, but you can see 79 per cent accuracy at 18 frames per second per watt. That's the figure of merit: how many images can you process and how much power does it cost you?

If you look at Akida, which is the blue circle, you can see 82 per cent accuracy and 1400 frames per second at a similar price point. These are high volume prices, this isn't a one piece price, people are paying a lot more for lower volumes, but you're still in a very cost effective environment and you've got a chip that is two orders of magnitude better with respect to frames per second per watt. TrueNorth can process more frames per second, but it's a very, very big chip. It's very difficult to use and it costs 1000 bucks. You can look at the Xilinx FPGA implementation of CIFAR, at maybe 80 per cent accuracy, and TrueNorth is at 83 per cent, but again it's a $1000 chip; it's not practical in commercial applications.
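The comparison above uses frames per second per watt as the figure of merit. A minimal sketch of that calculation; the one-watt power figures plugged in below are illustrative assumptions for the sake of arithmetic, not disclosed measurements.

```python
# Figure-of-merit sketch: throughput normalised by power consumption.
def frames_per_second_per_watt(fps, watts):
    return fps / watts

fom = {
    "Akida (target, assumed 1 W)":  frames_per_second_per_watt(1400, 1.0),
    "Myriad 2 (assumed 1 W)":       frames_per_second_per_watt(18, 1.0),
}
```

On these assumed numbers the gap is roughly two orders of magnitude, which is the shape of the claim being made on the slide.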

These are the applications that are our first targets: embedded vision, because we're so well skilled at it and we've got relationships in the right places, the inbound traffic as well as the outbound reach. Object classification, generally ADAS and autonomous vehicles; surveillance generally; and vision-guided robotics, which could be drones. Cybersecurity, financial technology, and I would add agricultural technology to the slide as well.

This is the development environment; this is what people will be playing with for the next few quarters until we have an IC. They can replicate or simulate everything that will go on in Akida in this environment. We've got all the scripts for data pre-processing, data model training, whatever training modes you're going to be in, and then the instrumentation and visualisation, so that users, whether it's the folks in the automotive industry or the folks in the storage industry and their use cases, can actually see what's going on in the device: what's happening with the neurons when they're running whatever application they're looking at.

You can see this thing in the bottom left hand corner called the model zoo. It's kind of an interesting name. It's called that because it's like a zoo: you walk through a zoo and you see monkeys, you see bears, you see elephants. Here these are specific use cases. You can see the MNIST model in the zoo, you can see the CIFAR model in the zoo. We'll have GoogLeNet or ImageNet as a model in the zoo. These happen to all be vision applications, but you'll see a cybersecurity model in the zoo, you'll see a FinTech model in the zoo, and so on and so forth.
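The model zoo idea can be sketched as a simple registry mapping use cases to pre-built network descriptions that a user loads into the development environment. The entry names, descriptions and loader below are illustrative assumptions, not the actual API.

```python
# Illustrative model zoo: a registry of pre-built models per use case that
# users can pull into the development environment as a starting point.
MODEL_ZOO = {
    "mnist":         "handwritten digit classification demo",
    "cifar10":       "10-class image classification demo",
    "cybersecurity": "network anomaly model (planned)",
    "fintech":       "transaction analysis model (planned)",
}

def load_model(name):
    """Look up a zoo model by name; raise for unknown entries."""
    if name not in MODEL_ZOO:
        raise KeyError(f"unknown zoo model: {name}")
    return MODEL_ZOO[name]
```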

Then the Akida execution engine really is the core of Akida; that's the neuron fabric and everything that we talked about with respect to training methods, whether it be inference or jumping over into an autonomous training method. Again, the world's first spiking neural network system on a chip, very, very appropriate for both embedded and edge applications, but gangable and scalable for cloud and server applications. We expect sampling in the second half of 2019. There were some questions about the timing of Akida. We expect we will be taping out the device sometime in the second quarter, probably May of 2019. That is about a quarter different from what we thought six months ago. The reason for that is not impediments in research and development or design. What we're finding is that we're getting a good education: since we announced the development environment, and even prior to that, we had discussions under NDA with potential customers.

When the development environment was announced and then the architecture was announced, we got inbound traffic, and we want to make sure that we have those models built and we exercise our learning and our neuron fabric to make sure that we eke out the best performance we can. We know that we're well suited for those use cases, so we're getting more education, but I think that will lead to faster time to market and it will lead to a better overall design when we're done.

The last slide I threw in here, and I don't know if, when you folks download this, you can actually click on these hyperlinks. The press coverage for Akida was tremendous. This is not a comprehensive list, but these are some of the more well read trade magazines and some of the more notable authors and editors. At EE Times I think you guys probably know Rick Merritt; at Electronics Weekly, David Manners is a prolific writer. You can just go down the list, but there's a great deal of activity going on with press coverage, editorial coverage, investor interest and certainly customer engagements. I think that's actually my last slide. I walked through a lot of this stuff quickly. I've got quite a few pages of questions; I think I touched on most, but let me go through this.

Yeah, there's a question about NDAs; we talked about that. It's the nature of the beast: one cellphone manufacturer doesn't want another cellphone manufacturer knowing what technology they're using. One automobile manufacturer doesn't want another ... As things mature and you get into production or close to production, we probably will be more free to talk about the commercialisation.

There's a question about GPI, have they sold any of the new product? They have not sold any of the new product. It's been demonstrated, and it's been on the floor in at least one casino that I know of, probably more, since I haven't really been that close to what's being put on the floor since I went to the conference in Las Vegas. We're moving very, very quickly to finish up the development and have a commercial agreement in place. It's a big step; it's our first major commercial agreement. I did flag it in the update that went out this morning, and it was noted in, I think, last quarter or the quarter before.

The way this will work for us is we get 25 per cent of the revenue for any system, I think it's called ATS, which is the automated table solution. If they use vision, we get 25 per cent of the revenue. Again, they'll control the customer, that's the benefit of using them as an OEM, and they'll control pricing, but they have several modules, many modules. You can do currency security, you can do behavioural analysis. There's a whole bunch of things; some customers may want items one, three and five, and some customers might want all five. The commercial agreement will be a major milestone for us, and I believe that will be coming very quickly.

There's a question about Baritone. Baritone is still moving through engineering. There are things called Docker containers, which kind of encapsulate the BrainChip Studio OEM version. It's fundamentally a plug-in, but that should be completed relatively quickly. There's some work on our end, there's some work on their end. There was a question about Rockwell Collins, whether they're in the perimeter fence system. I don't know it as being called that, we probably have a different internal name, but certainly we know we're designed in to something that I can see why it might be called the perimeter fence system, because it's perimeter intrusion. What's going on with France? The Paris police force continues to love the product. There's been some turnover in that police department; there's a new leader there, so he's coming up to speed and getting an education. Design wins: I think we quoted 17 design wins.

How many will turn into revenue? I think the majority of them will turn into production if the customer brings the product to production; how successful they will be is really something that we can't determine at this point. We do try to pick our engagements with what we think will be the most successful customers. When you go to any marketplace, you've got kind of also-rans and has-been companies and you've got high fliers. With our limited resources, we certainly try to pick those customers that we think will be successful.

What areas and fields are they in? Most of the design wins at this point are related to surveillance. That could be the analytic tool sitting on a server at a police department, with however many video streams get connected. As you know, we've put out the announcement about Quantum on the storage side, and Baritone fundamentally on the storage side as well as the application side. We've got some nice live dialogue with one of the majors in the storage business. It's not one company, in this question here.

The 16 design wins are across basically 16 different companies. There's a set of reasonably technical questions which I won't bog this call down with, but I'd be happy to reach back out, or have Bob or Peter reach back out, about what happens when you gang 20, or 1,024, of these things together. Are we going to bump up against quantum computing? That's real technical stuff that's better to take offline.

There are several questions about what feels like a delay in revenue generation off of Studio. I talked about the design cycle, or the sales cycle. We've learned what it takes, we've got a pipeline that we've filled up, and it will get more robust over time, particularly with James Roe and Luis Coello. We are looking for a new executive in Australia. There's a question about anybody circling for a takeover; you know I can't comment on that. We are in the process of narrowing down the search for the CFO replacement. I've got one candidate that really, really is kind of my first choice, but others that are similarly qualified. We talked about the Akida milestones. I just mentioned Australia; we're looking for a new sales exec.

There's a question about how BrainChip would handle a coming economic recession. I don't know; I don't know that there's one coming. This seems to be a statement more than a question, but I have lived through quite a few recessions. Some segments of the technology industry are less impacted if you are in a vibrant space and you've got a multi-market approach, so you're not relying on one use case, one application. There have been companies that rely strictly on, say, power management chips into one cellphone manufacturer. That one cellphone manufacturer gets impacted by an economic recession and that supplier gets hurt really badly. I think that's one of the great benefits of Akida being a multi-market device: you're not susceptible to a wild swing in any particular end market.

Number of downloads on the development environment: at this point we're being very selective, because we are still debugging the development environment. I can't really quote a number, but we're being very selective. Very soon, I know, Bob and Peter want to open it up to research facilities and academics, because those guys will play with it very quickly and give you a great deal of feedback. It will be well into the public domain with customers, academics and researchers very soon. Question about patents: I touched on patents. We've got a whole bunch of work going on to protect our IP. We have probably one of the most sophisticated patent advisory groups helping us to find where to patent, when to patent, and how to put fences around patents so that you build walls around your IP.

I touched on AgTech and FinTech; those are still very much in the early innings, and again we're being selective. We don't want to put too much energy into startups, although they can be innovative. There are some meaningful companies that we're reaching out to as well. There's a question about the FPGA being on track; it is on track. I think it's important to recognise that the FPGA is as much for internal use, for us to debug our own RTL design, as it is for customers. The challenge with an FPGA is that there's about a 100-to-one difference in capacity between doing an instance in the FPGA and doing it in hardened silicon with a transistor-level logic design. In an FPGA you could never get 1.2 million neurons and 10 billion synapses, but it's certainly on track.

I talked about patents. There's a question about publications. I just showed you 15 or more publications that picked up Akida, but this is a very good question about whether we would be targeting magazines like The Verge, Wired, maybe Fortune magazine. With Akida now being out in the public domain, although we probably want to get the provisional patents filed first, the idea that we can really raise the profile of the company with things outside of the trade journals for the electronics industry or the surveillance industry is a very good idea.

There's a question about whether, with the capabilities that we're building in, we are really approaching at some point artificial general intelligence. I can tell you, that is the brass ring for Peter. Akida makes major, major steps forward and brings spiking into real commercial applications, but artificial general intelligence is further out. He thinks about that stuff all the time. I think I've covered everything, so I'm going to sign off now. Thank you very much for joining. Any followup questions, you guys know how to get in touch with me; otherwise we'll talk next quarter. Have a good day. Thank you, bye.