BrainChip Holdings (ASX:BRN) June Quarter Update Presentation

Company Presentations

Good morning, everyone. Thank you. Speaking to you this morning from Sydney, Australia. I'm going to try and keep this relatively brief so we have adequate time for questions and answers. I think last time we ran a little long, and I actually had to cut questions short.

I'm going to flip to the first slide, which is the disclaimer. Lots of words. Very dense. I'll leave it to you to read at your convenience. I think you've all seen it before. Let me move forward.

Significant developments for the quarter. It was a very productive quarter. We made a lot of ground building our team and getting our product to market. The appointment of "Manny" Hernandez, Emmanuel Hernandez, to our Board was really exceptional. Manny and I have served on two boards together previously. He really is a marquee director as well as operational executive, with 40 years of technology experience. He was Chief Financial Officer of Cypress Semiconductor and currently sits on the Board of ON Semiconductor, a large publicly held, I think NASDAQ-listed, semiconductor company in the US. He's been on a number of private company boards as well and spent some time doing venture capital investing.

Just a week ago or so we introduced Tom Stengel as Vice President of Americas Business Development. Tom's a 25-year veteran with experience driving revenue growth at major Original Equipment Manufacturers. I'll talk about the OEM sales model in a few minutes. You can go directly to customers or you can go through Original Equipment Manufacturers as a business strategy. Tom has worked in the video analytics business for a long time, has relationships -- well-established relationships -- with many of the major OEMs in the video analytics business.

Maybe most importantly, we introduced our BrainChip Studio software suite. It's an integrated software solution for pattern matching and facial classification. I'll talk about that a little bit more in a few moments, but it's probably the most important step forward that we've taken, a truly commercial off-the-shelf solution. To date, with the acquisition of Spikenet, we've got a wonderful and marquee customer base. But frankly those were kind of one-on-one deals where you were somewhat customising software and providing tailored solutions. This is an off-the-shelf solution which really has the ability to scale for large deployments.

We announced Safran and our relationship with them, this collaboration in machine vision. I think you all know that our focus on civil surveillance and commercial surveillance is now being augmented with machine vision, primarily visual inspection. Safran in this case works with another major national corporation for the assembly of mission-critical parts in the airline industry or aircraft industry.

The initial trial of Game Statistics. Most of you are well aware that Game Outcome has been trialled. Game Statistics brings another leg of growth for us. It's a major casino in Las Vegas. The trial is going well. The system is being stress-tested.

And we successfully raised -- and this is US dollars -- $4.9 million (A$6 million) through a share placement, which gives us plenty of runway to execute on our business plan.

So, overall, a very productive quarter. Again, I'd really bring your focus to the introduction of BrainChip Studio. We spent the better part of the last six months taking all of the software that had been developed by Spikenet, integrating it, refactoring the software so it's up to modern code, and then integrating that in a neural network on a piece of silicon as well.

Just to kind of bring us all back to a level set, the artificial intelligence arena, you know, it gets daily headlines in the newspaper. It's always important to separate what we do from the kind of general overarching artificial intelligence sector.

Deep learning, as many of you know if you've been watching the company for a while, is quite different from what we do. It's large data sets, millions of samples, weeks or months of training; it requires a lot of memory, a lot of computing power. Certainly it is appropriate for many use cases. You can see Google, IBM, Intel and many other large multinational companies playing in the deep learning space. And the "deep" in deep learning, just to kind of put it in perspective, refers to the number of neural layers that are used in order to execute on whatever the use case is. That's not our space. We live in a place called autonomous learning. It takes very, very small sample sets -- in some cases a single model. It provides efficient forensic and real-time analytics.

Two modes for us. Supervised learning, where we capture a model -- it could be a face, could be a pattern -- we compare it to a video stream, and we find that face or pattern in a video stream, again either forensically or in real time. As we move forward with the original SNAP64 core and the integration of the JAST neural model as well as the JAST learning rules, we move into a place we call "unsupervised learning". There are no models, there is no training. We simply (not "simply", but in a very complex way) provide efficient real-time learning and analytics. We do pattern recognition. Repeating patterns. Again, that could be video analytics. Maybe more importantly, it allows us to analyse data streams. That could be in the fintech arena, where you're looking at high-frequency trading in options and commodities. It could be in cyber security, where you're looking for denial-of-service attacks or malware. It could be in the health industry, the healthcare industry, as devices... For example, in type 1 diabetic models, people are using continuous glucose monitors. There's an enormous amount of data that's harvested. It doesn't usually get used until someone visits a doctor. We can recognise aberrant patterns in real time.

So, always important to separate ourselves from the deep learning space. The autonomous learning -- whether supervised or unsupervised -- is very fertile territory for a growth business.

Next slide, which is, you know, really focusing on our product roadmap. I think there has been some confusion as we've had to change names. I think some have speculated that SNAP64, you know, SNAP has gone away. We've now got the moniker of "BrainChip Studio". I think everyone knows about Snapchat and its big public offering. We didn't want confusion in the marketplace, so we rebranded the product. SNAP64, the original neural network core, Peter van der Made's vision for the company, really focuses on unsupervised learning.

BC Studio, which was recently introduced as a software, BrainChip Studio, is a software platform that is really predicated on all of the work over ten years that the folks at Spikenet did, the additional intellectual property that was added by BrainChip. That is being marketed as a software solution, off-the-shelf. BrainChip Studio does pattern recognition, single shot learning, both for pattern recognition and facial recognition.

Very soon to be introduced is the hardware Accelerator for BrainChip Studio. You can see in the bottom right-hand corner we've taken the creative licence to put the BrainChip logo on here. That in fact is an FPGA. I think everyone knows that, at least at this juncture, we're implementing our hardware design in a field-programmable gate array. BrainChip Studio Accelerator is expected to be released before the end of the quarter, a very nice one-two punch into the marketplace: software that can run on a server -- and, in many cases, customers will continue to use that software -- and then, in other cases, they'll take advantage of the Accelerator card, which improves throughput and dramatically improves density. You can run a multiple of the number of channels in a server when you put the Accelerator card in. It reduces the total cost of ownership for the customer, and allows us to scale into very large deployments.

The box at the top right -- call it at this point "BrainChip Spiking Neural Network" -- really is the culmination of taking the spiking neural network core that was developed for SNAP64, implementing the JAST neural network model, as well as the JAST learning rules. That will be focused both on supervised but, more importantly, unsupervised learning applications. That development is underway, and we'll keep you abreast of that in the coming quarters.

So, that's the product roadmap. I hope that makes it a little bit more clear for folks. So, today, again, in the marketplace, we're aggressively marketing BrainChip Studio. It's the software suite. Soon to be marketing the Accelerator for BrainChip Studio. And then, some time in the future here... Again, we'll keep you abreast of the development of introducing the next-generation JAST spiking neural network.

So, on the launch of BrainChip Studio, the next slide, this is the first commercial... Frankly, it's the first commercial product introduction for the company. Again, SNAP64 was developed back in 2015 and early 2016. The acquisition of Spikenet occurred in September of 2016. Taking that software, refactoring the software, developing a core that was more suitable for the Spikenet approach to spiking neural networking has now come to fruition. It's been a great effort and a lot of success on the part of both the design team, the hardware design team in Newport Beach or Aliso Viejo, California, as well as the software and algorithm team in Toulouse, France. It really does represent the culmination of over ten years of advanced algorithm development, extracting patterns or faces, providing matching or classification for end users.

Those of you that are familiar with the company are well aware of the marquee account base that has been using kind of the one-off solutions that came out of Spikenet -- the Department of Homeland Security in France, the French police force in Paris, the airport in Bordeaux, all doing either facial recognition, pattern matching, perimeter intrusion. I'm certain everyone here is also aware that once the world recognised that we could effectively and very quickly and efficiently match a pattern, that the folks in the gaming industry... We can match a face, we can match a pattern, therefore we can match an ace, a king, a queen. Game Outcome, you know, has been marketed for the better part of six months or so now, and Game Statistics is coming up right behind it.

The refactored software, as I said, is off the shelf. It's as close as you can get to what we might call a shrink-wrapped solution where, you know, someone comes in, they can buy a licence, they get a key for that licence, a code, they download it from an FTP site remotely, and off they go. In the case of BrainChip Studio, it's a very nice user interface. You can determine whether you want to do a pattern match or a facial match and walk through all of the attributes and thresholds, which allow you to really, you know, zoom in on exactly what you want to discover.

The features and benefits, as I said, are well-proven. This is battle-hardened software. The refactoring was really taking the existing software, bringing it up to date, making it user-friendly. And, again, it's well-proven both in homeland security and law enforcement.

With a single screenshot... Some time in September, when I'm back here with the folks at the Finance News Network in Sydney, we'll also be taking a day to do product demonstrations. I hope we can get those online so that you can see them. If you're not in Sydney, you can dial in and see what we're up to. Single screenshot is very, very unique in facial and pattern recognition. Any video stream -- you know, take a single frame, highlight exactly what you want to find with your cursor (we can go down to as little as 20 pixels by 20 pixels) and then run hours and hours of video. It could be high definition, lower definition, noisy environments. What's really unique about BrainChip Studio and the architecture that we implement is that it works in low-light, low-resolution environments, so existing infrastructure, in subways and airports and casinos -- no change-out of cameras to high definition. We can pick a 20-by-20 or 30-by-30 pixel image or pattern and identify that in, you know, tens of thousands of frames. When you're running video at 30 frames per second and you're looking at hours of video across multiple cameras, you can do the math and figure out that there are tens of thousands of frames, and we can identify both the timestamp of finding that image across all of those frames as well as the coordinates in that image, and feed that back to either law enforcement, the gaming authorities, or whatever end use cases we're trying to support.
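As a rough sketch of the "do the math" comment above -- all the numbers here are illustrative assumptions for the sake of the arithmetic, not product specifications:

```python
# Back-of-envelope frame counts for a forensic video search.
# fps, hours and camera count are illustrative assumptions.
fps = 30        # frames per second
hours = 2       # hours of footage per camera
cameras = 4     # simultaneous streams being searched

frames_per_camera = fps * 60 * 60 * hours
total_frames = frames_per_camera * cameras
print(frames_per_camera)  # 216000 frames per camera
print(total_frames)       # 864000 frames across all four cameras
```

Even a couple of hours of footage at 30 frames per second quickly runs to hundreds of thousands of frames, which is why the search has to be automated.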

So, you know, the off-the-shelf package allows us to scale. When you're doing somewhat customised software for end users, there's a lot of hand-holding, there's a lot of manpower that's required. Having refactored the software and having a commercial, off-the-shelf solution, it allows us to scale both across a wider number of customers, but larger deployments for those customers, because of the ease of use and the friendly user interface that we've put together.

Moving on, the hardware Accelerator... This is much talked about. The hardware Accelerator takes basically the same software, and it runs it on an FPGA in this case (it could be an ASIC in the future, could be an IP block that gets sold to someone that's building their own system on a chip), but it accelerates and improves the density. And where a server today might be able to process four or five channels of video simultaneously, when you run it on the Accelerator, it will be a multiple of that -- three, four, five times, could be more, could be less, depending on the number of analytics, the quality of the video, the number of frames per second. A whole bunch of different attributes play into just how much density and throughput you can get out of the Accelerator.

The release is expected in September. I said that it would be the third quarter when I spoke at our recent annual general meeting. That's well on course.

It's a PCIe card, so, you know, existing customers can take a screwdriver, open their server, plug in a PCIe card, run the same software that they've been using and get much greater density. For us the benefit is that if you can run four or five channels simultaneously in software on a server, and you can run a multiple of that on the Accelerator card, that reduces the number of servers that need to be deployed to cover hundreds of channels rather than tens of channels. Hundreds of channels would otherwise consume dozens of servers, and that's problematic in some industries. Police departments don't want to build a data centre in their basement, and nor do the gaming authorities. When you can run multiples of that software solution on the Accelerator, it increases the density and lowers their total cost of ownership, and our business model gains a lot of leverage from those savings. We harvest some for ourselves and allow them to decrease their overall total cost as well.
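The server-count argument above can be sketched numerically. The 5x density multiple is a hypothetical figure chosen for the sketch; the talk only says the gain is "a multiple" that varies with resolution, frame rate and analytics:

```python
import math

# Illustrative comparison of server counts with and without the card.
channels_needed = 500         # cameras to cover in a deployment
channels_per_server_sw = 5    # software-only, per the 4-5 channel range quoted
accel_multiple = 5            # hypothetical density gain from the PCIe card

servers_software = math.ceil(channels_needed / channels_per_server_sw)
servers_accel = math.ceil(channels_needed / (channels_per_server_sw * accel_multiple))
print(servers_software)  # 100 servers with software alone
print(servers_accel)     # 20 servers with the Accelerator card
```

Under these assumed numbers a 500-camera deployment drops from a small data centre's worth of servers to a single rack, which is the total-cost-of-ownership point being made.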

So, the Accelerator really does take what is now a scalable software package, because it's refactored, it's got a nice user interface, it's easily deployed across many customers without a lot of hand-holding. You add the Accelerator, which gives you the density and the throughput, and it's a very nice solution for the vision applications that we're currently focused on.

And I do want to go back and just harken back to that slide on where the product roadmap goes. Today, our focus is primarily on vision applications. That's because we're very good at it -- the pattern recognition capability, which also translates into facial recognition -- but as we bring up the SNAP64 core with the JAST neural model and the JAST learning rules, it opens up a whole other range of opportunities for us in data analytics. And we're looking at... It could be fintech for financial transactions, it could be cyber security for malware and denial-of-service attacks, and a wide variety of other data streams in which we can recognise repeating patterns.

So, I'm not sure this is the last slide, but close to it. Just a quick summary. Top left-hand corner, you can see what we do today, and in the top right-hand corner, in that little right inset, you can see the guy with the red box highlighted. A very poor quality image, and this is again where this spiking neural network excels. It wasn't his face that we found. He's got a unique pattern on his T-shirt. Across three video streams over several hours, we were able to identify this suspect in multiple locations, timestamp it, provide coordinates and, unfortunately for him, it didn't turn out very well. You can see on the left, that's a subway station. We highlight pedestrians. We can recognise the difference between a person, a dog or a cat. We have algorithms that look for left luggage -- if someone loiters too long, puts their bag down, walks away. You can see the casino or gaming application in the forefront there. And the SNAP card, which has now been renamed BrainChip Studio.

And just as an example, which I found very exciting: we have a very receptive customer base who is very familiar with the software. It has been battle-tested by the French Homeland Security, the Paris police force. You know, it's good to be in the right place at the right time. I think the announcement that Paris has taken on the Olympics -- that's 2024 -- really gives us incremental opportunities in a marketplace with law enforcement that is very, very familiar with our solution and, you know, is a very big fan of our capabilities.

The Bordeaux Airport we've talked about, Airbus we've talked about, Cisco, Mohegan Sun, other casinos that we're working with in Las Vegas as well as globally. I think last time we spoke we talked about our pipeline of discussions being 17 casinos worldwide. We certainly are looking to expand that pipeline and close as many of those opportunities as we can.

Just a little bit of overview of the company. We're still running relatively lean, and we intend to continue to do so. Recent additions in sales and marketing are really the prominent theme for now. We have a product that works, hardware that works, software that works, customers that have provided endorsements. It's really now time to build out that sales team and really generate scale and revenue growth.

Aliso Viejo, which is predominantly our hardware team, has 10 people. Toulouse, France has 13. San Francisco -- I'm there by myself right now, but that's soon to change. And actually that should be two. I'm sorry -- Bob Beachler's there as well, our VP of marketing and business development.

Capital raised -- these are in Australian dollars. You know, translate them to US dollars in order to provide comparables with respect to others in our industry. But with $19.4 million total in, I think when we published our 4C early last week, we had about US$4.5 to US$4.6 million left on the balance sheet, so we're very cash efficient. To have hardware, a commercially available software package, soon an Accelerator card, and customers that, you know, we've got working relationships with, that are big fans of the product and have used the product -- that's very cash efficient as compared to some of the comparables in our space.

So, a quick summary. We're in a position to scale. We have software that's in the marketplace, commercially available off the shelf, a hardware solution which will be introduced by the end of the quarter. The development of the JAST hardware and the SNAP64 core is ongoing. Our sales team is being built out. Tom Stengel's looking for people in the US. Hung Do-Duy, our Vice President of Business Development in France, is looking for additional team members as well. So, I think we're in a very nice position to see the company scale over the short term.

I don't know if that was 15 minutes or not, but I wanted to leave ample time for questions.

Q&A

Has Cisco endorsed BrainChip technology now? If so, where to from here?

I think the basic question is "How is it going with Cisco?". Cisco has an effort with us in Western Australia. I'm having dialogue with folks in Sydney as well. I don't know what I would mean by "endorsement" -- they don't really go forward and endorse -- but they're excited about both the project that's going on in Western Australia and, I think, maybe more importantly, what we can do with pattern recognition, facial recognition and video analytics. So, the initial effort with Cisco, which is ongoing, is in Western Australia, and that's really with regard to vehicle identification and projects that they're working on in Western Australia which I'm sure could go nationwide. But I'm really enjoying the interaction that we're having with the folks in Sydney, who are really kind of coming up to speed on what it is that we're doing with BrainChip Studio.

Has the company been holding off on securing a contract until the hardware market is ready?

No. We will continue to market the software aggressively. There are customers that have installed infrastructure -- you know, they don't necessarily have thousands of channels that they have to deal with, but they have maybe intense analytics. So we will continue to market the software. I think it's important to recognise that the software package -- an off-the-shelf package, not a big hand-holding exercise -- was just released. I think it was July 19th. Some existing customers that are using the software certainly will migrate to the hardware Accelerator -- again, for the purposes of getting throughput and density. You know, if you take any one of the major metropolitan cities, you're going to find, you know, somewhere between 500 and 5,000 cameras in a downtown area. And if you can run four or five channels of video simultaneously on a server in software, that's an awful lot of servers, that's an awful lot of overhead, and there are a lot of problematic issues. Those customers are really ripe to migrate to the Accelerator. It gets them a multiple of that number of channels. Again, the multiple depends on, you know, what the resolution is -- is it high-def, is it 720p -- and is it 30 frames per second or 15 frames per second, so it's hard to put a fixed number on it, but the density increases dramatically. So, direct answer to the question -- no, we're not holding off on securing any contracts. We spent a lot of time making sure that the software launch went smoothly and that we have a very user-friendly package, and that allows us to scale quickly.

Is the maximum simultaneous videos using FPGA hardware 16 videos?

I think I just addressed that. It really... You know, it's awkward to say. It depends. But if you think about running 4K video or 1080p or 720p or 480p, and you look at, "Am I running 30 frames per second, 15 frames per second?" -- in some cases, 5 or 10 frames per second is adequate. So, it's a kind of multivariable equation as to how many channels you can run simultaneously, depending on those factors, as well as how much analytics you are trying to run. Are you simply doing facial extraction? Are you doing facial extraction and facial classification? Are you doing pattern matching? So, it's a tough question to answer with a very specific number. At some point we'll have, you know, a table, kind of a menu of: if this is your frames per second, and here's your resolution, with this number of analytics, here's the number of simultaneous channels you can run. Those things are all being discerned as we launch the hardware product later in this quarter.
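The "table, kind of a menu" described in that answer would amount to a lookup keyed on resolution, frame rate and analytics level. The sketch below is purely hypothetical -- the keys and the channel counts are placeholder assumptions, since the real table was still being characterised at the time:

```python
# Hypothetical sizing table of the kind described above:
# (resolution, frames per second, analytics) -> simultaneous channels.
# All entries are placeholder assumptions, not measured figures.
SIZING = {
    ("720p", 15, "extraction"): 24,
    ("720p", 30, "extraction"): 12,
    ("1080p", 30, "extraction+classification"): 6,
}

def channels_for(resolution, fps, analytics):
    """Return the assumed channel count, or None if uncharacterised."""
    return SIZING.get((resolution, fps, analytics))

print(channels_for("720p", 15, "extraction"))  # 24
print(channels_for("4K", 30, "extraction"))    # None (not yet characterised)
```

The point of the structure is that there is no single answer to "how many channels?" -- each combination of the three variables gets its own measured number.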

Does the company expect to improve significantly on the June quarter revenue results in this quarter via software and/or hardware releases? I've also noticed a career section on the website, and was wondering...

And it breaks off there. Revenue quarter to quarter at this point, it's really hard to call. Again, the software package was just released -- off-the-shelf software package was just released in July. While we certainly have a very nice lead generation source here, we've got great customer engagements, you know, it's going to be a little lumpy until we have a wide and diverse customer base. So I don't know that I can give you a specific answer on kind of quarter to quarter. The numbers are relatively small and we expect over the coming quarters to see meaningful growth. On a quarter to quarter basis I'm not particularly sure.

The career section -- yeah, the career section has been a wonderful asset for us. You know, we've filled several positions. We've added software talent in France. We've added an engineer or two in Aliso Viejo. I think we're well-staffed there. We've got candidates coming in for what we call "field applications engineering". You know, as you do business development, you don't want to distract development engineers from having to engage with customers for, you know, questions and engineering support. So, field applications engineering is really a significant part of what we need to accomplish in building out the sales team.

Presumably FPGA testing is in progress. Can you comment about the internally observed performance to date.

Well, it's far more than in progress. We're looking at launching the FPGA or the Accelerator card in this quarter, so we're really buttoning things down. Performance is what we expected, if not better. As you get towards the end of an FPGA development, you really can optimise the FPGA implementation. You know, an FPGA is kind of like a quilt, and you've got fabric that you connect in a variety of ways. We've found that we can optimise the FPGA and get more cores in the FPGA implementation than expected. That gives us more simultaneous... you know, potential for simultaneous channels or a similar number of channels with more intense analytics. So, yes, it's gone very well. I think we're seeing results that are very encouraging, and, again, we're well past kind of initial testing. You know, we're buttoning things down for a launch.

Will customers purchase BrainChip Studio before the Accelerator is available?

Customers will purchase, or we expect customers to purchase BrainChip Studio in an ongoing fashion. As I said, we'll continue to market both. The Accelerator doesn't preclude us from marketing the software solution to those that want to keep it on their own server. So, we'll market both.

Would it have been better to launch both together?

I don't necessarily think so. I think that customers have been playing with the software in one fashion or another through, you know, kind of the Spikenet interaction. I do believe we've got a lot of attention on the capabilities of the analytic tools by launching BrainChip Studio as a software solution. It was ready to go and I saw no reason to delay that launch and engage with a wider group of customers and try to get to scale on the software as quickly as we could. Time to market, time to money. And, again, it's not mutually exclusive. People won't all gravitate to the Accelerator.

What income has been generated by Game Outcome?

We're not going to disclose revenue by any particular product line or application. It's just not good business practice.

What income do you estimate for BrainChip Studio in the next 1 to 5 years?

We're a public company. We don't provide forward-looking guidance of that sort. That being said, we hope to have engagements and interactions with sell side analysts who will build their own models based on the total available market, the served available market, and an estimate of what share of market we could capture.

Was the hardware discussed and demoed at the Mizuho conference?

It was certainly discussed. I wasn't there and I don't think Bob demoed it. We have done some demonstrations. Again, in September, when I'm back here in Sydney for the CEO Sessions, we are going to have at least one, if not two, product days. As for what we'll demo at that point -- I would certainly hope we'll be demoing the software, which is already in the marketplace, and that will be right on the cusp of the product introduction of the Accelerator.

Can you please share with us how many NDAs we currently have in place and how many of those are with top-tier companies?

No, I can't do that. That's the nature of NDAs, and I certainly wouldn't want to lead our competitors to the trough. I could say it's significant, and we've got many NDAs in place.

There's a question about top-tier companies. I would say virtually all of them are with top-tier companies. You know, when you look at the announced accounts that we make public, you could think of companies in a similar class. As a small company, your focus has to be on the leaders in industry, both to validate your technology and for the greatest opportunity for a return on invested capital -- the invested capital in R&D as well as the invested capital that goes into building a sales team.

Any update on the New York school?

The New York school is coming along. It's dragging out a little longer than I would have expected, but I would hope we'll have something out of them before the end of the quarter. That's kind of my expectation.

We've heard a lot about trials. When will we be seeing revenue from customers?

I think I've talked about that already.

If time runs out and some questions are not answered, can you post answers on the BRN website?

That's a great question. Thank you. That's a great idea. We can have a place where we can field questions. I mean, I don't know if it's the website or some other venue, but I think it's our responsibility to actively communicate with investors, at least as best we can given the restrictions that we have as a public company, but also the obligations that we have as a public company.

Do you envision another capital raise in the near future?

I certainly can't comment on that. I will say that we've closed the 4C with $4.5 million, rounded to $4.6. That's a good chunk of cash.

Do we expect any major surveillance deployments with payments of $500,000 to $5 million?

I'm not going to comment on what our expectations are in the near term. Certainly, the opportunity is there. We've got great customer engagements, we've got great product, and these kinds of numbers are certainly within what's a reasonable realm of expectations.

It is understood trials are a lengthy and cumbersome process. Are you able to provide any insight into the general progress for trials?

Yeah, trials are interesting. They take a long time. Every day feels long, but if you think about... I'll give you an example which I can speak to publicly. Our engagement with Mohegan Sun started a month or maybe six weeks after I joined the company. My first full week was the first week of October of 2016. So, I think about late October, maybe even early November. Their goal was to run through the trial and get the deployment on Baccarat done before Lunar New Year, which I think in 2017 was the early part of February. So, that trial, you know, it went through the months of December and January, and tables were up and running for... I'm pretty sure it was for Lunar New Year, you know, the beginning of February, some time in February. So, trials can go quickly. On the other hand, if people have incremental requirements and they want to tweak this, they want to stress-test a little bit more, it can go beyond a couple of months and, you know, it could be as long as six months. But, you know, trials have gone well. Again, the most recent trial is of Game Statistics at a major casino in Las Vegas. We're learning a lot; we're collaborating with a major player in Las Vegas to make sure Game Statistics is stress-tested. How many video streams can you jam through, and what is the frame rate that can be jammed through? So, optimising the system is really kind of the end of the trial period, before you go into deployment.

A new drone fleet was just released. Is BrainChip technology being used?

I can't comment on the specifics of where we're being used and where we're not. Customers don't necessarily like that, particularly in the security arena. In the drone space, we're very actively pursuing opportunities. The supervised learning mode is very important where users know what they're looking for: a tank, a missile, or, on a farm, aberrant growth in crops. There are unsupervised arenas as well, where the end user, whether in civil surveillance or in the commercial arena, which could be agriculture or critical infrastructure such as pipelines and refineries, doesn't know what they're looking for. And as JAST gets implemented into the core wraparound of the SNAP64 architecture, I think our opportunities blossom even further, but we'll keep you up to date. I can see there's attention on the drone arena. One of the interesting applications in the drone space is not the drone itself so much as finding drones. Drone finders, drones that look for other drones, are becoming a much more prominent need: drones can be used as weapons, and they can't be picked up by radar. The unsupervised learning mode of the JAST/SNAP64 integration really brings great capability there.

Please name an existing customer of Studio.

Studio was just released on the 19th of July, so that's a tough question at this point. My belief is the first customers of Studio as a canned package will be existing customers of the Spikenet solution who want to go with larger and wider deployments; and, as those things mature, I'll certainly be able to discuss them.

What is the retail cost for the software versus the hardware accelerator package?

That varies a great deal, and it depends a whole lot on how many cameras. Let's stick with the surveillance application. How many cameras? Are we in a licensing model? I should really talk about this a little more: the business model depends on the end use case. In civil surveillance, it's most likely done on a per-camera basis, depending on how much analytics the end user wants to license. Are they just doing facial extraction and building a database? Facial extraction and facial classification? Pattern extraction and pattern matching? The licence per camera could range anywhere from $500 to $1,000, up to as much as $4,000 or $5,000, so that's a tough question to answer precisely. When you get into hardware, it's very similar, except that you're jamming more channels through that hardware, so your leverage goes up. The hardware cost, and the price of the hardware, is somewhat irrelevant, even if it were a pass-through. They say razors and razor blades: get the hardware into the hands of a customer, then harvest the licence and the maintenance fee. When you go into the OEM space, if you're licensing the code, or licensing the RTL design, the digital design as well as the code, then you've got a licence fee and an ongoing royalty. In the gaming space, and I think we've talked about this publicly a number of times, it's different. It's not on a per-camera basis, it's dollars per table per day; although if there's one camera per table, I guess you could say it's on a per-camera basis. That is certainly an annuity model where we're getting paid on a daily basis for those cameras that are in use, and that's in perpetuity.

With regards to Game Outcome and Game Statistics, when do you anticipate these trials will progress to signed contracts and recurring revenue?

In the case of Mohegan Sun, that's already occurred. Certainly we'd like to see them move to a larger deployment. It is interesting that, almost universally, the casinos like to go after Baccarat first. I think it's the highest-stakes table game, and at least what I've heard offhand is that it also happens to be the game where people, for lack of a better word, cheat the most. We'll announce deployments as we go into scale, both with those on trial now and with those we expect to move to trials this year.

Do you see some sales likely in the short term after release of cards? If so, what division is likely to be your first contract?

OK, I'm not going to forecast who our first contract is going to come from. We've got lots of opportunities; we've got lots in the pipeline. I don't know what you mean by "short term" either. The card runs the same software. So if you think about the short-term opportunities, the closest time to money will likely be those customers that were already using the software prior even to the BrainChip Studio launch, because it is the same software, refactored with a nice user interface; they want to go for large-scale deployment without building a data centre and running a bunch of servers. So, I think it's reasonable to expect that our first deployment of the Accelerator will be with an existing customer that wants to go bigger and broader.

How will you work out the cost per camera that you spoke about?

I think I just talked about that.

When are we expecting to reach a material contract?

I think we're starting to get a little bit redundant here.

Could you start up a Twitter account?

(Laughs) Coming from the United States, where we've got President Twitter, I don't know. Yes, I suppose Bob Beachler could use a Twitter account. For those of you who have access, he does actively use social media in the way of LinkedIn, so anybody on LinkedIn will find Bob has a relatively active presence. When we're announcing things of meaning, and certainly anything material would also be covered by the ASX, the day-to-day stuff, Bob posts there; and if we're looking for people in certain marketplaces or for certain positions, those would be there too. But I'll talk to Bob about the potential of a Twitter account. I think he might shudder at the thought.

Is the Spikenet website being updated?

Yes. The Spikenet website, frankly, will disappear; it will be subsumed into the BrainChip website. We try to keep a lean team, and we wanted to make sure the BrainChip Studio launch went well first. We did a facelift, not an overhaul, of the BrainChip website. Now we'll take the next step and integrate the Spikenet website, which has a lot of good content in it. The point is branding the company as "BrainChip", not "BrainChip or Spikenet". Spikenet has a good presence and a brand they've built in France, but that brand is being integrated into the BrainChip brand; I've spent a lot of time in France myself, as has Bob. Collateral materials will now be branded as BrainChip and the website will be integrated.

When do you expect to see our first revenue?

Well, it's small, but we have revenue. The Spikenet software has been shipped and we've collected cash from the Department of Homeland Security, the French police force, all the organisations on that slide. I think this question is really pointed at when we expect to see revenue from the refactored and newly launched BrainChip Studio, as well as the hardware solution, and I would just say that we're working hard on it. Studio was only released July 19th, the hardware will come out some time before the end of the quarter, and we will certainly keep you apprised.

You quote existing customers. Who are they? This must be public information.

I think it's very public. Most of the logos are on the slide in the deck, so I won't repeat the names again and again, but it certainly is in the public domain. That's not to say that, on an ongoing basis, customer names are necessarily public information. Having been in the technology industry for the last 35 years, I can tell you that if you do business with Apple and you ever use the name "Apple", they'll throw you out the door. Some customers are very sensitive; others are willing to write endorsements.

How does our civil surveillance offering compare to others in the space?

This is a good question. Look, there are lots of folks trying to do facial recognition, and lots of folks doing facial recognition and pattern recognition. The strength of the spiking neural network is that it takes a very small amount of data and a very small sample set: a single shot of a face, one shot of a pattern, and you can generate a spike map, which is a subset of the data in the image. That spike map can be harvested from an image in a low-light, low-resolution, grainy, noisy environment. If you look at other solutions, and I'll name names so you can at least get a sense: IBM has a solution, Morpho in France has a solution, NEC has a solution. Their predominant strength is very high-definition, very well-lit imagery. If you want to do facial recognition for security purposes, where a face opens a door to a mission-critical area or opens a file of medical records, that's certainly the method of choice, but you have to have a high-definition camera, you have to be less than a metre away, you have to have a flash go off. That's not what we're doing. We're taking existing infrastructure: subways, bus stations, airports, grainy images, lower resolution, cameras in the sky of a casino that are 10m or 20m up, and we're able to identify an ace, a king, a queen, a jack. In civil surveillance we can identify a cufflink. Any pattern that is unique, we can extract. That's the difference between a spiking neural network as we implement it and what people are doing with more traditional convolutional neural networks, or even deep learning using convolutional neural networks.

I think the rest of the stuff is starting to look a little like the same question, so, with that, I think I'm going to call it a day. I do enjoy interacting with our investors. I get out and visit many. I don't get a chance necessarily to see as many in the retail channel, so I put a lot of value on this interaction, and questions are all good. So, thank you very much, and we'll talk to you again next quarter.