BrainChip Holdings Limited (ASX:BRN) CEO, Louis DiNardo, provides a June 2018 Quarter Update covering the Akida Development Environment, BrainChip Studio and the Veritone announcement.
Good morning, everyone. This is Lou DiNardo. Thank you for joining our call. This is the June 2018 Quarter Update. I'm actually dialling in from Europe. I know that some of you had dropped me a note as to the timing of this call. So it's about 8am here in Europe. I think it's about 4pm in Sydney, and one o'clock in Western Australia. I'm going to cover a lot today. As you know, we do these calls on a quarterly basis. I think it's a good practice. It's a good way for us to interact with individual investors. I did get a number of questions and, as you know, I try to weave those answers throughout the presentation. If I missed anything, I'll go back over the list as we get towards the end of the call.
I'd also like to say the questions I think were really good. A lot of well-informed questions, really focused on the Akida development, the Akida launch, which is really the big effort for us and the big prize. So I'm going to jump right into the presentation.
Page two is the standard disclaimer, you've seen it every quarter, you've probably seen dozens of them. Read it at your own convenience. I'm just going to move on.
Just an overview of what we'll touch on today and these are the quarter highlights, and it's really not just a quarter update, it's really an update since the last quarterly update. If we were to truncate this thing and not talk about things that happened subsequent to the quarter, we'd be leaving out a lot of material. So subsequent to the end of the quarter, it's after the June quarter, but before this call, we announced the Akida Development Environment.
This has been an extremely exciting development for the company. We're getting lots of good press around the world. I'm going to go into some detail about the Akida Development Environment and what that means in the evolution of the Akida project and the ultimate introduction of an integrated circuit.
BrainChip Studio, we had a secondary launch in the quarter. Fundamentally, this was the Linux version of BrainChip Studio and BrainChip Accelerator. The first launch was on a Microsoft platform, but a great deal of industry works in Linux. It also added some incremental features, such as autorotation of models, which is a big value for customers: as they set up a model, they can autorotate, so they don't have to set up multiple models. And an API for easy systems integration.
The API really addresses the OEM account base. End users will use BrainChip Studio off the shelf with the graphical user interface that comes with it. When you're working in the OEM world, it's more like an embedded application. The end customer, our customer's customer, will likely not even see us. It will be embedded in the system integration of our OEM partners.
We announced the Veritone collaboration. Their aiWare is a very nice software package. Software as a service. So it's a cloud-based application. They also do on-premise. Veritone has got a lot of good press on their aiWare launch and we are happy to be working with them. The project is moving along quite nicely and integration is going well.
We signed our first manufacturers' rep. I think I had the opportunity to talk about this on the last call so I won't go too much into that. But Bager in Southern California. It's a rep company that's been around I think well over 30 years, and I think it's the strongest manufacturers' rep organisation in Southern California. They're focused on some end user engagements, and they will also be heavily focused on the Akida launch. They tend to be a very high performance silicon manufacturers' rep, so it's really a good fit for us.
We added James Roe after the quarter ended but before this call. This is really a focus on end user engagements. There's quite a big difference between calling on an OEM customer, where you're eventually going to sell either the software package and more likely the Akida chip, versus calling on police stations, municipal law enforcement, federal law enforcement, schools, hospitals.
Those end user engagements are really the strong suit of James. He's covered the Americas in prior lives, he's a very seasoned sales executive, and he also has worked very closely with police departments. He's got what they call a POST certification, a Peace Officer Standards and Training certificate. So he can walk into any police department, flash his POST badge and he gets readily accepted.
We also announced a change in the composition of our board of directors. As most of you know, Mick Bolto retired at just about the time of the AGM, and Steve Liebeskind has also left the board. Manny Hernandez is now chair of the remuneration committee, Julie Stein is chair of the audit committee. Julie is also now our lead independent director.
We have split up nominating and governance; instead of creating a third committee, nomination goes with remuneration and governance goes with audit. We have a relatively small board, we've got four non-executive directors, so just enough to cover all of the committee assignments. Julie and I have split responsibilities, with me as interim executive chairman dealing with investors and things on that side of the house, and Julie really handling managing the board and the administration of some big, big projects. If not already, you'll see very shortly that we have recrafted six of our internal policies. As a public company we need to keep those up-to-date and contemporary, in line with ASX listing rules and ASIC guidelines, so you'll see six revised policy statements as well as three revised charters.
Those are pretty much the highlights that I'm going to cover. I'm going to jump right into the deck with BrainChip Studio and the Accelerator. It is really now bifurcated. We have an end user set of engagements where software will be sold, hardware will be sold, and this is primarily or exclusively vision systems: as you know, civil surveillance, commercial surveillance, and we maybe should have listed here as a third bullet what we might call BrainChip gaming, which is the engagement that we have with Gaming Products International. Then there is BrainChip Studio and Accelerator for OEMs, the API version, where OEMs can use our API embedded in their own stack. To begin with that would be software; it could be hardware if the deployment is large enough.
Then again this would be for vision systems or video analytics. Now you see in the bottom right hand corner one of the advertisements that we've been running; it really has been well received. We'll talk about the sales pipeline in a few minutes, but we're generating lots of leads, lots of qualified opportunities, and lots of engagements around the globe. We'll touch on that when we get to the markets. If we look at Studio and Accelerator, the automotive developments that we're working on continue with multiple multinational companies. These are both automobile manufacturers as well as third party suppliers to those automobile manufacturers. And these are large third parties, these are not small job shops. We're talking about multi-billion dollar companies both in the US and in Europe.
A lot of the engagement is for autonomous vehicle development or advanced driver assistance systems. So those are moving quite nicely. The end game in automotive will certainly be Akida, it won't be BrainChip Studio, but BrainChip Studio lets them really try things out and get to understand the technology. In the case of one European automobile manufacturer, the Akida kernel has been in their hands for a couple of months now. They're learning a lot, they're sharing with us, and we're learning a lot. It's helping us hone the Akida definition to make sure that we have all the features, hooks and benefits the automotive manufacturers will need. The surveillance and public safety engagements have increased significantly.
Luis Coello has been on board in Europe now for a quarter or two. Tom Stengel had focused more on end users, but now we've got James Roe dedicated to end users in North America, and we still have Greg Ryan working lots of deals in Australia. You'll see in a few more slides an update on where we are with civil surveillance, public safety, and law enforcement generally. Gaming Products International, that joint development is going quite well. It's on schedule and it was well received by customers. I visited Macau after going through Sydney recently, I guess it was back in May, at the G2E conference, where they demoed their advanced table management system. Extremely well received; all of the top gaming companies in the world were there, names like Sands, MGM, Wynn, the Seminoles. So well received.
We have customer trials that we expect to get under way sometime during the month of September, maybe even August, more likely August from what I learned just last night, and it'll be further showcased at a big event, the October G2E conference in Las Vegas. The features and functionality have come along nicely. There's a great deal of video analytics that goes on, working in concert with an RFID system, again to secure currency, look at consumer or customer behaviour, betting patterns, all kinds of analytics that the gaming community really puts value on. So GPI is moving along quite nicely; I actually had dinner last night with their chairman here in Paris.
Jumping to Akida: someone pointed out to me recently that nobody knows what Akida means. It's a nice easy word to say, and it is the Greek word for spike. Peter, aside from being a genius in spiking neural networks, also likes to be a little bit of a marketing guy. This was his discovery and we're using it as the branding for our spiking neural network IC development. Let me go through the Akida development as a product, because I want to be sure that people understand the development environment versus the FPGA versus the IC, and potentially even versus intellectual property sales. The Akida Development Environment, which was released on July 24th, includes the data to spike converter. Remember, data isn't natively spiking; you take the data, whether that's pixel-based data or data from financial technology, agricultural technology or IoT systems, and you convert it into spikes. We're certainly well skilled in vision systems; we created the artificial retina which we use in BrainChip Studio. The first data to spike converter deployed in the development environment will be for pixel-based vision systems, and that will evolve as we bring on collaboration from other sources of data. We've also connected it directly to a DVS camera, which is native spikes out. So the development environment, and I'll show you the block diagram, is really intended to give customers a full software simulation, and it is going to be broadly distributed. There was one question that I got about early customer engagements and qualified customer engagements for the development system. That's really an effort to get it out, get it into the hands of people that we've been working closely with, and make sure that it's debugged well before we go big and broad.
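To make the data to spike idea concrete, here is a minimal sketch of rate-coding pixel intensities into spike trains. This is an editor's illustration only: BrainChip has not published its converter, so the function name, the coding scheme and all parameters here are assumptions, not the actual Akida converter.

```python
# Hypothetical rate-coding data-to-spike converter for 8-bit pixels.
# Brighter pixels fire on more simulation ticks; a pixel at or below
# `threshold` never fires. All names/parameters are illustrative.

def pixels_to_spikes(pixels, timesteps=10, threshold=0):
    """Convert 8-bit pixel intensities into binary spike trains."""
    trains = []
    for p in pixels:
        # Number of ticks this pixel fires on, proportional to intensity.
        rate = 0 if p <= threshold else round(p / 255 * timesteps)
        # Spread the spikes evenly across the window (1 = spike, 0 = silent).
        train = [
            1 if rate and t * rate // timesteps != (t + 1) * rate // timesteps
            else 0
            for t in range(timesteps)
        ]
        trains.append(train)
    return trains

# A dark, a mid-grey and a bright pixel over a 10-tick window.
spikes = pixels_to_spikes([0, 128, 255], timesteps=10)
```

A dark pixel produces no spikes, a mid-grey pixel fires on about half the ticks, and a fully bright pixel fires on every tick, which is the basic intuition behind turning static sensor data into an event stream.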
We do plan to do an architectural announcement shortly, which really unveils what's inside Akida and how it works. Now part of this is to get the development environment out in the field and make sure that we get as much feedback as we can from customers that we've been working with, as well as a wide swath of researchers, both in academic realms and in industry. The architectural announcement we need to be careful about; we need to time it properly, because there's a great deal of invention going into Akida that needs to be patented. The last thing you want to do is put something in the public domain before you get patents filed, which would obviate the usefulness of the patents.
So Peter's working on patents. Some things we'll keep as trade secrets, because we think they're very difficult to reverse engineer and there's no reason to tell people how we do things; others we will be seeking patents on. But the architectural announcement will come out shortly, and we'll give you a real deep dive into the capability, both the design as well as performance benchmarks. What kind of data to spike converters will be available? What kind of bus interfaces will be available? What does the spiking neural network fabric look like? How can it be connected? It will be a very deep dive on the architecture. Once the architectural announcement is out, we're going to hit the road doing editorial road shows. Europe is already slated for early September, the 4th and 5th of September; we'll do the US shortly thereafter, and then we'll probably come to Australia, and I'll probably do Australia myself.
Bob Beachler will be doing London, Munich, as well as the United States roadshow, probably bringing Peter with him on some of those as well. So that roadshow I think will be well received; we've already got editors set up in Europe. Editorial meetings in Australia, as I said, will be scheduled for sometime in September, and I'll probably do those myself. The development environment is out in the public domain with a handful of customers. It will go wider and wider and wider as we get feedback, but it really is intended to allow people to simulate in software exactly what they'll see when they get the FPGA and then the eventual IC.
Of course, it will run slower than it would on the IC or the FPGA, but they will be able to see performance benchmarks, they'll be able to see what kind of neural fabric they can build, how many layers deep they want to go, the number of synapses versus the number of neurons, and all the connectivity. So we'll be able to implement in the development environment a full system which will be a mirror image of what they'll see in the FPGA, and then again once we have the IC available. The next slide really gives you a little bit of a sense of what the development environment looks like. Everything in the dotted line box is included in the development environment. You take data from the left, it goes to Python scripts for pre-processing, then it drops down into the execution engine, the Akida execution engine. We used to call that the kernel, but the kernel was one or two cores and this is many, many, many cores. So it really gives them the opportunity in this development environment to really build out a full system.
You can see the neuron model, the training methods and the data to spike converters are all in the execution engine. You've got the model zoo, I think that's an interesting name; it really is a zoo, or a library, of models. Models that have either been developed by us or, as we get the development environment broadly distributed in the field, models we acquire to populate that model zoo, so over time it will get larger and larger and larger. The supporting tools are just about everything that you need in the world of AI, particularly in neural networks: Jupyter, Python, the list goes on. So this is the development engine, this will be a software drop, and again it simulates the entire Akida environment so that you can do your development. When the FPGA comes out you just port it to the FPGA and run it, and when we get to the point where we have an IC, you just port it to the IC. Or if we have a PCIe card for enterprise applications, you would port it to the PCIe card.
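As an illustration of the kind of neuron model an execution engine like this simulates, here is a toy leaky integrate-and-fire (LIF) neuron. Akida's actual neuron model is unpublished, so everything here, the class, the parameters and the reset behaviour, is a generic stand-in from the spiking neural network literature, not BrainChip's design.

```python
# Generic leaky integrate-and-fire neuron (illustrative stand-in only).

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # membrane potential needed to fire
        self.leak = leak            # fraction of potential retained per tick
        self.potential = 0.0

    def step(self, weighted_input):
        """Advance one timestep; return True if the neuron spikes."""
        # Potential decays by the leak factor, then integrates new input.
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Drive the neuron with a constant sub-threshold input: it accumulates
# charge over several ticks before firing, then resets and starts over.
neuron = LIFNeuron(threshold=1.0, leak=0.9)
outputs = [neuron.step(0.4) for _ in range(5)]
```

The point of the example is the event-driven character of the model: the neuron only produces output (a spike) when enough input has accumulated, which is what makes a spiking fabric so power-efficient compared with a dense matrix multiply.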
The next slide is a little bit about where we're going with Akida as a maturing process. We've got the development environment; as I said, that's going to be distributed broadly. While the development environment is distributed and we're getting feedback, lots and lots of feedback we hope, in parallel we'll be developing, are developing now, the FPGA development system. So you put the FPGA on a board and wrap around it all of the things you need to do the development of your neural network. The FPGA development system won't necessarily be a product for sale; it really will be a development environment for those that want to use Akida.
It's interesting, I'm going to back up a minute to the development environment. The development environment is intended to be just that, a development environment. But in some of our earlier engagements, and in dialogue we're having with potential customers now, the development environment may be productised, in that we'll do some custom work to help people bring that system up for specific use cases. They'll license it, and we'll get a licensing fee and a royalty fee on an ongoing basis. It is one venue where we could have early time to money. The development environment will become more and more robust as we get feedback, as we engage with specific contracts, with specific use cases in specific markets. It is something that will likely be an end product for a certain segment of customers.
Intellectual property is an interesting arena as well, for the purpose of time to money and some very large opportunities. When you think about going to the edge, IoT devices, think about surveillance cameras, even cell phones. Cell phones I would consider an edge device. You're not going to sell an Akida chip, in all likelihood, into a cell phone. If you rip your cell phone apart you're going to find three or four ICs in there, very large scale integration. Most of the major cell phone manufacturers, think about Apple, Samsung, Huawei, all of them do some in-house development of their own chips, or they use outside resources to develop custom chips. They would like to put IP such as Akida into one of their custom chips. So we have early discussions going on with a major cell phone manufacturer. It's actually been quite a number of meetings, a meeting at their facility, a meeting at our facility, developing a statement of work to define what it is that they would like out of Akida as an IP block integrated into one of their very large scale system chips.
Then of course you've got the Akida neural system on a chip, which is the chip that you see to the right. That is really the end game for us, to be a supplier of that chip in volume, both at the edge and, as on the next bullet, as a PCIe accelerator. So at the edge you buy the chip, and in the enterprise, in the server room or a data centre, in the cloud, you plug in a PCIe card which will have multiple Akida chips on it, ganged, so you can get a multiple of the number of synapses or neurons that you'd get on a single Akida chip; you can put two, four, six, eight on a PCIe card. When we do the architectural announcement we'll talk about the low power nature of Akida. We'll talk about its processing power. That will all come to the fore as the architectural announcement comes out, but we've really targeted some outstanding specs which I think will put us in the leadership position in spiking neural networks, or neuromorphic ICs generally.
So this is just a little bit more of what I just mentioned. For the edge applications it's very small, whether it's a small IC or an IP block that gets integrated, and extremely low power. Again you can think about cameras, you can think about IoT sensors, you can think about cell phones. Cell phones would likely be an IP block. IoT sensors would probably be a chip and cameras would likely be an IP block, although for more high-end cameras or civil surveillance the IC is also a potential opportunity.
When you look at enterprise applications, you're talking about data centres with servers, you're talking about cloud and SaaS based systems, multiple devices on a PCIe card, and you really eke out the greatest performance at the lowest power. This will be orders of magnitude lower power than a GPU installation of a CNN; running an SNN on the Akida products is just going to be significantly lower power and really serves those end use cases with much better performance.
There was a question, I'm going to try and make sure I get to questions as I go through this, a question which Peter answered at the AGM, about whether, in an autonomous vehicle, the Akida chip would sit side by side with the GPU which is doing the computation, or could Akida replace the GPU altogether? Of course Peter thinks big and he thinks for the future, and he said, "Yes, we could potentially replace the GPU." That's far in the future. The spiking neural network will likely first find its way into transducers and sensor interfaces, and then maybe into a card level solution or multiple chips, offloading certain tasks as a kind of co-processor to the GPU. The end game in the future, certainly as spiking neural networks mature and customers get more familiar with the architecture and what they can accomplish, is to eat more and more into the GPU and CPU landscape.
I think I touched on this a little bit already, the automotive manufacturers and third party vendor stuff continues very nicely. We're getting architectural insight, we're seeing development of specific application learning rules. What we really benefit from in these close engagements whether it's in the automotive space whether it's in Fintech, Agtech, cyber, is we can develop the learning rules that are necessary to look at specific data flows, how we take that data, how we convert it to spikes and what learning rules are most applicable for the use cases. You know if you want to take data in agriculture or you want to take data in Fintech, you need to understand the nature of the data, the patterns that you're looking for, and the correlations that you want to make to other patterns. So it's really defining the learning rules and then doing the correlation via other learning rules in order to come up with actionable results. You know I'd like to think of it as, we take data, we turn it into information, we take that information and we build a knowledge base and that becomes actionable by the end customer.
And that could be in agriculture for optimisation of yield in produce or plants. In Fintech, I think you all understand, the learning rules would be about algorithms, mean reversion, candlesticks, comparing and correlating so that you can make trades more quickly and improve your number of wins versus losses in any given period, which used to be a week, then it went to a day, then to a minute, and then to a fraction of a second.
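As a toy example of the kind of pattern a Fintech learning rule might encode, here is a simple mean-reversion signal. The window size, the z-score threshold and the function itself are illustrative assumptions by the editor, not anything BrainChip has described.

```python
# Toy mean-reversion trading signal: flag the latest price as stretched
# when its z-score against a rolling window of prior prices is extreme.
from statistics import mean, stdev

def mean_reversion_signal(prices, window=5, z_entry=1.0):
    """Return 'buy', 'sell' or 'hold' for the latest price."""
    recent = prices[-(window + 1):-1]       # the window preceding the latest price
    mu, sigma = mean(recent), stdev(recent)
    if sigma == 0:
        return "hold"                       # no variation, no signal
    z = (prices[-1] - mu) / sigma
    if z > z_entry:
        return "sell"                       # price stretched above its mean
    if z < -z_entry:
        return "buy"                        # price stretched below its mean
    return "hold"
```

A spike-based implementation would express the same idea as firing patterns rather than arithmetic, but the correlation being detected, a price deviating from its recent mean, is the same.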
In Fintech we've got an initial dialogue going. I think it's moving along nicely. I think we're coming to some common ground on what we think we could accomplish in Fintech, and it's a solid team; I've talked to a couple of the engineering guys there myself. Cyber security is moving along nicely, actually. We've found a bit of intellectual property that we think we can pick up on the cheap. It's already done in a spiking neural network, covering deep packet inspection and other cyber methods, and I think we'll close on that very quickly. That will give us a jump start on the learning rules in cyber security.
At this point the Akida development system would be used in these particular cases, at least the first two where we have dialogue going on with live customers, to develop custom learning rules, or generic learning rules with custom applications, for Agtech and for Fintech. And cyber security, because it's already in a spiking domain and the learning rules exist, that will be a general purpose development environment, IP that we can distribute widely to as many cyber customers as we think is necessary, to hone our skills.
The next slide is about the technology. Up until now we've been talking about the product development and the road map, from the development environment, which was announced and is available now and will become more broadly available over the next couple of months, to the FPGA development of Akida, which is well underway. Sometime after we get the FPGA out and we get a bit of feedback from customers, we'll start into the development, the design, the RTL design of the hardened IC. I'm going to talk in a bit about the alternatives for us to manufacture that IC, and what the cost associated with each of those alternatives is as well. As you know, the Akida learning rules continue to mature. It's been in the hands of a couple of customers now. It's been connected to a DVS camera and we've implemented multilayer spiking neural networks. We've done benchmarks for CIFAR-10, MNIST and GoogLeNet. These are three industry standard benchmark tests for vision and image processing applications. We're getting great results.
The complete neuromorphic system on a chip represents, as I've said a number of times, over a decade of work by Peter and our engineering teams, and that development is well risk reduced. A lot of what's going into the chip, the data to spike converters for example, we've already got: the artificial retinas. The team has played around with audio, and we'll have good learning customers in those three categories that I spoke to previously. The neural fabric, the kernel, we've been beating the kernel up for many months now. And that kernel just gets replicated, core upon core upon core, to develop the neural fabric. The fabric will be relatively complex. It'll have different types of blocks and different types of connectivity, and again Bob will take the covers off of that when we do the architectural announcement.
We do expect to have the hardware implementation done on schedule. Our schedule is a little bit flexible, in that as we get feedback from the development environment engagements, we'll be tweaking the FPGA design to make sure that what we learn through those engagements is incorporated into the FPGA. There was a question, as I said, about porting convolutional neural networks to spiking neural networks. This is really a great way for us to open up a very wide market very quickly. Peter and the research team have spent a lot of time porting CNNs to our SNN fabric and demonstrated that it's very doable and very efficient. It wouldn't be as efficient a design as if you started with a native spiking neural network, but it will outperform CNNs with lower power and lower latency.
So porting CNNs, of which there are thousands and thousands out there, is a well worn path. Rather than having to evangelise spiking neural networks on their own, we can go into a customer and say, "Look, show me your CNN, we'll map it for you, we'll port it, and we'll show you what the benchmark results are." The ability to convert existing applications and implementations to our spiking neural network opens up a large and immediate market while native SNN development is going on in the background.
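The published CNN-to-SNN conversion literature typically maps ReLU activations to spike rates. BrainChip's actual porting flow is not public, so the sketch below only illustrates that general idea; the function names and parameters are the editor's assumptions.

```python
# Illustrative rate-based CNN-to-SNN mapping: a ReLU activation
# (normalised to [0, 1]) becomes a proportional number of evenly
# spaced spikes over a time window.

def relu(x):
    """Standard CNN rectifier: negative activations are clipped to zero."""
    return max(0.0, x)

def relu_to_spike_train(activation, max_rate=100, window=1.0):
    """Return the spike times approximating this activation as a rate."""
    n_spikes = int(round(min(activation, 1.0) * max_rate * window))
    if n_spikes == 0:
        return []                       # zero activation -> silent neuron
    interval = window / n_spikes
    return [round(i * interval, 6) for i in range(n_spikes)]

# An activation of 0.25 at a 100 Hz ceiling becomes a 25-spike train.
train = relu_to_spike_train(relu(0.25), max_rate=100)
```

Because a clipped (negative) activation maps to an empty train, the converted network naturally inherits the sparsity of the original ReLU activations, which is one reason rate-converted SNNs can run at lower power than the source CNN.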
Cash flow for the quarter: we took in receipts of $301,000 and we ended the quarter with $11.9 million. I was a little disappointed in the $301,000. I've got a number of questions on SN Tech. We did invoice SN Tech for $609,000, which we expected to collect in the quarter. Based on facts, circumstances, and communications with SN Tech, we believe that is a valid invoice, but they're disputing the invoice. We're trying to get our mind around what the dispute is about, but the contract is very fresh. If you recall, back in December we re-crafted the contract with SN Tech and took away exclusivity as we brought on Gaming Products International in the gaming space, which was a fantastic move for us. So that ink is barely dry; we know what the contract says. We know what our rights are under that contract. We believe this is a valid invoice that needs to get paid, and we'll take all actions necessary to see that that happens. Of course, consistent with continuous disclosure, I will keep you all up-to-date on what the dispute is about, once we get our mind around it, and what the resolution or remedies are.
Today, we've got 28 full-time employees and three full-time contractors in sales and marketing. The contractors in sales and marketing are card carrying, full-time, dedicated to us, but they live in marketplaces where it's better for us to have contractors than on-payroll employees. We're a relatively small team; if you look at that, that's 31 people. We are leading the pack in neuromorphic IC development. We're competing with big companies, the Intels of the world, IBM, as well as a bunch of startups. We've got a leg up on them. We think we're much, much farther down the road, and we have significant intellectual property protection. Peter's foundational patent, 011, really covers the spiking neural landscape quite well. We've got other patents that are in process; 075 will be an important one for us. Peter, of course, is developing and writing patents for Akida as we move into that IC development.
So that's the cash flow for the quarter. Moving on to a quick outlook: we do expect continued growth. We've harvested over 500 leads, we've got 55 well qualified opportunities and another 100 that we're combing through, and 16 design wins with significant first year value. We're currently supporting about 30 committed or active trials, most of which are under non-disclosure agreements. So we can't talk about names of companies, but we can talk generally, about having a cell phone engagement or a good dialogue, and we can keep you abreast of progress in those general terms, as with the automotive space. Similarly, as we get into Agtech and Fintech and Cyber, we'll be doing the same thing.
The sales pipeline is robust. It's getting bigger and bigger every day. As I mentioned on the last call, we're now in police departments up and down the East Coast of the United States, some on the West Coast, a little bit in the middle of the country. We're well outside of just being in France now. Luis Coello has brought us into Scandinavia, he's brought us into Spain, he's brought us throughout Europe; the UK is very prominent in his qualified opportunities database. So the outlook again is growth throughout the second half of the year, based on what we see in qualified opportunities turning into design wins, and design wins turning into revenue. The final slide is just to remind everybody that we are addressing a multi-billion dollar market. This is something over $6 billion of served available market.
That means opportunities that our technology, whether it's BrainChip Studio, Accelerator, or Akida in one of its potentially many forms, the development system, IP, or the hardened IC, could address. We don't have to take the lion's share of this market. This is not a winner-take-all kind of marketplace, but we can get our fair share of it. We could build a big business, and it's a very high margin profit model. It will shift over time: licensing is virtually 100 per cent gross margin, because once you're done with the development you've got very little cost of sales, so you've got a lot of follow-through. When we get to the IC, it'll be more of an IC model. Gross margins for high performance integrated circuits can be between 60 and 65 per cent. Control your operating expenses, and you can turn out a model with 30 per cent operating income. That's the kind of IC model that we will pursue.
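The profit model above can be checked with a quick worked example using the figures from the call (roughly 60 to 65 per cent gross margin and 30 per cent operating income); the revenue and opex figures themselves are illustrative assumptions by the editor, not company guidance.

```python
# Worked example of the IC profit model: 62% gross margin and 32% opex
# leave 30% of revenue as operating income. Inputs are illustrative.

def ic_profit_model(revenue, gross_margin=0.62, opex_ratio=0.32):
    """Return (gross_profit, operating_income) for an IC business."""
    gross_profit = revenue * gross_margin
    operating_income = gross_profit - revenue * opex_ratio
    return gross_profit, operating_income

# On $100 of revenue: $62 gross profit, $30 operating income (30%).
gross, operating = ic_profit_model(100.0)
```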
I think that's all of the slides. Just let me make sure I've touched on the questions. There was a question about our team being at a conference, sitting side by side with IBM, and whether we were collaborating or competing. We are competing. We're not giving away trade secrets, and Nicholas is very careful about that. IBM certainly has their eye on us; they know about Akida, and they'll see the content of the architectural announcement in due course. I talked a bit about the SN Tech situation. GPI, the ATS system, has gone well. We really expect to see trials in August; the first is in the middle of August. Then the big event will be the G2E Conference in October. On the Department of Defense project, which I'm familiar with: when you deal with the Department of Defense you rarely get any insight into their classified programs, but we have an engagement and actually have a card in the hands of the party running that classified Department of Defense project. That dialogue's gone quite well; they have been running with it for a while.
The key to pricing depends on the form. If the development environment is delivered in a customised form for a use case, whether AdTech, FinTech, or cyber, that will be a big licence, an ongoing royalty, and a maintenance fee. If we're looking at the IC, you buy the silicon and develop what you're going to develop on it, so that will be a typical integrated circuit model. Intellectual property will be interesting as well; that would be a big licensing fee and an ongoing royalty, so the IP pricing model will be similar to the Akida development environment's. How we build Akida and what the costs associated with it are was another question. When you get to the point where the FPGA is done and you're really comfortable that the design is solid, you move into what we call RTL design, where you're really doing the place and route of the RTL that was implemented in the FPGA. We're not going to build a place and route team; on an integrated circuit like this it'll take 25 guys, maybe 35, I've seen as many as that put on a place and route project like this.
We'll outsource that; there are many, many vendors in Silicon Valley that do place and route. They've got all the tools, so you don't have to spend millions of dollars on Synopsys, Mentor, or Cadence design tools. They use their tools, they charge you a fee, and you get your design out. Once you have your design, you've got a couple of models to go with. You can use that same vendor and get a turnkey service: they'll cut and develop the masks, they'll work with the foundry to build the IC, work with the assembly house to assemble the product, and work with the test house to test the product, and then they give you the IC on a turnkey basis. It's going to cost you more; you'll be paying a premium for them doing all of that work. But you will have saved the capital expense of the mask set, the development of the test platform, the load boards, and all of the test development that has to go on. You save all of that upfront cost, but the IC will cost you more as a finished good.
The other alternative is what we call a COT program, or customer-owned tooling, where you get through your place and route, which you've outsourced, you get the GDSII file, and you take that file and go build your own mask set. That's a costly process; for a mask set at 28 nanometres, I don't know the specific number, but I would guess it's still $2 to $2.5 million. So you've got that upfront fee, then you have to work with the package house and the test house and develop your own test platforms, so there's a lot more upfront cost in a customer-owned tooling model than in a turnkey model. It basically comes down to simple math: based on your forecast for the first year, second year, third year, how many units are you going to build, and how long will it take you to recapture that upfront cost if you go in one direction or the other? Those are decisions that we'll make as we get closer to launching the place and route. Certainly we'll keep you folks well informed on that.
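[Editor's note: the "simple math" behind the turnkey-versus-COT decision described above can be sketched as a break-even calculation. Every number below is a hypothetical assumption chosen only to illustrate the trade-off: higher upfront cost but cheaper units under COT, versus the reverse under turnkey.]

```python
# Break-even sketch for turnkey vs customer-owned tooling (COT).
# All figures are illustrative assumptions, not quoted costs.
turnkey_upfront = 250_000     # assumed NRE for a turnkey engagement
turnkey_unit_cost = 12.00     # assumed per-IC cost, vendor premium included

cot_upfront = 3_000_000       # assumed mask set plus test-platform development
cot_unit_cost = 8.00          # assumed per-IC cost once you own the tooling

# COT wins once its lower unit cost has paid back the extra upfront spend.
break_even_units = (cot_upfront - turnkey_upfront) / (turnkey_unit_cost - cot_unit_cost)
print(f"Break-even volume: {break_even_units:,.0f} units")

forecast_units = 1_000_000    # assumed multi-year unit forecast
cheaper = "COT" if forecast_units > break_even_units else "turnkey"
print(f"At {forecast_units:,} units, {cheaper} is the cheaper route")
```

Under these assumptions the extra $2.75 million of COT upfront cost is recovered at a $4-per-unit saving after 687,500 units, so a forecast above that volume favours COT and one below it favours turnkey, which is exactly the forecast-driven decision described.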
I talked about the models, and talked about autonomous vehicles and SNN converters. There was a question about near-term revenue and which sector will contribute the largest share. Near term it will certainly be BrainChip Studio and Accelerator, in the surveillance business, though I guess it will be a little bit of a horse race at the front end, because gaming is coming along so nicely and it's a very fast time to market for us, given our engagement with, I'll call them the 800-pound gorilla, Gaming Partners International. Over time surveillance is a much bigger market, but I think near-term revenue will certainly be BrainChip Studio and the Accelerator. Although, as the development environment gets out there and we're talking to specific customers in AdTech and FinTech, those could be meaningful pieces of revenue that are Akida related. They're not the IC, but it is the Akida design being implemented in software with dedicated learning rules and dedicated application layers.
On Akida early access, I talked about early access really just being a short-term thing: get some feedback and then it'll be generally available. On mobile phones and facial recognition, we continue to improve facial recognition on an ongoing basis. We really do facial classification; we don't do biometric facial recognition, which tends to be less sensitive. The question referenced a report about facial recognition really falling down when you get into specific races, genders, or age categories. Because we do classification, we don't have that biometric problem, but again, we're doing facial classification. We'll dish up the top three matches, weight them, and give them scores, and say, "Here's number one," and our number-one top accuracy is very, very good across all of those dynamics I just mentioned. There was a question about storage, because we touched on storage with the Quantum announcement. I think I mentioned on our last call that literally the day after the Quantum announcement went out, we had an inbound from one of the top five storage manufacturers in the world. The top five are HP, IBM, Dell EMC, Hitachi; I don't know if I said five, but that's the top category.
So we're having great meetings with one of those top five. They've visited our place a number of times, we've visited theirs, and we're looking at specific use cases across a wide set of opportunities there. There was a question about a Nasdaq listing. We've spent no time thinking about this since the first couple of quarters after I joined the company. There was a question about what the listing requirements would be. If recollection serves me well, it's a minimum $4 stock price, so you'd see a very large reverse split, a $50 million market cap, and $10 million in cash, but that's not on our plate for now. We've got so much going on in building a company, and we've got a loyal, dedicated, enthusiastic investor base in Australia, which also touches on the last question: whether we're contemplating what I guess we call in Australia a consolidation, and what we term a reverse split here in the US. I'll just tell you, the board has had no discussion about that. It's not on our minds right now. The board, the management team, and the rank and file are all focused on generating revenue from BrainChip Studio and Accelerator, and getting Akida to market as quickly as possible.
So that's it from here. I've run a little bit blind; it would be nice if we had an opportunity to have an interactive call. But I think I covered most of the questions, if not all, and a good deal of material. So with that, I'll sign off and we'll talk to you again in about 90 days. Thank you very much.