BrainChip Holdings (ASX:BRN) talks pattern recognition and applications

Interviews

by Jessica Amir

BrainChip Holdings (ASX:BRN) CEO Louis DiNardo talks about the company's BrainChip Studio SDK and wide range of autonomous-learning applications.


Jessica Amir: Hello, I’m Jessica Amir from the Finance News Network. With me now is CEO and Managing Director of BrainChip Holdings (ASX:BRN) Lou DiNardo. Lou, welcome back.

Lou DiNardo: Hi

Jessica Amir: BrainChip is developing semiconductors that are used in a range of applications, such as facial recognition. Can you tell us more about it?

Lou DiNardo: It’s a combination of software and algorithms that run on a hardware-accelerated piece of silicon. As we develop algorithms that can do edge detection and object detection as well as facial recognition, sometimes it’s easier to find a pattern. Take a snapshot of someone’s shirt or tie or cufflink, and if that’s a suspect you want to find, we’ll have a pattern that we’ll follow through on a video. Facial recognition is nothing more than a pattern for us, but we take that software and we accelerate it by running it in silicon.

Jessica Amir: Now to your BrainChip Studio platform. What does it provide developers and end users?

Lou DiNardo: BrainChip Studio is a software package that can run on any standard server, Linux or Windows. And it runs those algorithms and the software solutions to do facial recognition or pattern recognition. The advantage to the user is that it’s a bit like an off-the-shelf shrink-wrap package. It’s like buying Microsoft Office. It’s got a good, slick user interface and it’s easy to use. Click on learn and you can learn the pattern in the face. Click on find and we’ll find it in any stream of video, whether it’s live or stored footage. And then you get a report out: ‘this is the suspect, we found him 75 times in the last hour, at these locations’. So it’s very easy for law enforcement and anti-terrorism authorities to use.

Jessica Amir: Speaking about that, you’ve had some great responses?

Lou DiNardo: Yes, we’ve had some great responses. We’re working with the Department of Homeland Security in France, the French Police Force, an airport in Bordeaux, and we’re engaged in trials with many others. Interestingly, another market, once we realised you can find objects or detect patterns, is the casino business. In their surveillance systems, we can recognise a card or a chip. We count cards, determine who won and who lost, or whether a deal is paid off properly or not. Whether someone was cheating, how long a player has sat at the table and whether they should get complimentary services like a free dinner or a free room. So we run from anti-terrorism to counting cards in casinos.

Jessica Amir: Looking at your autonomous solution in more detail now. What’s the demand like and what does it actually provide?

Lou DiNardo: It’s important to recognise that autonomous learning, in an artificial intelligence arena, is much different from what people call deep learning. With deep learning, you can think about Google, Microsoft, Intel and others taking very large databases and very, very large sample sets in order to match an object with an identification or classifier. We (BrainChip) learn autonomously: there is no programming, there’s no mathematical computation. We put digital or synthetic synapses on a piece of silicon and it works like your brain. It starts to learn autonomously.

In some cases we have a model that’s been created: we have a face that we know, or we’re trying to find a pattern that we know. That is supervised autonomous learning, comparing to a model and coming up with a result.

Unsupervised autonomous learning is when we don’t know what we’re looking for. So you can imagine a drone flying over a battlefield that sees sand and sand and sand, but will then identify a pattern. We don’t know what it is, we just see an object, and then we find another object that’s different. We find a third object that’s different, we feed that back to command and control, and they then label it and say that’s a tank, that’s a soldier, that’s a missile. And now we have, autonomously and in an unsupervised way, learned and labelled the data, and then we can carry on with the mission.

Jessica Amir: Can you tell us about the work that you’re doing specifically at some airports in France, casinos and also border security?

Lou DiNardo: Bordeaux airport is very concerned about perimeter intrusion. So there are cameras going down the fence line on either side, on the four corners of the airport. And they have cameras on either side of planes that are parked at jetways. Their concern is that someone hops the fence, and again this could be a crime or terrorism, and approaches an aeroplane at the gate from the bottom or from underneath. We’re looking through those cameras and we can identify a rabbit versus a person, versus a dog.

They had a system installed that was failing quite often; they were getting a thousand false alerts a month. So they had to send a patrol out to determine that it was a rabbit and not a person. They installed our system, we reduced the false alerts by 96 per cent, and they’re very happy with that solution. Now we’ll take that to other airports and other similar applications.

Jessica Amir: Great. Changing pace now to your financial results. What were some of the highlights from FY17?

Lou DiNardo: FY17 was a productive year for us. We did launch the software package in early ‘18 on the back of all the effort that went on in ‘17. And there was a lot of development that went on in the hardware. We built out our team: we hired a vice-president of marketing and business development, and we’ve hired engineering staff generally, both in North America and in Europe. And we’ve raised two rounds of funding, one in May of ‘17 as well as one in October of ‘17.

Jessica Amir: What’s the focus long-term Lou?

Lou DiNardo: The focus long-term is to take on more than the vision applications that we’ve talked about. In vision we look at repeating patterns, and that’s how we identify objects, whether it be a face or a pattern. Our ability to recognise patterns goes well beyond vision. We can recognise patterns in data streams.

So if you think about cyber security, when someone launches a denial-of-service attack, they’re sending repeated requests for service, which then shut down the infrastructure. We can identify those patterns very quickly and help operators avoid those denial-of-service attacks.

Similarly, in financial technology or fintech, we can look at high-frequency trading and determine what patterns are being used and whether people are trying to game the system, so to speak. So there’s financial technology and the cyber security arena. And I think one of the most exciting things about the development of the standalone artificial intelligence processor, which we’re now developing, is in the autonomous vehicle, in what we call ADAS, advanced driver assistance systems.

Our ability to recognise patterns and streamline data, so it can be transmitted from cameras back to the central processor in an automobile, makes it a perfect fit for our processor.

Jessica Amir: Last question now Lou. When can we see some more developments on that?

Lou DiNardo: That’ll be in late ‘18, probably. We’re developing the core processor now. We’re engaged with one large European automobile manufacturer, who’s using BrainChip Studio and is actually taking delivery of the first BrainChip Accelerator card. And that’s a preamble to using our AI processor.

Jessica Amir: Lou DiNardo, thank you so much for the update.

Lou DiNardo: Thank you very much.


Ends