Bristol Robotics Laboratory are developing a facial recognition system that could change the world as we know it.
Photo and video: Bristol Robotics Laboratory
Earlier this month, news broke in the Daily Mail that a facial recognition system developed at Bristol Robotics Laboratory (BRL) could be in use at train station ticket barriers within three years. According to the report, the system could replace physical tickets, initially on ‘fast-track’ lanes for rail users who sign up by providing a 3D scan of their face. We visited the Centre for Machine Vision at BRL to discuss the system with the researchers allegedly developing it.
The researchers working on the facial recognition technology could neither confirm nor deny the Daily Mail’s reports, having earlier the same day given a commitment not to discuss with the media any use of their system on the London Underground, National Rail, or trains in general. We did, however, have an enlightening conversation about the various other applications of their work, the ethical implications, and the science behind it.
The technology is a big step forward from traditional face recognition, which generally relies on two-dimensional images. The new 3D imaging technique allows far higher recognition accuracy than has been seen before, which could enable a wide range of new applications, from payment in shops to the detection of suspicious behaviour in airports.
Interview with Lyndon Smith and Wenhao Zhang, at the Centre for Machine Vision at Bristol Robotics Lab
Let’s start with a brief introduction to your work
Lyndon Smith: This is the Centre for Machine Vision at Bristol Robotics Lab; we’ve been going for about 20 years. Our niche is 3D machine vision. Most people in machine vision are looking at video, but we take a more systems-based approach, with our own systems that recover 3D data from complex surfaces and unusual objects. We’ve applied the techniques we’ve developed across a wide range of disciplines, from medicine and skin analysis through to polished stone surfaces and face recognition. There is a vast range of applications for 3D machine vision.
We were in contact recently with someone from Heathrow Airport. When they take photos of people, most of the time it’s two-dimensional. They take normal photographs, so if you hold up a picture of someone else it could fool the system. In our case, because we are recovering the shape of the face and the orientation at each point, the system knows it’s a photograph and just ignores it.
So the big difference between your system and that employed in airports is that the original capture is done in 3D?
LS: That’s right. We have a camera and a set of lights in known positions, and we recover the 3D data directly. This has a lot of advantages: you get a richer data set, and you can use that for all sorts of things. For example, if someone at an ATM has stolen your PIN, you could have an invisible system that detects whether it’s the real owner of the card.
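A camera with a set of lights in known positions, recovering shape directly, is the classic photometric-stereo arrangement. As an illustrative sketch only — not the lab’s actual code, and assuming a simple Lambertian (matte) surface — a per-pixel surface normal can be recovered by least squares from the pixel’s brightness under each light:

```python
import numpy as np

# Three known light directions (unit-ish vectors), one per captured image.
L = np.array([
    [0.0, 0.0, 1.0],
    [0.7, 0.0, 0.7],
    [0.0, 0.7, 0.7],
])

def estimate_normal(intensities, lights=L):
    """Least-squares surface normal for one pixel from its brightness
    under each light, assuming Lambertian reflectance (I = albedo * L.n)."""
    g, *_ = np.linalg.lstsq(lights, np.asarray(intensities, float), rcond=None)
    albedo = np.linalg.norm(g)          # length of g encodes surface albedo
    return g / albedo if albedo > 0 else g

# A flat patch facing the camera is brightest under the overhead light.
n = estimate_normal([1.0, 0.7, 0.7])
```

With more than three lights the same least-squares fit simply becomes overdetermined, which is one way such systems gain robustness to noise.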
Wenhao Zhang: We’ve been developing face recognition at the Centre for Machine Vision for several years. Until now the system has been a little constrained by the environment: the camera must be triggered at a specific location. We’ve been expanding that to make it more robust in real-world conditions, so the technology can be applied more widely. As the robustness and accuracy improve, we can expect this technology to be more influential in future.
Where do you see it going?
LS: All over the place. The face is a powerful biometric. Nowadays people are wandering around with cards and PINs and god knows what. This is not Big Brother; we are just trying to help people have an easier time getting on with their lives. If they don’t want to do this, they don’t have to. Can you imagine a situation where you just present your face and walk into a shop, and you don’t have to mess around with cards and stuff? It detects who you are. With Wenhao’s system you’re getting 3D data that gives you extra reliability, especially at higher resolution; it’s like a fingerprint in 3D, and it’s your face. Really, you could just buy stuff by presenting your face at the till.
How far are we from being at a point where this would be faster than an Oyster card, for example?
LS: Well, I think we are pretty much getting there.
WZ: The system is now so quick it can recognise multiple users passing at the same time, and each recognition takes only a few milliseconds. Within a second it can recognise so many people that, rather than queuing at a gate or a till, trying to pay or just trying to get through, they could just walk straight through.
LS: How much time do you waste hanging around tills?
How many milliseconds are we at now – and how effectively does the system scale?
WZ: We’re currently down to about 10 milliseconds per face. There are some problems we need to resolve; the first is the size of the database: the more people we get in the database, the slower it gets.
LS: There are ways we can overcome this depending on how the system is being applied. For example, if you know certain people are going on a flight then you can reduce the dataset to make it quicker and more reliable.
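The idea of shrinking the dataset for a known context — say, a passenger list for one flight — can be sketched as restricting a nearest-neighbour search to a candidate subset. This is a hypothetical illustration (the names, embeddings, and matching method are assumptions, not the lab’s system):

```python
import numpy as np

def identify(probe, gallery, manifest=None):
    """Nearest-neighbour match of a probe face embedding against a
    gallery {name: embedding}. A manifest (e.g. one flight's passenger
    list) shrinks the search, making it faster and less error-prone."""
    candidates = {name: emb for name, emb in gallery.items()
                  if manifest is None or name in manifest}
    return min(candidates,
               key=lambda name: np.linalg.norm(probe - candidates[name]))

# Toy 2D "embeddings" standing in for real face descriptors.
gallery = {
    "alice": np.array([0.9, 0.1]),
    "bob":   np.array([0.1, 0.9]),
    "carol": np.array([0.7, 0.3]),
}
# Restricting the search to this flight's passengers changes the pool.
who = identify(np.array([0.85, 0.15]), gallery, manifest={"bob", "carol"})
```

In a real deployment the gallery would hold high-dimensional 3D face descriptors and an indexed search structure, but the principle — fewer candidates means faster, more reliable matching — is the same.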
So in your opinion, we are pretty much there right now?
LS: Yes, well, the accuracy of the technology isn’t 100% yet, but we’re in the high nineties. We need to increase the resolution, but I’m convinced that when you get higher-resolution data of the face we’ll be getting to 100%. As accuracy goes up, the number of potential applications goes up dramatically. This is why the technology hasn’t really got off the ground yet: it hasn’t had enough [accuracy] to reach the really interesting applications, which require more reliability.
How far are we from that level of accuracy?
WZ: That depends. If we want to apply it in front of ATMs, where there is an enclosure, the lighting is controlled, and people are at a certain distance, then we can get very high-resolution data. For this, we’ve got a system called Photoface, which has about 98% accuracy.
Does the accuracy stay that high when you add more people to the system?
WZ: We’ve tested it with up to a few hundred people. We don’t yet know how it will behave with greater numbers, but at this stage it seems quite robust.
Have these types of systems ever been tested before with hundreds of thousands of people?
WZ: This technology is new, and one of the difficulties is getting ethical approval [to use people’s faces]. To do any sort of evaluation we need to collect the data ourselves, and that’s not easy.
LS: It needs a big test, you’re right. But I’ve got a lot of faith in the potential of this technology, because existing systems are foxed by some very simple things, like changes in background light. You’d think it would be simple, but it causes big problems for 2D systems. With our system, because you’re recovering the shape of the face, it is going to be robust. Another thing is the orientation of the face. People don’t look straight at the camera, especially if it’s a security camera at a bit of an angle, and this causes difficulties for measures that change depending on the angle. Once you’ve got a 3D face you can re-orientate it or re-light it with virtual lights. You can also see through make-up: when people put make-up on their face it’s like an optical illusion that changes the apparent shape of the face. When you can separate the 2D from the 3D, you can see the actual shape of the face, and it’s not fooled by this kind of thing.
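Re-orientating a captured 3D face is, at its core, applying the inverse of the estimated head rotation to the point cloud. A minimal sketch under that assumption (real pipelines would also estimate the rotation itself, e.g. from facial landmarks):

```python
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def normalise_pose(points, yaw):
    """Undo an estimated head yaw. Rotating row-vector points by R is
    points @ R.T, so multiplying by R on the right reverses the turn."""
    return points @ rotation_y(yaw)

nose = np.array([[0.0, 0.0, 1.0]])               # nose tip, facing the camera
turned = nose @ rotation_y(np.radians(30)).T     # head turned 30 degrees
frontal = normalise_pose(turned, np.radians(30)) # back to a frontal pose
```

Because rotation matrices are orthogonal, the inverse is just the transpose — which is why pose normalisation of 3D data is cheap once the angle is known, whereas a 2D image cannot be un-rotated this way at all.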
What do you expect will be the first real world application?
LS: I think the first application you’ll see on the commercial side will be accessing a secure door: one person approaching, standing at a fixed distance, and the door opening. Once that’s working reliably, you’ll start to see it being used more with crowds. As the reliability and robustness increase you’ll see it mushroom, and you’ll see it all over the place; airports will be a big one. Another thing you can do with it is monitor covertly. In airports you might want security to look for suspicious behaviour. These systems monitor over a period of time, so rather than just getting a snapshot, you’re getting continuous video, and you can detect things like micro-expressions. If you’re talking to someone and your face changes in a fraction of a second, that could indicate that someone is not telling the truth or something like that. It’s another field where there might be some potential.
Does the system use deep learning?
NB: Deep learning is a rapidly emerging form of artificial intelligence, recently showcased in computer vision by Google’s DeepDream experiment.
LS: Yes, for various applications we’ve used deep learning. We just published a paper applying deep learning to animal faces; it worked quite well.
WZ: We’re not currently using it for human face recognition, though for animal faces we’re getting very good results. The reason we’re not yet using it for human faces is that we don’t have enough data to train the computer.
Would that make the system more accurate?
LS: Yes, there’s little doubt. Data acquisition is the challenge; once you’ve got good data you can do all sorts of things.
WZ: Google and other technology giants like Facebook can get unlimited amounts of facial data, but only for 2D analysis. For our 3D system we have to collect people’s data ourselves.
Do you have any fears about how this technology could be misused?
LS: My chief fear is misunderstanding of the way it could be misused. I have quite a lot of faith in the people we work with; I don’t believe that there’s any Big Brother thing going on, but the danger is that people get that perception. In reality, in London or any other big city in the UK, if you walk down the street you are being photographed thousands of times a day, so this is all going on all the time anyway. All we’re trying to do is… we’re not trying to get into this Big Brother thing, we’re just trying to develop a system that will help people in their everyday lives.
WZ: We already supply our information to lots of merchants and organisations – say our date of birth, address, all sorts of data about us. If this information is better protected we may ultimately be safer.
LS: This will be a tool to help people if they want to use it. We’re not going to force people if they don’t want to. That’s why I see it chiefly as a system that could be used by people who opt into it. In the airport you’d opt in and you wouldn’t have to mess around with tickets; you’d just walk through. If people want to carry on with the old ways, then fair enough as far as I’m concerned. It could enable a vast reduction in the labour associated with these things.
Do you think people will accept the technology?
LS: Anything new, people are understandably a bit wary about, but I think as soon as people realise that this is just something to help them with their lives, and when they realise the benefits of not having to mess around with the old ways, they will probably accept it. Who wants to do something laborious when you can do something easy that will do the same thing?
WZ: There will be a period while people adapt, but eventually you will get there with applications that make life easier.