Research

I am employed as an Associate Professor in Knowledge Exchange and External Engagement at the Centre for Machine Vision within the Bristol Robotics Laboratory. I have worked on a large number of projects across a wide range of applications. Our work is typically highly applied, generating commercial prototypes as project deliverables. My main research interests are in computer vision and machine learning, along with designing and building 3D (photometric stereo) acquisition hardware.

Scene recognition

An InnovateUK funded project with Q-Bot Ltd. to improve the automation of their novel underfloor insulation robot. It has the following aims:

  • Map out underfloor architecture.
  • Isolate services such as pipework or cables.
  • Optimise robot path through the void.
  • Spray insulation to reduce heat loss.


Plant Phenotyping using Photometric Stereo

A BBSRC funded project in partnership with Edinburgh University.

  • Allows non-intrusive objective phenotype measurement
  • Variables such as leaf area, leaf angle and petiole angle can be measured semi-automatically over the plant life cycle
  • Images show the 8-light source rig and software developed as part of this project, and an example of the surface normal information captured from a growing Arabidopsis plant
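As a rough illustration of how surface normals support such measurements, local leaf inclination can be read off a normal map as the angle between each normal and the vertical. This is a minimal sketch assuming an (h, w, 3) NumPy array of unit normals; the function name and interface are mine, not the project's, and the real pipeline would segment leaves before fitting angles per organ:

```python
import numpy as np

def leaf_inclination_deg(normals):
    """Angle (degrees) between each surface normal and the vertical,
    a per-pixel proxy for leaf inclination.

    normals: (h, w, 3) array of unit surface normals.
    Illustrative only -- not the project's actual measurement code.
    """
    # The z-component of a unit normal is the cosine of its angle
    # to the vertical; abs() folds upward/downward-facing together.
    nz = np.clip(np.abs(normals[..., 2]), 0.0, 1.0)
    return np.degrees(np.arccos(nz))
```

A horizontal leaf patch (normals pointing straight up) gives 0 degrees; a vertical patch gives 90 degrees.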

Weed detection in pasture

An InnovateUK funded project with Soilessentials Ltd. and Aralia Systems Ltd. to localise broadleaf weeds (dock) in pasture in order to reduce herbicide spraying.

  • Real time weed detection in pasture is far more difficult than in-crop weed detection
  • Operates in real time with over 90% detection rate with low false positives
  • Vision system integrated with solenoid control circuitry
  • Future work will identify the presence of clover and other weed species

Pig Face Recognition

A NERC/SARIC funded project in partnership with AB-Agri, SRUC and Manchester University to biometrically recognise individuals to aid precision livestock farming.

  • RFID tags are time-consuming and stress-inducing to use, and have limited range
  • A Convolutional Neural Network is trained on unconstrained images of the pigs at a drinker in on-farm conditions
  • 97% accuracy
  • The snout region, forehead and eyes appear to be the most discriminating areas

Non-contact 3D Handprint Recognition

A UWE Bristol funded project that combined high-resolution visible-wavelength imaging with more penetrating near-infrared imaging to capture vascular structures for additional security.

  • Real time operation
  • Previously unseen levels of surface morphology for fingerprints/hand prints
  • Extremely difficult to spoof due to multi-modality of capture (surface details + vein structure)
  • As seen on BBC Click!


How's My Cow?

An InnovateUK funded project with Kingshay Farming and Conservation Ltd. currently undergoing commercial trials.

  • Non-intrusive, unobtrusive overhead 3D image capture of cows
  • Measures body condition score, mobility and weight automatically
  • Highly repeatable and objective, twice daily measurements allow interventions to be made earlier

Photoskin

An EPSRC funded project whose aim is to capture skin reflectance data as part of the Photometric Stereo (PS) process for improved 3D face reconstruction. In addition to being important fundamental research, this will bring benefits to face recognition applications and the CGI/gaming industries.

It continues from the EPSRC funded Photoface device, which was developed with colleagues at Imperial College London and is shown below:

The Photoface 3D capture device

Photoface generates a 3D model via PS, which allows us to estimate the surface orientation at each pixel. While similar devices use lasers or multiple cameras, PS works by capturing multiple images of the object illuminated from different angles. In the image, four light sources (flashguns) can be seen placed on the rear wall (around the monitor), and the camera is positioned just above the monitor. When a person walks through the device, they trigger the ultrasound sensor (on the left); the flashguns fire in sequence, with a synchronised image captured for each flash, all within around 15 milliseconds, which effectively freezes the motion. The image below shows examples of the four differently illuminated images and the subsequent 3D rendering.
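The per-pixel orientation estimate described above can be sketched with the classical Lambertian least-squares formulation, where each pixel's intensities satisfy I = L·g with g = albedo × normal. This is a minimal illustration assuming calibrated unit light directions and a stack of grayscale captures, not the Photoface production code:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel albedo and unit surface normals from k >= 3
    images of a Lambertian surface under known distant lights.

    images:     (k, h, w) grayscale intensity stack
    light_dirs: (k, 3) unit light-direction vectors
    Simplified sketch: ignores shadows and specularities.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                           # (k, h*w)
    # Lambertian model per pixel: I = L @ g, g = albedo * normal.
    # Solve all pixels at once by linear least squares.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                  # (h*w,)
    normals = np.divide(G, albedo,
                        out=np.zeros_like(G),
                        where=albedo > 1e-8)            # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With four flashguns, as in Photoface, the system is overdetermined, which makes the estimate more robust to noise than the minimal three-light case.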

Examples of the differently illuminated captures and the subsequent 3D rendering.

About me

My PhD, entitled "3D Face Recognition using Photometric Stereo", examined how the inherent products of PS, the surface normals, can be used for effective face recognition. Taking inspiration from how humans process faces (in particular our use of low spatial frequencies and caricature representations), the dimensionality of the data was significantly reduced, through resolution reduction and variance-based methods, without proportionately affecting recognition rates (around 96% at a False Acceptance Rate of 0.001).
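The resolution-reduction idea can be illustrated by block-averaging a normal map and renormalising, giving a low-spatial-frequency representation with far fewer dimensions. This is an assumed toy stand-in; the thesis' actual low-frequency representations and variance-based feature selection are more involved:

```python
import numpy as np

def downsample_normals(normals, factor):
    """Block-average an (h, w, 3) unit-normal map by `factor` and
    renormalise each averaged vector back to unit length.

    Toy low-spatial-frequency representation; illustrative only.
    """
    h, w, _ = normals.shape
    h2, w2 = h // factor, w // factor
    # Crop to a multiple of `factor`, then split into blocks.
    blocks = normals[:h2 * factor, :w2 * factor]
    blocks = blocks.reshape(h2, factor, w2, factor, 3)
    avg = blocks.mean(axis=(1, 3))                      # (h2, w2, 3)
    length = np.linalg.norm(avg, axis=-1, keepdims=True)
    return np.divide(avg, length,
                     out=np.zeros_like(avg),
                     where=length > 1e-8)
```

Downsampling by a factor of 4 cuts the dimensionality by a factor of 16 while preserving the coarse facial shape that low spatial frequencies carry.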

This was extended to expression analysis using surface normals and recognition performance was boosted slightly by removing those pixels which were found to encode expression.

As part of this, the Photoface database was created which contains over 3000 sessions of about 450 people captured in a natural environment under far less constrained conditions than similar databases. It is fully labelled and contains a wealth of metadata for each capture (e.g. gender, facial hair, pose, expression, bespectacled). It is available to download for research purposes - please feel free to contact me for instructions on how to do this.

Previously I worked as a software developer in both the public and private sectors for about ten years, working on a number of diverse projects and programming for mainframes, desktops and web applications. I was mainly a JEE developer. Prior to that I studied for a BSc(Hons) in Psychology and an MSc in Computer Science at The University of Bristol.