TFSAS2M: A novel transfer learning modelled approach for face-skull overlay via augmented salient-point-based scan mapping

Sharma Tripti, Dubey Sipi

Abstract

Overlaying skull CT (computed tomography) scans onto facial data requires effective salient-point estimation and mapping. To perform this task, salient points for both the facial data and the skull CT data are estimated, and their locations are mapped using correlative matching. This requires efficient face recognition algorithms that can estimate facial location from both skull CT scans and facial images. The output of these algorithms is passed to a feature extraction and selection unit, which estimates different facial salient points via geometric analysis. Correlation algorithms that match these points perform the mapping between any two salient-point pairs without considering their original inter-dependencies. As a result, the CT scan of one person can easily be mapped onto the facial image of another person, which limits the system's trustworthiness for real-time deployments. Moreover, most currently proposed salient-point mapping algorithms work only with frontal facial and CT data, which further limits their deployment capabilities. Thus, in this text, a novel transfer learning model is proposed that performs face-skull overlay via augmented salient-point-based scan mapping. The proposed model first uses a deep convolutional neural network (DCNN) based on the VGGNet-19 architecture, trained separately on facial and skull data. This network is evaluated on query images to validate the CT-to-face mapping, which is followed by augmented fusion. The augmented fusion model is responsible for selecting the best-matching CT scans from the database for any non-matching facial image data. As a result, the model achieves high accuracy, with better precision and recall than state-of-the-art overlay models.
The proposed TFSAS2M model was tested on various interlinked facial-CT datasets, and its performance was evaluated in terms of facial-to-CT mapping accuracy, overlay accuracy, overlay precision, and facial-to-CT mapping recall. Owing to the use of transfer learning and augmented salient-point mapping, the proposed model achieved 99.2% facial-to-CT mapping accuracy, 97.4% overlay accuracy, 94.8% overlay precision, and 96.5% facial-to-CT mapping recall, which makes it suitable for real-time clinical usage. This text also recommends future research directions that can be pursued to improve the efficiency of the proposed model.
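The CT-to-face matching step described above can be sketched in miniature: given embeddings produced by the two separately trained VGGNet-19 branches, the best-matching CT scan in the database is the one whose embedding is most similar to the query face embedding, subject to a rejection threshold. The following is a minimal illustrative sketch, not the authors' implementation; the function names (`match_ct_to_face`, `cosine_similarity`), the 128-dimensional embeddings, and the 0.8 threshold are all assumptions for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_ct_to_face(face_embedding, ct_embeddings, threshold=0.8):
    """Return (index, score) of the best-matching CT embedding for a
    query face embedding, or (None, score) if no CT scan clears the
    threshold. Embeddings are assumed to come from the two separately
    trained VGGNet-19 branches; the threshold value is hypothetical."""
    scores = [cosine_similarity(face_embedding, ct) for ct in ct_embeddings]
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return best, scores[best]
    return None, scores[best]

# Toy example: a database of three CT embeddings and one query face
# embedding constructed as a noisy copy of CT scan #1.
rng = np.random.default_rng(0)
ct_db = [rng.standard_normal(128) for _ in range(3)]
query = ct_db[1] + 0.05 * rng.standard_normal(128)
idx, score = match_ct_to_face(query, ct_db)
```

A thresholded similarity of this kind is one simple way to realise the "non-matching facial image" case in the abstract: when no database entry clears the threshold, the augmented fusion stage would fall back to selecting the closest candidate rather than reporting a confident match.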

