Table: Comparison of the developed classifier's performance with methods in Refs. [47,65] (false negative rate, false positive rate, classification error).

Table: Comparison of the developed classifier's performance with methods in Refs. [47,65] (sensitivity, specificity, accuracy).
against overfitting by giving smaller clusters less influence as compared with larger clusters.
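The cluster-size weighting described above can be sketched in a toy fuzzy c-means update. This is purely illustrative, not the paper's enhanced algorithm: the function name, the 1-D data, and the particular weighting scheme (scaling each point's membership by the cluster's fuzzy size before renormalising) are all assumptions.

```python
def fcm_size_weighted(xs, k=2, m=2.0, iters=50):
    """Toy 1-D fuzzy c-means in which each cluster's pull on a point is
    scaled by the cluster's fuzzy size, so small (possibly spurious)
    clusters exert less influence. Illustrative sketch only."""
    centers = [min(xs), max(xs)] if k == 2 else list(xs[:k])
    n = len(xs)
    for _ in range(iters):
        # standard FCM membership of point i in cluster j
        u = []
        for x in xs:
            d = [abs(x - c) + 1e-9 for c in centers]
            u.append([1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                for l in range(k)) for j in range(k)])
        # fuzzy cluster sizes; smaller clusters are down-weighted
        size = [sum(u[i][j] for i in range(n)) for j in range(k)]
        total = sum(size)
        for i in range(n):
            row = [u[i][j] * size[j] / total for j in range(k)]
            s = sum(row)
            u[i] = [v / s for v in row]
        # weighted fuzzy-mean centre update
        centers = [sum(u[i][j] ** m * xs[i] for i in range(n)) /
                   sum(u[i][j] ** m for i in range(n)) for j in range(k)]
    return sorted(centers)
```

With two well-separated groups of points, the centres settle near the group means while the size weighting keeps a stray small cluster from pulling them away.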
The results in Table 4 are representative of those obtainable from pre-processed smears; hence, they provide a lower limit for the cell-level false negative and false positive rates of 0.72% and 2.53% respectively. This implies that if the classifier is presented with well-prepared slides, higher sensitivity values (> 99%) can be obtained, as seen from the ROC curve in Fig. 10. The classifier shows promising results in the classification of cancerous cells, with an overall accuracy of 98.88% on this dataset (Dataset 1).

The results in Table 5 are representative of those obtainable from a Pap smear slide containing many different types of cells. False negative and false positive rates on the smear level of 1.92% and 2.84% respectively were obtained; thus, the classifier still provided promising results on this dataset.

The results in Table 6 are representative of those obtainable from single cells from a Pap smear slide from the pathology laboratory. A false negative rate of 1.60% means that very few abnormal cells were classified as normal and, therefore, the misclassification of an abnormal smear is unlikely (sensitivity = 98.40%). The 4.80% false positive rate means that a few normal slides were classified as abnormal (specificity = 95.20%). The overall accuracy, sensitivity and specificity of the classifier on full Pap smear slides from the pathology laboratory were 93.88%, 95.92% and 91.84% respectively. The classifier's high sensitivity to cancerous cells can be attributed to the robustness of the feature selection method, which selected strictly nucleus-constrained features that potentially indicate signs of malignancy. Despite the overall effectiveness of the approach, it involves many methods, making it computationally expensive.
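The relationship between the quoted rates follows from the standard confusion-matrix definitions, where sensitivity = 1 − FNR and specificity = 1 − FPR. A short sketch makes this explicit; the counts below are hypothetical, chosen only to reproduce rates of the same form as those reported (the paper's actual cell counts are not given here).

```python
def screening_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics for a two-class screen.
    Note that sensitivity = 1 - FNR and specificity = 1 - FPR."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "fnr": fn / (tp + fn),   # abnormal called normal
        "fpr": fp / (tn + fp),   # normal called abnormal
    }

# Hypothetical counts: 125 abnormal cells with 2 missed (FNR = 1.60%),
# 125 normal cells with 6 false alarms (FPR = 4.80%).
m = screening_metrics(tp=123, fn=2, tn=119, fp=6)
```

With these counts, sensitivity comes out as 98.40% and specificity as 95.20%, matching the complements of the quoted error rates.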
This in part curtails the full potency of the approach and therefore, in the near future, deep learning approaches will be explored to reduce the complexity.
The paper presents an approach for cervical cancer classification from Pap smears using an enhanced fuzzy c-means algorithm. Trainable Weka Segmentation was proposed to achieve cell segmentation, and a three-step sequential-elimination debris rejection approach was also proposed. Simulated annealing, coupled with a wrapper filter, was used for feature selection. The evaluation and testing, conducted with the DTU/Herlev datasets and prepared pathological slides from Mbarara Regional Referral Hospital, confirmed the rationale of selecting 'good' features that embed sufficient discriminatory information to increase the accuracy of cervical cancer classification.
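The simulated-annealing wrapper for feature selection can be sketched generically. This is a minimal sketch under assumptions: the function name, cooling schedule, and the toy scoring function are invented for illustration, and `score` stands in for whatever classifier-accuracy estimate the wrapper actually evaluates.

```python
import math
import random

def sa_feature_select(score, n_feats, iters=200, t0=1.0, cool=0.98, seed=1):
    """Simulated-annealing wrapper: toggle one feature per step and
    accept worse subsets with probability exp(delta / T), so the search
    can escape local optima before the temperature cools."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_feats)]
    cur = best = score(mask)
    best_mask = mask[:]
    t = t0
    for _ in range(iters):
        j = rng.randrange(n_feats)
        mask[j] = not mask[j]                  # propose a single-bit flip
        new = score(mask)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new                          # accept the move
            if new > best:
                best, best_mask = new, mask[:]
        else:
            mask[j] = not mask[j]              # reject: revert the flip
        t *= cool                              # geometric cooling
    return best_mask, best

# Toy objective: pretend only features 0 and 2 are informative, so the
# score counts how many mask bits agree with that target subset.
target = {0, 2}
toy_score = lambda mask: sum(b == (i in target) for i, b in enumerate(mask))
mask, best = sa_feature_select(toy_score, n_feats=6)
```

In practice the score would be a cross-validated classifier accuracy, making each step expensive, which is consistent with the computational cost noted above.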
Conflicts of interest
This paper has the assent of all co-authors, and the authors declare that there are no conflicts of interest regarding its publication.

This research was approved by the Mbarara University of Science and Technology Research Ethics Committee (MUREC) and supported by the Mbarara Regional Referral Hospital Cancer Prevention Unit. It was funded by the African Development Bank and the Commonwealth Scholarship Commission, both of which consent to the publication of the research findings, including on their websites.
Acknowledgements

The authors are grateful to the African Development Bank HEST project for providing funds for this research, and to the Commonwealth Scholarship Commission for the split-site scholarship to William Wasswa, in partnership with the University of Strathclyde. The authors are also grateful to Mr Abraham Birungi, from the Pathology Department at Mbarara University of Science and Technology, Uganda, for providing support with Pap smear images. Thanks also to Dr Mario Giardini, from the University of Strathclyde, for providing support with some image analysis.
Ragothaman S, Narasimhan S, Basavaraj MG, Dewar R. Unsupervised segmentation of cervical cell images using Gaussian mixture model. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops 2016. https://doi.org/10.1109/CVPRW.2016.173.