Based on our dataset, we trained and tested the SVM classifier embedded with the PSO feature selection algorithm to minimize the potential bias during feature selection and lesion classification. In addition, 10-fold cross-validation was performed. Specifically, our dataset was randomly divided into 10 folds. As illustrated in figure 3, the SVM was trained with nine folds of data and tested with the remaining fold. The process was repeated 10 times. In each repetition, the training dataset was used to identify an optimal feature vector using the PSO algorithm, which is detailed in the supplemental materials. The following objective function is adopted to control the training outcome [20, 25]:
In the above equation, the parameters α, β, γ are weighting coefficients determined based on the feature distribution in the Euclidean space, and AUC is the computed area under a receiver operating characteristic (ROC) curve [26]. When the objective function reaches its minimum, the training process is finished. The trained classifier is then applied to make a prediction for each individual case in the testing fold. Thus, through the 10 training and testing iteration cycles, each of the 275 cases in our dataset is independently tested once and receives a classification score indicating the likelihood of the case being malignant. Finally, based on the classification scores of all 275 cases, the performance of the proposed CAD scheme is evaluated and compared using AUC and other evaluation indices (e.g. classification sensitivity, specificity, and positive and negative predictive values).
Figure 3. Flowchart of the proposed 10-fold cross-validation based training and testing method.
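To make the cross-validation procedure above concrete, the sketch below shows one way the 10-fold loop could be organized in Python. It is a minimal illustration, not the authors' implementation: the feature matrix X, the label vector y, the SVM kernel settings, and the select_features() stub (standing in for the PSO-driven feature selection and objective-function minimization) are all assumptions.

```python
# Minimal sketch of a 10-fold cross-validation loop for the SVM classifier.
# Assumptions: X is a NumPy array with one row of image features per case,
# y holds binary labels (1 = malignant, 0 = benign), and select_features()
# is a placeholder for the PSO-based feature selection described in the text.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score


def select_features(X_train, y_train):
    """Placeholder: keep all columns. The real PSO optimizer would return
    the feature subset that minimizes the paper's objective function."""
    return np.arange(X_train.shape[1])


def cross_validate(X, y, n_splits=10, seed=0):
    scores = np.zeros(len(y))  # one classification score per case
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        cols = select_features(X[train_idx], y[train_idx])
        clf = SVC(kernel="rbf", probability=True)  # kernel choice is an assumption
        clf.fit(X[train_idx][:, cols], y[train_idx])
        # each case falls into the test fold exactly once across the 10 folds
        scores[test_idx] = clf.predict_proba(X[test_idx][:, cols])[:, 1]
    return scores


# Case-based evaluation over the pooled scores, e.g.:
# auc = roc_auc_score(y, cross_validate(X, y))
```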
    3. Experiments and Results
    3.1 Evaluation of single features
Figure 4 demonstrates the Pearson correlation coefficients of all 59 initially computed features. For the two-view or four-view images of each case, the values of the correlation coefficients fall into eight categories, as illustrated in the histogram charts of figure 4. The charts show that more than 70% and 40% of the absolute correlation coefficients were smaller than 0.4 and 0.2, respectively, which indicates that the feature pool designed in our study provided a comprehensive view of the cases with relatively small redundancy.
Figure 4. Feature correlation coefficient analysis using (a) two-view images and (b) four-view images.
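The binning of pairwise correlations into eight categories can be reproduced with a short script; the sketch below is a hedged illustration assuming the 59 features per case are stored in a pandas DataFrame named features (a hypothetical variable, not from the paper).

```python
# Sketch of the feature-correlation analysis behind figure 4.
import numpy as np
import pandas as pd


def correlation_histogram(features: pd.DataFrame, n_bins: int = 8):
    """Bin the absolute pairwise Pearson correlations of all features
    into n_bins categories and return the fraction of pairs per bin."""
    corr = features.corr(method="pearson").to_numpy()
    upper = np.triu_indices_from(corr, k=1)  # unique feature pairs only
    abs_r = np.abs(corr[upper])
    counts, edges = np.histogram(abs_r, bins=n_bins, range=(0.0, 1.0))
    return counts / counts.sum(), edges
```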
Figure 5 shows the sorted areas under the ROC curves (AUC values) computed from all 59 individual image features, and table 3 summarizes the top 10 performing image features selected from the two feature pools, namely the features computed from the two-view and the four-view images, respectively. When using features computed from the two-view images of the breast with a suspicious lesion detected, the top three performing features are MeanGradient, MeanDeviation_DCT, and Mean_FFT, with AUC values of 0.678±0.042, 0.668±0.042 and 0.665±0.042, respectively. Similarly, among all features computed from the four-view images of the two breasts, the top three features are Energy_FFT, Energy_DCT, and Mean_Density, with AUC values of 0.689±0.041, 0.668±0.042 and 0.667±0.042, respectively. Among the top 10 features, six are common to both the two-view and four-view feature pools, namely MeanGradient, MeanDeviation_DCT, Energy_DCT, StdGradient, Energy_FFT, and Mean_DCT. This indicates that there is a common basis supporting classification between malignant and benign cases using the global mammographic image features. However, using two-view versus four-view images can also make a difference, because adding the two images of the negative breast may dilute the overall case-based difference between malignant and benign cases. As a result, the top 10 performing image features also differ between the two feature pools.
Table 3. Ten best performing features for two-view and four-view image prediction
Top features for two-view image prediction    Top features for four-view image prediction
MeanGradient                                  Energy_FFT
MeanDeviation_DCT                             Energy_DCT
Mean_FFT                                      Mean_Density
Energy_DCT                                    RMS
StdGradient                                   Convexity
Energy_FFT                                    MeanDeviation_DCT
Mean_Wavelet                                  Mean_DCT
Mean                                          MeanGradient
RMS_DCT                                       StdGradient
Mean_DCT                                      Entropy_DCT
Figure 5. Sorted AUC values of the 59 individual image features.
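The single-feature evaluation summarized in figure 5 and table 3 amounts to scoring each feature by its own ROC AUC and sorting; the sketch below illustrates this, again assuming a hypothetical features DataFrame and label vector y. Orienting each AUC to be at least 0.5 is an assumption, not something stated in the text.

```python
# Sketch of ranking individual image features by their ROC AUC values.
import pandas as pd
from sklearn.metrics import roc_auc_score


def rank_features_by_auc(features: pd.DataFrame, y) -> pd.Series:
    aucs = {}
    for name in features.columns:
        auc = roc_auc_score(y, features[name])
        aucs[name] = max(auc, 1.0 - auc)  # orientation step (assumption)
    return pd.Series(aucs).sort_values(ascending=False)


# top_10 = rank_features_by_auc(features, y).head(10)
```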
    3.2. Performance Assessment of the SVM Classifiers
Using the classification scores generated by the SVM classifiers on the total of 275 cases, we conducted data analysis to assess the SVM classifier's performance with respect to the training dataset and the test dataset. When using the training dataset, figure 6(a) shows two ROC curves for
between training and testing results indicated that the SVM classifiers were not over-trained and