a. Support Vector Machine (SVM): A significant amount of research has been conducted on support vector machines in recent years, and applications of support vector machines are now common in text classification. In essence, a support vector machine defines a hyperplane that attempts to separate the values of a given target field. Hyperplanes are defined using kernel functions; the most popular kernel types are linear, polynomial, radial basis, and sigmoid. Support vector machines can be used for both classification and regression. Several characteristics have been observed in vector-space-based methods for text classification [15,16], including the high dimensionality of the input space, the sparsity of document vectors, the linear separability of most text classification problems, and the observation that few features are irrelevant.

Suppose training data $\{(x_i, y_i)\}_{i=1}^{n}$ is provided, with $x_i \in \mathbb{R}^m$ and $y_i \in \{-1, +1\}$. The dual formulation of the soft-margin support vector machine with a kernel function $K$ and a control parameter $C$ is

$$\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \qquad (1)$$

$$\text{s.t.} \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C, \quad i = 1, \dots, n.$$

The kernel function $K(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$, where $\langle \cdot, \cdot \rangle$ denotes an inner product between two vectors, is introduced to handle non-linearly separable cases without any explicit knowledge of the feature mapping $\phi$. Formulation (1) shows that the computational complexity of SVM training depends on the number of training samples, denoted by $n$, rather than on the size of the feature space induced by the mapping. This becomes clear when we consider some typical kernel functions, such as the linear kernel

$$K(x_i, x_j) = \langle x_i, x_j \rangle,$$

the polynomial kernel

$$K(x_i, x_j) = \left( \langle x_i, x_j \rangle + 1 \right)^{d},$$

and the Gaussian RBF (Radial Basis Function) kernel

$$K(x_i, x_j) = \exp\!\left( -\gamma \, \| x_i - x_j \|^{2} \right),$$

where $d$ is the degree of the polynomial and $\gamma$ is a parameter to be controlled.

Classification performance is evaluated using the following measures, based on the counts of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN).

The true positive rate is the proportion of positive cases correctly classified. It is calculated with the following formula:

$$TP\ rate = \frac{TP}{TP + FN}.$$

The false positive rate is the proportion of negative cases incorrectly classified as positive. It is calculated with the following formula:

$$FP\ rate = \frac{FP}{FP + TN}.$$

The true negative rate is the proportion of negative cases correctly classified. It is calculated with the following formula:

$$TN\ rate = \frac{TN}{TN + FP}.$$

The false negative rate is the proportion of positive cases incorrectly classified as negative. It is calculated with the following formula:

$$FN\ rate = \frac{FN}{FN + TP}.$$

Precision (P) is the proportion of predicted positive cases that are correct. It is calculated with the following formula:

$$P = \frac{TP}{TP + FP}.$$

Accuracy is the proportion of the total number of predictions that are correct. It is calculated with the following formula:

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}.$$
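To make the discussion of kernels and the soft-margin parameter $C$ concrete, the following is a minimal sketch of training a kernel SVM text classifier. It is not taken from the paper: the scikit-learn API, the toy corpus, the labels, and the parameter values ($C$, kernel choice, degree, $\gamma$) are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): a kernel SVM text
# classifier using scikit-learn. Corpus, labels, and parameters are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

docs = [
    "cheap loans apply now",          # positive class (e.g., spam)
    "meeting rescheduled to friday",  # negative class
    "win a free prize today",         # positive class
    "project report attached",        # negative class
]
labels = [1, 0, 1, 0]

# Sparse, high-dimensional document vectors, as noted in the section above.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Soft-margin SVM; C is the control parameter of formulation (1).
# kernel="linear" corresponds to K(x_i, x_j) = <x_i, x_j>;
# kernel="poly" (with degree d) and kernel="rbf" (with gamma) correspond
# to the polynomial and Gaussian RBF kernels discussed above.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, labels)

print(clf.predict(vectorizer.transform(["free prize inside"])))
```

Because the dual problem is expressed entirely through kernel evaluations $K(x_i, x_j)$, swapping `kernel="linear"` for `kernel="rbf"` changes the implicit feature space without changing the training procedure, which is the point made about formulation (1).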
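The evaluation measures defined above follow directly from the four confusion-matrix counts. The sketch below shows one way to compute them; the function name and the example counts are hypothetical and only illustrate the formulas.

```python
# Illustrative sketch: computing the evaluation measures defined above
# from confusion-matrix counts. The counts passed in are example values.
def evaluation_measures(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "tp_rate":   tp / (tp + fn),                   # positives correctly classified
        "fp_rate":   fp / (fp + tn),                   # negatives wrongly called positive
        "tn_rate":   tn / (tn + fp),                   # negatives correctly classified
        "fn_rate":   fn / (fn + tp),                   # positives wrongly called negative
        "precision": tp / (tp + fp),                   # predicted positives that are correct
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),  # all correct predictions
    }

print(evaluation_measures(tp=40, fp=10, tn=45, fn=5))
```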