What Is Classification?
Classification is the process of finding patterns in structured or unstructured observed data when the output variable is categorical. A classification model takes one or more inputs and tries to predict an outcome from a finite set of discrete values.
Examples: monsoon (rain / no rain) prediction, predicting a person's gender from handwriting, email spam detection.
Logistic Regression
Logistic Regression predicts a binary outcome from a linear combination of one or more predictor (independent) variables. It is also known as the logit model. It works when the dependent variable is binary and the independent variables are independent of each other.
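As a rough sketch of what "a linear combination of predictors passed through the logit model" means in code (the intercept, coefficients and input values below are made up purely for illustration, not fitted from data):

import numpy as np

def sigmoid(z):
    # squashes the linear combination into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

# assumed, made-up intercept and coefficients for two predictors
b0, b1, b2 = -1.5, 0.8, 0.3
x1, x2 = 2.0, 1.0
p = sigmoid(b0 + b1 * x1 + b2 * x2)  # predicted probability of the positive class
print(p >= 0.5)  # classify as positive when the probability reaches 0.5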
Types Of Logistic Regression:
- Binary logistic regression: the dependent variable has only two possible categories. Example: win or loss.
- Multinomial logistic regression: the dependent variable has three or more nominal categories. Example: eye, hair or skin colour (a minimal sketch of this case follows the list below).
- Ordinal logistic regression: the dependent variable has three or more ordinal categories, i.e., categories that follow an order. Example: user ratings (1-10).
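A minimal sketch of the multinomial case, assuming the iris dataset purely as an example (scikit-learn's LogisticRegression typically handles a target with more than two classes using a multinomial formulation by default); the variable names are kept local to this sketch:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# iris has three nominal classes, so this is multinomial logistic regression
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # accuracy on the held-out split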

#imports
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
#create Logistic Regression model
logmodel = LogisticRegression()
logmodel.fit(X_train, y_train)
#predict on test data
predictions = logmodel.predict(X_test)
#confusion matrix, accuracy
print(classification_report(y_test, predictions))
print(confusion_matrix(y_test, predictions))
print(accuracy_score(y_test, predictions))
Full Code: click here
K-NN
K-nearest neighbours (K-NN) is a lazy, non-parametric algorithm. Non-parametric means it does not learn an explicit boundary or model from the data; when a new data point comes in, it is classified according to the nearest points in the training data. There is little or no training phase, which makes training very fast; instead, the K-nearest neighbour algorithm does its real work with the training data during the testing (prediction) phase.
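To make the point about using the training data at testing time concrete, here is a minimal from-scratch sketch (the toy arrays and the predict_one helper are made up for illustration; this is not the scikit-learn implementation used below):

import numpy as np
from collections import Counter

def predict_one(x_new, X_train, y_train, k=3):
    # distance from the new point to every stored training point
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # labels of the k closest training points
    nearest_labels = y_train[np.argsort(distances)[:k]]
    # majority vote among those neighbours
    return Counter(nearest_labels).most_common(1)[0][0]

# toy data: two features, two classes
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [6.0, 6.2], [5.8, 6.1]])
y_train = np.array([0, 0, 1, 1])
print(predict_one(np.array([1.1, 0.9]), X_train, y_train, k=3))  # expected: 0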

#imports
from sklearn.neighbors import KNeighborsClassifier
#make model
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train, y_train)
#see score
print(knn.score(X_test, y_test))
Full Code: click here
SVM
A Support Vector Machine (SVM) classifier plots each data point in n-dimensional space, with the value of each feature being the value of a particular coordinate. Classification is then performed by finding the hyperplane that best differentiates the classes. SVM can handle multiple continuous and categorical variables, but categorical variables have to be converted to numeric by creating dummy variables, because the tuning parameters (kernel, regularization, gamma and margin) involve mathematical computations that require numeric variables.
Regularization: the regularization parameter tells the optimizer how much misclassification of training observations to tolerate when choosing the separating hyperplane.

Margin: the margin is the gap between the separating line (hyperplane) and the closest data points of each class. The larger the margin, the better the classification.

Kernel SVM
Kernel: a transformation applied to the input variables that maps non-separable data into a space where it becomes separable. For non-linearly separable problems, this helps build a more accurate classifier.
Commonly used kernels include: linear, nonlinear, polynomial, Gaussian, radial basis function (RBF), sigmoid, Laplace, hyperbolic tangent and ANOVA.
Gamma: gamma is the kernel coefficient for nonlinear kernels such as RBF, polynomial and sigmoid. It controls how far the influence of a single training example reaches. Higher values of gamma fit the training data more closely, making the model more complex and more likely to overfit.
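A minimal sketch of how these tuning parameters appear in scikit-learn's SVC; the particular values of C (regularization) and gamma below are made-up examples, not recommendations, and X_train / y_train / X_test / y_test are assumed to be the same split used in the other snippets:

from sklearn.svm import SVC

# C is the regularization parameter, gamma the RBF kernel coefficient
clf = SVC(kernel="rbf", C=1.0, gamma=0.1)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))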

from sklearn import svm
#create a classifier
cls = svm.SVC(kernel="linear")
#train the model
cls.fit(X_train, y_train)
#predict the response
pred = cls.predict(X_test)
Full Code Link: click here
Naive Bayes
Naive Bayes is a classification algorithm based on Bayes' Theorem, with the assumption that all independent variables (features) are independent of and unrelated to each other. That assumption is why Naive Bayes is called 'naive'.
Bayes' Theorem finds the probability of an event occurring given the probability of another event that has already occurred. Mathematically it is given as P(A|B) = [P(B|A)P(A)]/P(B), where A and B are events. P(A|B), called the posterior probability, is the probability of event A (response) given that B (independent) has already occurred. P(B|A) is the likelihood of the training data, i.e., the probability of event B (independent) given that A (response) has already occurred. P(A) is the prior probability of the response variable, and P(B) is the probability of the training data, or evidence.
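A small worked example of the formula, with made-up probabilities purely to illustrate the arithmetic:

# made-up numbers to illustrate P(A|B) = P(B|A) * P(A) / P(B)
p_a = 0.01         # prior: P(A), probability of the response event
p_b_given_a = 0.9  # likelihood: P(B|A)
p_b = 0.05         # evidence: P(B)
p_a_given_b = (p_b_given_a * p_a) / p_b  # posterior: P(A|B)
print(p_a_given_b)  # approximately 0.18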

#imports
from sklearn.naive_bayes import GaussianNB, MultinomialNB
# implement Gaussian naive bayes model
ignb = GaussianNB()
pred_gnb = ignb.fit(Xtrain, ytrain).predict(Xtest)
#multinomial naive bayes
imnb = MultinomialNB()
pred_mnb = imnb.fit(Xtrain, ytrain).predict(Xtest)
Full Code: click here
Decision Tree Classification
A Decision Tree is a supervised machine learning algorithm with a tree-like structure made up of a root node, internal nodes and leaf nodes. It starts at the root node, the decision tree's first node, where the data set is split. Nodes are then selected to split the already-split data further, and this process of splitting goes on until we reach the leaf nodes, which are nothing but the classification labels.

A Decision Tree captures non-linear patterns and can be visualized and interpreted without any distributional assumptions. It is also used in feature engineering. However, the tree becomes biased when it is trained on an imbalanced dataset, so the dataset should be balanced before building the tree.
Information Gain: root and internal nodes are selected using a statistical measure called information gain. Gain is the reduction in uncertainty achieved by a split: the gain for a column is calculated by subtracting the (weighted) entropy of the dataset after splitting on that variable from the entropy of the entire dataset.

Gini index: Gini is a metric for deciding how to split a Decision Tree. If two items selected at random from a population are guaranteed (probability 1) to be of the same class, the population is pure. The Gini measurement is the probability of a random sample being classified incorrectly if you randomly pick a label according to the distribution of labels in the branch.
Entropy: entropy is a probabilistic measure of uncertainty or impurity; it quantifies the lack of information when the data is split. When a node is homogeneous its entropy is 0, which is what a data scientist wants.
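A minimal sketch of these impurity measures and of information gain for a toy split (the class counts are made up for illustration):

import numpy as np

def entropy(counts):
    # uncertainty of a node from its class counts; 0 for a pure (homogeneous) node
    p = np.array(counts) / np.sum(counts)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def gini(counts):
    # probability of misclassifying a random sample labelled by the node's distribution
    p = np.array(counts) / np.sum(counts)
    return 1.0 - np.sum(p ** 2)

# toy parent node with 10 positives and 10 negatives, split into two children
parent = [10, 10]
left, right = [8, 2], [2, 8]
weighted_child_entropy = (10 / 20) * entropy(left) + (10 / 20) * entropy(right)
info_gain = entropy(parent) - weighted_child_entropy  # reduction in uncertainty
print(entropy(parent), gini(parent), info_gain)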
#imports
from sklearn.tree import DecisionTreeClassifier
# Create Decision Tree classifier object
clf = DecisionTreeClassifier()
# Train Decision Tree classifier
clf = clf.fit(X_train, y_train)
#Predict the response for test dataset
y_pred = clf.predict(X_test)
Full Code: click here
Random Forest Classification
Random Forest is a supervised machine learning algorithm. It builds multiple decision trees, each on a randomly selected sample of the data, gets a prediction from every tree and then combines the predictions, which is also called the voting method. A random forest classifier can also be used to select the most contributing features and to handle missing values. Because it has to build and query many trees, it is slow at generating predictions; it is a time-consuming but highly accurate algorithm. It is also well known as a bagging (bootstrap aggregation) algorithm.

#imports
from sklearn.ensemble import RandomForestClassifier
#Create a Random Forest classifier
clf = RandomForestClassifier(n_estimators=100)
#Train the model using the training set
clf.fit(X_train, y_train)
#Predict the response for the test set
y_pred = clf.predict(X_test)
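The paragraph above also mentions selecting the most contributing features with a random forest; here is a minimal sketch of that, continuing from the clf fitted in the snippet above:

import numpy as np
# feature_importances_ scores how much each feature contributed to the trees' splits;
# the highest-scoring features are the most contributing ones
importances = clf.feature_importances_
for idx in np.argsort(importances)[::-1]:
    print(f"feature {idx}: {importances[idx]:.3f}")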
Full Code: click here
Conclusion
Classification algorithms work on discrete outcomes: they learn patterns from data and assign each observation to a class. Examples include face detection, speech recognition, document classification and handwriting recognition.