K-Nearest Neighbors in Python

In this part of Learning Python, we cover machine learning in Python.
Written by Paayi Tech | 07-Jul-2019

Introduction:

K-Nearest Neighbor (KNN) classification is one of the most fundamental and straightforward classification methods, and it should be one of the first choices for a classification study when there is little or no prior knowledge about the distribution of the data. K-nearest neighbor classification was developed from the need to perform discriminant analysis when reliable parametric estimates of probability densities are unknown or difficult to determine. The K-nearest neighbors (KNN) algorithm is a type of supervised machine learning algorithm. KNN is extremely easy to implement in its most basic form, yet it can perform quite complex classification tasks. It is a lazy learning algorithm because it does not have a dedicated training phase; rather, it uses all of the data for training while classifying a new data point or instance. KNN is also a non-parametric learning algorithm, which means that it does not assume anything about the underlying data. This is an extremely useful property, since most real-world data does not really follow any theoretical assumption.

 

Explanation:

The intuition behind the KNN algorithm is one of the most straightforward of all the supervised machine learning algorithms. It simply computes the distance from a new data point to all the training data points. The distance can be of any type, e.g., Euclidean or Manhattan, and so forth. It then selects the K nearest data points, where K can be any integer value. Lastly, it assigns the new data point to the class to which the majority of those K data points belong, as in the sketch below.
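To make those three steps concrete, here is a minimal from-scratch sketch of the procedure just described, assuming Euclidean distance and a simple majority vote; the function name knn_predict is our own for illustration, not from any library.

```python
# Minimal sketch of the KNN classification procedure described above.
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k):
    """Classify `query` by majority vote among its k nearest training points."""
    # Step 1: compute the distance from the query to every training point
    # (Euclidean here, but any metric could be substituted).
    distances = [
        (math.dist(point, query), label)
        for point, label in zip(train_points, train_labels)
    ]
    # Step 2: keep the k training points with the smallest distances.
    nearest = sorted(distances, key=lambda pair: pair[0])[:k]
    # Step 3: assign the class that the majority of the k neighbors belong to.
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```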

This algorithm can be seen in action with the help of a simple example. Suppose you have a dataset with two variables which, when plotted, looks like the one in the following figure.

Figure 1: KNN algorithm explanation.

 

The task is to assign the new data point marked 'X' to either the "Blue" class or the "Red" class. The coordinates of the data point are x = 45 and y = 50. Assume that the value of K is 3. The KNN algorithm begins by finding the distance from point X to all the other points. It then finds the three points with the least distance to point X. This is shown in the figure below, where the three nearest points have been encircled.

The last step of the KNN algorithm is to assign the new point to the class to which most of the three nearest points belong. Figure 2 shows that two of the three closest points belong to the class "Red," while one belongs to the class "Blue." Therefore, the new data point is classified as "Red."

Figure 2: KNN algorithm assigning the new data point; the three nearest points are encircled.
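As a sanity check, the scenario above can be run through the knn_predict sketch from the previous section. The coordinates of the "Blue" and "Red" training points below are invented for illustration (the exact values behind Figures 1 and 2 are not given); only the query point (45, 50) and K = 3 come from the text.

```python
# Hypothetical coordinates standing in for the points in Figures 1 and 2.
train_points = [(20, 30), (42, 58), (40, 55), (48, 47), (55, 52), (70, 20)]
train_labels = ["Blue", "Blue", "Red", "Red", "Red", "Blue"]

# Two of the three nearest neighbors are "Red", so X is classified "Red".
print(knn_predict(train_points, train_labels, query=(45, 50), k=3))  # -> Red
```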

Working:

The working of KNN is very simple. It relies mainly on computing distances, and it is a non-parametric machine learning algorithm.

Non-Parametric Algorithm?

KNN is a non-parametric machine learning algorithm. Unlike parametric algorithms, which make assumptions about the form of the underlying data, non-parametric algorithms work on the principle that similar inputs produce approximately similar outputs.

Algorithms that do not make strong assumptions about the form of the mapping function are called non-parametric machine learning algorithms. By not making assumptions, they are free to learn any functional form from the training data.

Non-parametric algorithms are useful when you have a vast amount of data and no prior knowledge, and when you don't want to worry too much about choosing exactly the right features.

Distance Methods:

The working of KNN depends on distance methods. KNN first calculates the distance between the given feature vector x and every sample in the training set.

To find the k nearest neighbors, we mostly use the Euclidean distance. Let x be an input sample with p features (x_1, x_2, x_3, …, x_p), and let n be the total number of training samples. Then the Euclidean distance between x and the i-th sample x^{(i)} can be represented as:

$$ d(x, x^{(i)}) = \sqrt{(x_1 - x_1^{(i)})^2 + (x_2 - x_2^{(i)})^2 + \cdots + (x_p - x_p^{(i)})^2} $$

 

Different Distance Methods:

 

Different distance methods are used depending on the type of data. The following are some common distance methods.

 

Euclidean Distance:

The Euclidean distance gives the length of the straight line between two points:

$$ d(a, b) = \sqrt{\sum_{j=1}^{p} (a_j - b_j)^2} $$

 

Manhattan Distance:

 

Manhattan distance measures the distance between two points in a rectilinear (grid-like) fashion:

$$ d(a, b) = \sum_{j=1}^{p} |a_j - b_j| $$

 

Minkowski distance:

 

Minkowski distance is a generalization of both the Euclidean and Manhattan distances:

$$ d(a, b) = \left( \sum_{j=1}^{p} |a_j - b_j|^q \right)^{1/q} $$

where q = 1 yields the Manhattan distance and q = 2 yields the Euclidean distance.
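As a quick illustration, the snippet below evaluates all three metrics on one pair of vectors using SciPy's scipy.spatial.distance helpers (the vectors themselves are arbitrary examples). Note how Minkowski reduces to Manhattan at order 1 and Euclidean at order 2; SciPy calls the order parameter p.

```python
# Comparing the three distance metrics on the same pair of vectors.
import numpy as np
from scipy.spatial import distance

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 3.0])

print(distance.euclidean(a, b))       # sqrt(3^2 + 2^2 + 0^2) ≈ 3.606
print(distance.cityblock(a, b))       # |3| + |2| + |0| = 5  (Manhattan)
print(distance.minkowski(a, b, p=2))  # order 2 reduces to Euclidean ≈ 3.606
print(distance.minkowski(a, b, p=1))  # order 1 reduces to Manhattan = 5.0
```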

 

All of the above distances are used when the features are continuous. They are not meaningful for attributes that have no numeric scale. For categorical data, we use the Hamming distance.

 

Hamming Distance:

It is used for categorical data. The Hamming distance between two equal-length vectors is the number of positions at which they differ. In image classification, for example, it is used to compare binary feature descriptors.
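Below is a minimal sketch of Hamming distance as a mismatch count; the example string pair and bit vectors are purely illustrative.

```python
# Hamming distance: count of positions where two equal-length sequences differ.
def hamming(u, v):
    assert len(u) == len(v), "Hamming distance needs equal-length sequences"
    return sum(a != b for a, b in zip(u, v))

print(hamming("karolin", "kathrin"))        # 3 (positions 3, 4, and 5 differ)
print(hamming([1, 0, 1, 1], [1, 1, 0, 1]))  # 2 (positions 2 and 3 differ)
```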

Lazy Learner:

K nearest neighbors is often referred to as a lazy learner. As described under the non-parametric heading, unlike other classifiers it does not build a model from assumptions about the training data; instead, it memorizes the whole dataset, and the prediction is made on the basis that similar inputs have approximately similar outputs. That is why it is called a lazy learner, as the sketch below illustrates.
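To see the "lazy" behavior with a real library, here is scikit-learn's KNeighborsClassifier on the same hypothetical points used earlier: fit() essentially just stores the training data (at most building an index over it), and all the distance computation happens at predict() time.

```python
# KNN with scikit-learn; the toy data points are illustrative assumptions.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[20, 30], [42, 58], [40, 55], [48, 47], [55, 52], [70, 20]]
y_train = ["Blue", "Blue", "Red", "Red", "Red", "Blue"]

clf = KNeighborsClassifier(n_neighbors=3)  # k = 3, Euclidean by default
clf.fit(X_train, y_train)                  # "training" just memorizes the data
print(clf.predict([[45, 50]]))             # -> ['Red']
```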

Usage of KNN:

  • Text Mining

  • Recommender Systems

  • Medicine

  • Finance

  • Bank Customer Profiling




