In the first part of the article, we’ll talk about Machine Learning tasks (Supervised, Unsupervised, and Reinforcement Learning) and such algorithms as Linear Regression, K-Nearest Neighbors (kNN), and Convolutional Neural Networks (CNN). All algorithms are accompanied by code examples that you can play with on your own.
Before we start reviewing the top ML algorithms, though, let’s first describe and define supervised, unsupervised, and reinforcement learning tasks, and learn how to differentiate them.
Machine Learning Tasks
Today, we can distinguish three major types of Machine Learning algorithms. Each of them comes with a few methods that can be applied to improve computer systems, to help them more efficiently solve specific problems or tasks that vary in complexity.
It’s quite important to understand that machine learning is usually not capable of dealing with general intelligent tasks; it works best on narrow, well-defined problems.
Supervised Learning

This type of learning algorithm is most effective when the relationships and correlations in the data are clear. In other words, supervised learning requires a set of training data that has already been organized and correctly labeled. To apply supervised learning, you use input variables (X) and an output variable (Y), and the algorithm learns the mapping function from the input to the output.
Supervised learning is classified into the following categories of algorithms:
- Classification: To predict the outcome of a given sample where the output variable is in the form of categories. Examples: credit scoring (according to the borrower's questionnaire, a decision is made on issuing/refusing a loan); medical diagnostics (the diagnosis is determined by a set of medical characteristics).
- Regression: To predict the outcome of a given sample where the output variable is in the form of real values. Examples: real estate (based on characteristics of the area, ecological situation, transport connectivity, estimate the cost of housing), healthcare (according to postoperative indicators, estimate organ healing time), credit scoring (according to borrower's questionnaire, estimate credit limit).
- Forecasting: The process of making predictions about the future based on the historical and present data. It is most commonly used to analyze trends. A common example might be the estimation of next year’s sales based on the sales of the current and previous years.
The most common examples of supervised learning are: Linear Regression, Logistic Regression, k-NN, SVM, Random Forests, and Naive Bayes.
What are the pros and cons of using supervised learning? It allows you to collect data and produce output based on previous experience, and it supports the development of performance criteria grounded in that experience. In addition, supervised machine learning helps solve various types of real-world computation problems. On the other hand, there are some cons, too: classifying large amounts of data is always challenging, and training supervised models requires a lot of computation time.
Unsupervised Learning

This type of learning algorithm is the opposite of supervised learning: it uses unlabeled training data to model the data’s underlying structure. Unsupervised learning problems have input variables (X) without corresponding output variables.
The key advantage of using unsupervised algorithms is their ability to handle complex data, where the relationships are too intricate or the correlations are unidentified. In such cases, grouping and clustering methods are applied.
Unsupervised learning can be classified into the following categories of algorithms:
- Clustering: A clustering problem is about discovering the inherent groupings in the data, such as grouping customers by purchasing behavior. Examples: marketing (based on the results of marketing research, among a multitude of consumers, select characteristic groups according to the degree of interest in the product being promoted).
- Association: An association rule learning problem is aimed at discovering the rules that describe large portions of your data, such as “people who buy X also tend to buy Y.”
- Dimensionality reduction: Reducing the number of variables under consideration. In many applications, raw data has very high-dimensional features, and some features are redundant or irrelevant to the task. Reducing the dimensionality helps uncover the true, latent relationships.
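The association idea above can be illustrated with plain support and confidence counting. This is only a sketch over an invented shopping-basket dataset; a real system would mine rules with an algorithm such as Apriori:

```python
# Toy transactions for a "people who buy X also tend to buy Y" rule
transactions = [
    {"bread", "butter"},
    {"bread", "butter", "milk"},
    {"bread", "jam"},
    {"milk", "butter"},
    {"bread", "butter", "jam"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Confidence of the rule {bread} -> {butter}: of the people who bought bread,
# what fraction also bought butter?
confidence = support({"bread", "butter"}) / support({"bread"})
print(round(confidence, 2))  # 0.75
```

Here 4 of 5 baskets contain bread, and 3 of those also contain butter, so the rule holds with 75% confidence.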
The most common examples of unsupervised learning are: k-means clustering, DBSCAN, PCA, LDA, Apriori, and some types of neural networks like Kohonen’s Self-Organizing Maps or Neural Associative memory, including Hopfield and other attractor networks.
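As a minimal sketch of the first of these algorithms, here is k-means in plain numpy. The two-blob dataset, the number of iterations, and the seed are all illustrative choices, not a production implementation:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Minimal k-means: alternately assign points to the nearest centroid
    and move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Distance from every point to every centroid, then nearest-centroid labels
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # leave a centroid in place if its cluster is empty
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs: five points near (0, 0) and five near (5, 5)
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 5.0)]) + 0.05 * np.arange(10)[:, None]
labels, centroids = kmeans(X, k=2)
print(labels)  # the first five points share one label, the last five the other
```

Note that the algorithm never sees any labels; the grouping emerges purely from distances in the data.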
There are also downsides to using Unsupervised Learning. If you are looking for a 100% accurate result, it might not happen, because there are no labeled outputs to check against: the model works and learns from raw data without any previous knowledge. Likewise, it can be a sluggish process; the learning phase tends to be time-consuming, because the algorithm has to evaluate many possible groupings of the data. NOTE: The more features you add, the more the complexity increases.
Reinforcement Learning

This type of learning resembles supervised learning, with the key difference that it includes a self-improvement mechanism. A reinforcement learning system chooses its next action based on its current state, and its behavior is refined through trial and error so as to maximize the resulting reward over time. Reinforcement algorithms update their training data with new information, which is chosen based on a score or a predetermined value that assesses how well the data is categorized; it is then either added to the data set or discarded.
Because Reinforcement Learning requires a lot of data, it is most applicable in domains where simulated data is readily available like gameplay or robotics.
- Reinforcement Learning is widely used in building AI for computer gaming. A famous example is AlphaGo, the first computer program to defeat a world champion in the ancient game of Go; its successor, AlphaGo Zero, learned the game entirely through self-play.
- In robotics and industrial automation, Reinforcement Learning is used to enable robots to create efficient adaptive control systems for themselves, which learn from their own experience and behavior.
- Other applications of Reinforcement Learning include text summarization engines, dialog agents (text, speech) that learn from user interactions and improve with time, learning optimal treatment policies in healthcare and Reinforcement Learning based agents for online stock trading.
The most common examples of Reinforcement Learning are: Monte Carlo, Q-Learning, SARSA, DQN, DDPG, TRPO, and PPO.
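Tabular Q-Learning, the second algorithm in this list, is small enough to sketch end to end. The tiny "corridor" environment below and all hyperparameter values are illustrative choices of my own:

```python
import numpy as np

# A tiny deterministic "corridor" environment: states 0..4, actions 0 = left,
# 1 = right. Reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore
        if rng.random() < epsilon:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(Q[state].argmax())
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge Q towards reward + discounted best future value
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.argmax(axis=1))  # the learned greedy policy, one action per state
```

After training, the greedy policy moves right in every non-terminal state, which is exactly the trial-and-error self-improvement described above.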
Machine Learning Algorithms
Generally speaking, a Machine Learning system is much more advanced than a single model: it often involves several algorithms working together to produce a better result. Below, we have listed the pros and cons of some popular Machine Learning algorithms, along with their specifics in terms of usage and suitability for different areas.
Linear Regression

Linear regression is one of the most popular algorithms, widely used in both statistics and machine learning. Predictive modeling is primarily concerned with minimizing error and making the most accurate predictions possible.
Simply put, linear regression is an equation that describes the line that best fits the relationship between the input variables (x) and the output variable (y), by assigning specific weights (B) to the input variables.
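Fitting that line can be sketched in a few lines of numpy via ordinary least squares. The toy dataset below (roughly y = 2x + 1 with a little noise) is purely illustrative:

```python
import numpy as np

# Toy data: y is approximately 2*x + 1 plus a little noise
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 9.0, 11.1])

# Design matrix [1, x]; least squares finds the weights B = (intercept, slope)
X = np.column_stack([np.ones_like(x), x])
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)

print(round(float(intercept), 2), round(float(slope), 2))

# The fitted line y = intercept + slope * x can now predict new values
y_new = intercept + slope * 6.0
```

The recovered weights land close to the true intercept 1 and slope 2, and prediction is just evaluating the line.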
Convolutional Neural Network (CNN)
CNNs are made up of multiple convolutional layers that are fully or partially connected. In networks like these, variations of multilayer perceptrons are used. To pass results on to the next layer, the convolutional layers apply a convolution operation instead of a full matrix of weights. This way of transferring data keeps the network compact, because each convolution reuses a small, shared set of parameters.
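The convolution step itself can be sketched in plain numpy. This is the "valid" sliding-window product-sum that a convolutional layer computes (strictly speaking, deep learning libraries implement cross-correlation); the edge-detector kernel and the tiny image are illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image and take
    the elementwise product-sum at each position (the core op of a conv layer)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image with a dark/bright boundary
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)
print(conv2d(image, kernel))  # strong response only at the boundary column
```

Note how the same four kernel weights are reused at every position, which is exactly why the layer needs so few parameters.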
In Convolutional Neural Networks for Sentence Classification, Yoon Kim describes the procedure and results of applying CNNs to text classification tasks. He presents a model built on top of word2vec, runs a series of experiments, and evaluates it against numerous benchmarks.
K-Nearest Neighbors (kNN)
This learning algorithm is frequently called ‘lazy’, because the training data requires no processing beyond being stored together with its class labels; the real computation is deferred until prediction time.

kNN is a predictive algorithm based on feature similarity: it compares the attributes of a new data point to the attributes of labeled examples in order to estimate how similar the new point is to each class.

The beauty of this concept is its fine combination of being simple yet powerful when it comes to classifying uncorrelated and messy data. Another advantage of kNN is that it makes no predetermined assumptions about the data. It can be a versatile solution not only for classification but also for other prediction problems.

Like any other algorithm, kNN has its cons, too. Avoid using it when computational requirements are a concern, due to its high CPU and memory usage. Also, this algorithm is not the best choice for working with clean, linear data with a small number of outliers.
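A minimal kNN classifier fits in a dozen lines. The two toy clusters, the Euclidean distance metric, and k = 3 are illustrative choices:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean distance to each point
    nearest = np.argsort(dists)[:k]                  # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two small labeled clusters
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
y_train = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(X_train, y_train, np.array([0.3, 0.2])))  # "a" — nearest the first cluster
```

Notice there is no training step at all, which is the "lazy" behavior described above, and also the source of the high prediction-time cost: every query recomputes distances to the entire training set.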
This is just the beginning of our overview. In the second part of the article, we’ll discuss the most common methods of statistical analysis, with all of their advantages and disadvantages. Stay tuned!