One of the most sought-after skills of a modern Software Engineer or Computer Scientist is Machine Learning.
So, what is Machine Learning?
What we know about computers is that programmers write programs and computers follow the steps specified in programs. Wouldn’t it be wonderful if computers could learn and improve their performance with experience / feedback?
That’s where Machine Learning comes in.
Machine Learning algorithms enable computers to learn from data and/or experience (the latter especially in the case of knowledge-based learning) to improve their performance.
How do computers do that?
Let’s start with inductive learning.
Traditional programming - writing functions that map inputs to outputs.
Inductive Learning - learning / inducing functions from example inputs and outputs.
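To make the contrast concrete, here is a minimal sketch in Python (assuming NumPy is installed; the temperature-conversion example and the variable names are my own illustration, not from the original post):

```python
import numpy as np

# Traditional programming: we write the input-to-output function ourselves.
def fahrenheit(celsius):
    return 1.8 * celsius + 32.0

# Inductive learning: we only have example (input, output) pairs
# and induce the function from them.
celsius_examples = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit_examples = np.array([32.0, 50.0, 68.0, 86.0, 104.0])

# Fit a degree-1 polynomial (a straight line) to the examples.
slope, intercept = np.polyfit(celsius_examples, fahrenheit_examples, 1)

print(slope, intercept)           # roughly 1.8 and 32.0, recovered from data
print(slope * 25.0 + intercept)   # prediction for an unseen input
```

The induced line matches the hand-written function here only because the underlying relationship really is linear; with noisier or more complex data the learned function is an approximation.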
There is another aspect to Machine Learning. In some problem domains, it’s just too hard to write a program that solves the problem. For example, it’s too hard to write a program that can recognize handwritten letters or turn speech to text.
These are all application areas for Machine Learning.
Example Applications
- US Postal Service - recognition of handwritten postal code
- Amazon product recommendation
- Netflix movie recommendation
- Netflix offered a $1 million prize for the first program that could improve its movie recommendation algorithm by 10%. Alas! The prize has already been won! But there is still quite a lot of prize money waiting to be won at Kaggle if you are interested [3]!
- Facebook’s customized Newsfeed for each and every user
- Gmail's spam email detection
- Google's self-driving car - object recognition
- iPhone Siri - speech recognition
Let’s try to define Machine Learning.
"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
- Tom Mitchell [1].
So, Machine Learning agents are agents whose performance improves with experience.
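Here is a rough sketch of how T, P, and E might be instantiated, assuming scikit-learn is installed (the synthetic dataset, the choice of logistic regression, and the sample sizes are my own illustration): the task T is binary classification, the performance measure P is accuracy on held-out data, and the experience E is a growing set of labeled training examples.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Task T: classify points into one of two classes.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Experience E: increasing numbers of labeled training examples.
for n in (50, 200, 1000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    # Performance measure P: accuracy on examples the model has not seen.
    print(n, model.score(X_test, y_test))
```

Accuracy should tend to rise as n grows, which is exactly what "improves with experience E" means in Mitchell's definition.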
When do we use Machine Learning?
- Existence of Patterns.
- Pattern: Repeated feature among a set of objects.
- The existence of patterns means it may be possible for us to build a model that explains our data.
- Sidenote: Mathematics is the study of patterns.
- Hard to write an equation or algorithm that solves the problem.
- Availability of Data [2].
Machine Learning algorithms are classified according to the feedback available to the learner; a small code sketch contrasting the first two categories follows the list below.
- Supervised Learning
- Learning / inducing a model / function from example input-output pairs.
- Unsupervised Learning
- Learning patterns from inputs when no output is supplied.
- Reinforcement Learning
- Learning how to behave from feedback given at the end of a sequence of steps.
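As a rough sketch of the first two categories, assuming scikit-learn is installed (the toy data and model choices are my own illustration; reinforcement learning is omitted because it needs an environment that hands out feedback over a sequence of steps):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Toy inputs: 20 points around (0, 0) and 20 points around (5, 5).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)   # output labels for each input point

# Supervised learning: example input-output pairs are provided.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[4.5, 5.2]]))    # predicts a label for a new input

# Unsupervised learning: only the inputs are provided.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                   # groupings discovered without any labels
```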
Knowledge Ontology for Machine Learning
- Supervised Learning
- Recommendation
- Classification
- Bayesian Learning
- Neural Network
- Deep Learning
- Decision Tree
- Support Vector Machine
- Genetic Programming
- Ensemble Learning
- Regression
- Unsupervised Learning
- Clustering
- Finding Independent Features
- Knowledge Based Learning
- Once you have learned something, how do you keep adding to your knowledge?
- Brings together Knowledge Representation and Machine Learning.
- Explanation Based Learning
- Relevance Based Learning
- Inductive Logic Programming
- Statistical Learning
- Learning Hidden Markov Models
- Reinforcement Learning
References
1. Machine Learning by Tom Mitchell
2. Learning From Data by Yaser S. Abu-Mostafa and others
3. Kaggle
4. Coursera course on Machine Learning by Andrew Ng
5. Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
6. Programming Collective Intelligence by Toby Segaran