In supervised learning, one is given examples of objects together with labels (such as tissue samples labeled as cancerous or non-cancerous, or images of handwritten digits labeled with the correct digit in 0-9), and the goal is to learn a prediction model which, given a new object, makes an accurate prediction. The notion of accuracy depends on the learning problem under study and is measured by a performance measure of interest. A supervised learning algorithm is said to be 'statistically consistent' if it returns an 'optimal' prediction model with respect to the desired performance measure in the limit of infinite data. Statistical consistency is a fundamental notion in supervised machine learning, and the design of consistent algorithms for various learning problems is therefore an important question. While this question is well understood for binary classification and some other specific learning problems, it remains open for general multiclass learning problems. The talk will detail several aspects of this conceptual tool and give principles for the design and analysis of consistent algorithms in machine learning.
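The notion of consistency described above can be illustrated with a small simulation (a minimal sketch, not part of the talk): for a one-dimensional binary classification problem with X uniform on [0,1] and P(Y=1|X=x) = x, the Bayes-optimal rule under 0-1 loss thresholds at 0.5, with Bayes risk 0.25. Empirical risk minimization over threshold classifiers is consistent here, so the excess risk of the learned threshold should shrink as the sample size grows. The distribution, the threshold grid, and the sample sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_risk(t):
    # 0-1 risk of the threshold rule h_t(x) = 1[x > t] when X ~ Uniform[0,1]
    # and P(Y = 1 | X = x) = x; integrates to t^2 - t + 1/2, minimized at t = 0.5
    return t**2 - t + 0.5

bayes_risk = 0.25  # true_risk(0.5)
thresholds = np.linspace(0.0, 1.0, 201)

for n in [100, 10_000, 100_000]:
    x = rng.uniform(size=n)
    y = (rng.uniform(size=n) < x).astype(int)  # labels drawn with P(Y=1|X=x) = x
    # empirical 0-1 risk of each candidate rule; pick the empirical minimizer (ERM)
    emp = np.array([np.mean((x > t).astype(int) != y) for t in thresholds])
    t_hat = thresholds[np.argmin(emp)]
    excess = true_risk(t_hat) - bayes_risk
    print(f"n = {n:>7d}  t_hat = {t_hat:.3f}  excess risk = {excess:.5f}")
```

As n increases, the learned threshold approaches 0.5 and the excess risk over the Bayes risk tends to zero, which is exactly the behavior a consistent algorithm must exhibit for this performance measure.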