Loss Function | Cost Function | Objective Function

Loss function, cost function, and objective function are closely related terms, but there are slight differences among them.


  • 1 The Loss Function, also called the error function, measures the quality of a single prediction. If the prediction is exactly the same as the ground truth, the loss (or error) is zero; otherwise, the loss function quantifies how bad the mistake is. Examples are squared loss, hinge loss, and 0/1 loss. 
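As a rough sketch, these three per-example losses can be written as simple functions (names here are illustrative, not from any particular library):

```python
def squared_loss(y_true, y_pred):
    # Penalizes the squared distance between prediction and ground truth.
    return (y_true - y_pred) ** 2

def hinge_loss(y_true, y_pred):
    # y_true in {-1, +1}; zero loss only when the prediction is on the
    # correct side of the margin, i.e. y_true * y_pred >= 1.
    return max(0.0, 1.0 - y_true * y_pred)

def zero_one_loss(y_true, y_pred):
    # 1 for a wrong prediction, 0 for a correct one.
    return 0.0 if y_true == y_pred else 1.0

# A prediction that exactly matches the ground truth gives zero loss:
print(squared_loss(2.0, 2.0))   # 0.0
print(zero_one_loss(1, 1))      # 0.0
```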
  • 2 Cost Function. A cost function evaluates the overall cost of the model on the training set. It is often a sum (or average) of loss functions over the training set plus a model-complexity penalty. Every loss function can generate a cost function, but not every cost function can be decomposed into per-example losses. For example, the area under the ROC curve cannot be decomposed into a loss function. Examples include the mean squared error and the SVM cost function. 
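As a minimal sketch of that definition, the cost below averages a squared loss over the training set and adds an L2 complexity penalty (the name `cost` and the regularization strength `lam` are illustrative assumptions, not from the text):

```python
def cost(weights, X, y, lam=0.1):
    # Mean squared error over the training set plus an L2 complexity penalty.
    # weights, X, y are plain Python lists to keep the sketch dependency-free.
    n = len(y)
    mse = sum((sum(w * x for w, x in zip(weights, xi)) - yi) ** 2
              for xi, yi in zip(X, y)) / n
    penalty = lam * sum(w * w for w in weights)
    return mse + penalty

# With a perfect linear fit, only the complexity penalty remains:
print(cost([1.0], [[1.0], [2.0]], [1.0, 2.0]))  # 0.1
```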
  • 3 The Objective Function is the most general term: it can be either the loss/cost function (to be minimized) or its negative (to be maximized). It can also be a reward function, profit function, utility function, fitness function, etc. 
Different objective functions correspond to different models, each defined by a different physical meaning, such as:
- maximize the posterior probabilities (e.g., naive Bayes)
- maximize a fitness function (genetic programming)
- maximize the total reward/value function (reinforcement learning)
- maximize information gain/minimize child node impurities (CART decision tree classification)
- minimize a mean squared error cost (or loss) function (CART decision tree regression, linear regression, adaptive linear neurons, ...)
- maximize log-likelihood or minimize cross-entropy loss (or cost) function
- minimize hinge loss (support vector machine)
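The log-likelihood bullet above illustrates the maximize/minimize duality: maximizing the log-likelihood of a Bernoulli model is the same as minimizing cross-entropy. A minimal sketch for one binary example:

```python
import math

def cross_entropy(y_true, p_pred):
    # Binary cross-entropy for a single example; minimizing this is
    # equivalent to maximizing the Bernoulli log-likelihood, since
    # cross_entropy = -log_likelihood for one observation.
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

# A confident correct prediction has low loss; a confident wrong one, high loss:
print(cross_entropy(1, 0.9))  # ~0.105
print(cross_entropy(1, 0.1))  # ~2.303
```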
