[Google_Bootcamp_Day13]

ML strategy 2

Human-level performance

Some of the tasks that humans do are close to “perfection”, which is why machine learning tries to mimic human-level performance. The graph below shows the performance of humans and of machine learning over time.

[Figure: machine-learning performance improves quickly until it approaches human-level performance, then progress slows toward Bayes error]

Machine learning progress slows down once it surpasses human-level performance.
One reason is that human-level performance is often close to the Bayes optimal error, especially for natural perception problems such as vision and speech.
Bayes optimal error is defined as the best possible error: no function mapping from x to y can surpass that level of accuracy (a small simulation after the list below makes this concrete).
The other reason is that as long as machine learning performs worse than humans, you can improve it with several tools; these tools become harder to use once the model surpasses human-level performance.

These tools are …

  • get labeled data from humans
  • gain insight from manual error analysis
  • better analysis of bias/variance
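
To make the idea of a Bayes error floor concrete, here is a minimal synthetic sketch (all numbers are illustrative, not from the course): with 10% of the labels flipped at random, even the true decision rule cannot score below 10% error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic task: the true decision rule is "x > 0.5", but 10% of the
# labels are flipped at random, so the Bayes optimal error here is 10%.
x = rng.uniform(0, 1, n)
y = (x > 0.5).astype(int)
flip = rng.uniform(0, 1, n) < 0.10
y[flip] = 1 - y[flip]

# Even the best possible mapping from x to y (the true rule itself)
# cannot score below the label-noise floor.
pred = (x > 0.5).astype(int)
print(f"error of the optimal rule: {np.mean(pred != y):.3f}")  # ~0.100
```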

Avoidable bias

[Figure: training and development errors for the cat classification scenarios below]

Example: Cat vs. Non-Cat classification

  • Scenario A
    There is a 7% gap between the training error and the human-level error, which means the algorithm isn't fitting the training set well since the target is around 1%. To resolve the issue, we use bias reduction techniques such as training a bigger neural network or training longer.

  • Scenario B
    The training error is good since it is only 0.5% above the human-level error. The difference between the training error and the human-level error is called the avoidable bias. The focus here is to reduce the variance, since the gap between the training error and the development error is 2%. To resolve the issue, we use variance reduction techniques such as regularization or getting a bigger training set (the arithmetic for both scenarios is sketched below).
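
For concreteness, here is the arithmetic behind both scenarios. The absolute numbers are a hypothetical reconstruction consistent with the gaps quoted above (human-level error 1% for A and 7.5% for B, training error 8%, development error 10%):

```python
scenarios = {
    # name: (human-level error %, training error %, dev error %)
    # hypothetical numbers, consistent with the gaps quoted above
    "A": (1.0, 8.0, 10.0),
    "B": (7.5, 8.0, 10.0),
}

for name, (human, train, dev) in scenarios.items():
    avoidable_bias = train - human  # gap to the proxy for Bayes error
    variance = dev - train          # generalization gap
    print(f"Scenario {name}: avoidable bias {avoidable_bias:.1f}%, "
          f"variance {variance:.1f}%")
# Scenario A: avoidable bias 7.0%, variance 2.0% -> reduce bias
# Scenario B: avoidable bias 0.5%, variance 2.0% -> reduce variance
```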

Understanding human-level performance

Human-level error gives an estimate of Bayes error.

Example 1: Medical image classification

The definition of human-level error depends on the purpose of the analysis. When the goal is a proxy for Bayes error, use the best performance achievable, here the 0.5% error of a team of experienced doctors, so by definition the Bayes error is lower than or equal to 0.5%.

Example 2: Error analysis

  • Scenario A
    In this case, the choice of human-level performance doesn't affect the conclusion. The avoidable bias is between 4% and 4.5% and the variance is 1%, so the focus should be on bias reduction techniques.

  • Scenario B
    In this case, the choice of human-level performance doesn't affect the conclusion either. The avoidable bias is between 0% and 0.5% and the variance is 4%, so the focus should be on variance reduction techniques.

  • Scenario C
    In this case, the estimate for Bayes error has to be 0.5%, the best human-level performance available: a higher estimate such as 0.7% or 1% would make the avoidable bias zero or negative, which is impossible since the model cannot beat Bayes error (see the sketch below). With that estimate, the avoidable bias is 0.2% and the variance is 0.1%, so the focus should be on bias reduction techniques.
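
To see why 0.5% is the only workable Bayes estimate here, here is the computation for all three candidate references; the training and development errors (0.7% and 0.8%) are hypothetical values implied by the 0.2% and 0.1% gaps above:

```python
train_err, dev_err = 0.7, 0.8  # hypothetical, implied by the gaps above

for human_err in (1.0, 0.7, 0.5):
    bias = train_err - human_err
    print(f"human-level {human_err}%: avoidable bias {bias:+.1f}%, "
          f"variance {dev_err - train_err:.1f}%")

# human-level 1.0%: avoidable bias -0.3%  (impossible: below Bayes error)
# human-level 0.7%: avoidable bias +0.0%
# human-level 0.5%: avoidable bias +0.2%  <- only consistent estimate
```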

Summary of bias/variance with human-level performance

  • Human-level error is a proxy for Bayes error
  • If diff(human-level error, training error) > diff(training error, development error), focus on bias reduction techniques
  • If diff(human-level error, training error) < diff(training error, development error), focus on variance reduction techniques (both rules are sketched in code below)
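
The two rules above fit in a small helper; a minimal sketch, where the function name diagnose and its inputs are illustrative, not from the course:

```python
def diagnose(human_err: float, train_err: float, dev_err: float) -> str:
    """Pick a focus area, treating human-level error as a proxy for Bayes error."""
    avoidable_bias = train_err - human_err
    variance = dev_err - train_err
    if avoidable_bias > variance:
        return "bias reduction"
    return "variance reduction"

# Cat example revisited with the hypothetical numbers used earlier:
print(diagnose(human_err=1.0, train_err=8.0, dev_err=10.0))  # bias reduction
print(diagnose(human_err=7.5, train_err=8.0, dev_err=10.0))  # variance reduction
```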

Surpassing human-level performance

Example 1: Classification task

  • Scenario A
    In this case, the best estimate of Bayes error is 0.5%, so the avoidable bias is 0.1% and the variance is 0.2%; the focus should be on variance reduction techniques.

  • Scenario B
    In this case, the training error is below the best human-level estimate, so there is not enough information to tell whether bias reduction or variance reduction is needed. It doesn't mean that the model cannot be improved; it means that the conventional ways of deciding between bias reduction and variance reduction no longer work.

There are many problems where machine learning significantly surpasses human-level performance, especially with structured data:

  • Online advertising
  • Product recommendations
  • Logistics (predicting transit time)
  • Loan approvals

Improving your model performance

There are two fundamental assumptions of supervised learning.
The first is that you can achieve a low avoidable bias, which means that the model fits the training set well.
The second is that the variance is low or acceptable, which means that training set performance generalizes well to the development and test sets.
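
One way to tie the two assumptions to the techniques mentioned in these notes; a sketch of a lookup, not an exhaustive list:

```python
techniques = {
    # assumption 1 violated: avoidable bias too high -> fit the training set better
    "bias reduction": ["train a bigger network", "train longer"],
    # assumption 2 violated: variance too high -> generalize better
    "variance reduction": ["regularization", "get a bigger training set"],
}

for focus, options in techniques.items():
    print(f"{focus}: {', '.join(options)}")
```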

Summary

[Figure: summary slide mapping avoidable bias and variance to their reduction techniques]

[Source] https://www.coursera.org/learn/machine-learning-projects
