Statistical Recognition

This lecture, "Statistical Recognition," introduces discriminative methods, generative methods, the steps of statistical recognition, and labeling with games. We hope the material meets your learning and working needs.

Statistical Recognition
Slides adapted from Fei-Fei Li, Rob Fergus, Antonio Torralba, and Kristen Grauman.

Object categorization: the statistical viewpoint
Given an image, we ask whether it shows a zebra by comparing the posteriors p(zebra | image) and p(no zebra | image). Bayes' rule relates the posterior to the likelihood and the prior:

    p(zebra | image) = p(image | zebra) p(zebra) / p(image)

The MAP (maximum a posteriori) decision picks the class with the larger posterior. Discriminative methods model the posterior directly; generative methods model the likelihood and the prior, from which the posterior follows by Bayes' rule.

Discriminative methods
Direct modeling of the posterior p(zebra | image): for example, learning a decision boundary that separates zebra images from non-zebra images.

Generative methods
Model the class-conditional likelihoods p(image | zebra) and p(image | no zebra). [Figure: example images receive low, middle, or high likelihood under each class model.]

Generative vs. discriminative learning
Generative learning estimates the class densities p(image | class); discriminative learning estimates the posterior probabilities p(class | image) directly.

Generative vs. discriminative methods
Generative methods:
+ Can sample from the model / compute how probable any given model instance is
+ Can be learned using images from just a single category
– Sometimes we don't need to model the likelihood when all we want is to make a decision
Discriminative methods:
+ Efficient
+ Often produce better classification rates
– Require both positive and negative training data
– Can be hard to interpret

Steps for statistical recognition
1. Representation: specify the model for an object category (bag of features, part-based, global, etc.)
2. Learning: given a training set, find the parameters of the model (generative vs. discriminative)
3. Recognition: apply the model to a new test image

Generalization
How well does a learned model generalize from the data it was trained on to a new test set?
Underfitting: the model is too "simple" to represent all the relevant class characteristics; high training error and high test error.
Overfitting: the model is too "complex" and fits irrelevant characteristics (noise) in the data; low training error and high test error.
Occam's razor: given two models that represent the data equally well, the simpler one should be preferred.
Images in the training set must be annotated with the "correct answer" that the model is expected to produce.
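To make the MAP decision above concrete, here is a minimal sketch in Python. The function name map_decision and all numeric values are hypothetical, chosen for illustration; they do not come from the lecture.

```python
# Minimal sketch of a MAP decision via Bayes' rule.
# All likelihood and prior values below are made-up numbers
# for illustration, not values from the lecture.

def map_decision(likelihoods, priors):
    """Return the class with the highest posterior p(class | image).

    posterior is proportional to likelihood * prior; the evidence
    p(image) is the same for every class, so we can skip
    normalizing before taking the argmax.
    """
    unnormalized = {c: likelihoods[c] * priors[c] for c in likelihoods}
    return max(unnormalized, key=unnormalized.get)

# Hypothetical model outputs for a single test image.
likelihoods = {"zebra": 0.05, "no zebra": 0.01}   # p(image | class)
priors      = {"zebra": 0.10, "no zebra": 0.90}   # p(class)

print(map_decision(likelihoods, priors))  # "no zebra" (0.005 vs. 0.009)
```

Note how the prior can overturn the likelihood: the image is more probable under the zebra model, but the much larger prior on "no zebra" wins the MAP decision.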
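The generative/discriminative contrast can also be seen in code. The following sketch assumes scikit-learn and synthetic 2-D data, neither of which the lecture prescribes: Gaussian naive Bayes stands in for a generative classifier (it fits class densities and priors), and logistic regression for a discriminative one (it fits the posterior directly).

```python
# Illustrative comparison: a generative classifier (Gaussian naive
# Bayes) vs. a discriminative one (logistic regression).
# scikit-learn and the toy 2-D blobs are assumptions for this sketch.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=400, centers=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

generative = GaussianNB().fit(X_train, y_train)           # learns p(x | c) and p(c)
discriminative = LogisticRegression().fit(X_train, y_train)  # learns p(c | x)

print("GaussianNB accuracy:        ", generative.score(X_test, y_test))
print("LogisticRegression accuracy:", discriminative.score(X_test, y_test))

# Only the generative model captures p(x | class), so in principle it
# can score or sample new instances; logistic regression yields only
# the decision boundary and posterior probabilities.
```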
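Finally, the underfitting/overfitting pattern from the generalization discussion can be reproduced with a tiny experiment. This is a sketch under arbitrary assumptions (NumPy, a sine-shaped target, the noise level, and the polynomial degrees are all my choices, not the slides'):

```python
# Sketch of under- vs. overfitting: fit polynomials of increasing
# degree to noisy 1-D data and compare train/test error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.2, 40)   # true curve plus noise
x_tr, y_tr, x_te, y_te = x[:20], y[:20], x[20:], y[20:]

for degree in (1, 4, 12):
    coeffs = np.polyfit(x_tr, y_tr, degree)  # may warn: high degrees are ill-conditioned
    mse = lambda xs, ys: np.mean((np.polyval(coeffs, xs) - ys) ** 2)
    print(f"degree {degree:2d}: train MSE {mse(x_tr, y_tr):.3f}, "
          f"test MSE {mse(x_te, y_te):.3f}")

# Typical pattern: degree 1 underfits (both errors high); degree 12
# overfits (train error near zero, test error large). When two models
# fit comparably, Occam's razor favors the simpler one.
```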
