**Abstract**:
==== Week 4 - Sam - Jan 29th ====
**Topic**:
**Abstract**:
- | |||
==== Week 5 - Shu - Feb 5th ====
**Paper**: Beyond R-CNN detection: Learning to Merge Contextual Attribute
**Abstract**: We will briefly review R-CNN [1], which essentially performs classification over thousands of objectness regions extracted from the image. We will see what it misses: the interactions between objects and the context within the image. When contextual information is used in addition to CNN features, performance improves [2]. This is also supported by a recent study [3], which compares action classification performance between state-of-the-art CV methods and a linear SVM trained on fMRI data. The paper's conclusions are interesting, but we emphasize the most "trivial" yet convincing one: the human brain exploits semantic inference for action classification, which is absent from CV methods. So exploiting contextual information is a reasonable step toward better detection. But how can we represent, extract, and utilize contextual information? To answer these questions, I will present two papers that are seemingly unrelated to them. The first [4] shows how to represent, learn, and use texture attributes to improve texture and material classification; the second [5] uses patch-match techniques for fine-grained chair detection. Based on these two papers, we will try to answer the questions: how can we represent, learn, and use contextual information to boost detection?
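For context, the region-wise pipeline described above can be sketched in a few lines. This is a minimal illustration, not R-CNN itself: the proposal and feature functions below are random stand-ins for selective search and a trained CNN, and the SVM weights are placeholders.

<code python>
# Minimal sketch of an R-CNN-style pipeline (illustrative only): propose
# regions, extract a feature per region, score each region independently.
# propose_regions and cnn_features are random stand-ins, NOT the real
# selective-search proposer or a trained network.
import numpy as np

def propose_regions(image, n=2000):
    """Stand-in objectness proposer: n random (x0, y0, x1, y1) boxes."""
    h, w = image.shape[:2]
    x0 = np.random.randint(0, w - 1, n)
    y0 = np.random.randint(0, h - 1, n)
    x1 = np.minimum(x0 + np.random.randint(1, w // 4, n), w)
    y1 = np.minimum(y0 + np.random.randint(1, h // 4, n), h)
    return np.stack([x0, y0, x1, y1], axis=1)

def cnn_features(crop, dim=4096):
    """Stand-in for a CNN feature extractor (e.g. an fc7 activation)."""
    return np.random.randn(dim)

def classify_regions(image, svm_w, svm_b):
    """Score every region with per-class linear SVMs. Each region is
    scored on its own; nothing models interactions between regions or
    the surrounding context, which is the gap the talk focuses on."""
    scores = [svm_w @ cnn_features(image[y0:y1, x0:x1]) + svm_b
              for x0, y0, x1, y1 in propose_regions(image)]
    return np.array(scores)  # shape: (n_regions, n_classes)

image = np.zeros((480, 640, 3))
svm_w, svm_b = np.random.randn(20, 4096), np.zeros(20)
print(classify_regions(image, svm_w, svm_b).shape)  # (2000, 20)
</code>

The point of the sketch is the loop structure: every region is scored in isolation, so no term couples a region to its neighbors or to the rest of the image.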
==== Week 6 - Minhaeng - Feb 12th ====
**Paper**: Knowing a good HOG filter when you see it: Efficient selection of filters for detection

**Abstract**: [[http://ttic.uchicago.edu/~smaji/papers/goodParts-eccv14.pdf|http://ttic.uchicago.edu/~smaji/papers/goodParts-eccv14.pdf]]
==== Week 7 - Phuc - Feb 19th @ 10AM ====
**Abstract**:
- | |||
==== Week 8 - Peiyun - Feb 26th ====
**Abstract**:
==== Week 10 - Greg - Mar 12th ====
**Paper**: