To all faculty and students:
Our university will hold a graduate "Lingxi" academic lecture by Arthur Gretton on July 5, 2017. The relevant details are announced as follows:
1. About the Lecture
Speaker: Arthur Gretton
Time: 9:00 a.m. (start time), Wednesday, July 5, 2017
地 點(diǎn): 長安校區(qū) 89院之間報告廳
Topic: Learning Interpretable Features to Compare Distributions
內(nèi)容簡介:I will present adaptive two-sample tests with optimized testing power and interpretable features. These will be based on the maximum mean discrepancy (MMD), a difference in the expectations of features under the two distributions being tested. Useful features are defined as being those which contribute a large divergence between distributions with high confidence. These interpretable tests can further be used in benchmarking and troubleshooting generative models, in a goodness-of-fit setting. For instance, we may detect subtle differences in the distribution of model outputs and real hand-written digits which humans are unable to find (for instance, small imbalances in the proportions of certain digits, or minor distortions that are implausible in normal handwriting).
2.歡迎各學(xué)院師生前來聽報告。報告會期間請關(guān)閉手機(jī)或?qū)⑹謾C(jī)調(diào)至靜音模式。
Graduate Work Department of the Party Committee
School of Electronics and Information
June 30, 2017
About the Speaker
Arthur Gretton is an Associate Professor at the Gatsby Computational Neuroscience Unit, part of the Centre for Computational Statistics and Machine Learning at UCL. His research focuses on using kernel methods to reveal properties and relations in data. A first application is measuring distances between probability distributions. These distances can be used to determine the strength of dependence, for example in measuring how strongly two bodies of text in different languages are related; to test for similarities between two datasets, which can be used in attribute matching for databases (that is, automatically finding which fields of two databases correspond); and to test for conditional dependence, which is useful in detecting redundant variables that carry no additional predictive information given the variables already observed. He is also working on applications of kernel methods to inference in graphical models, where the relations between variables are learned directly from training data.
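To illustrate how a distance between distributions can be used to measure the strength of dependence, as described above, the sketch below compares a paired sample (x, y) against a copy in which the pairing has been broken by permuting y, so that x and y are forced to be (approximately) independent. This is only an illustration under assumed choices (Gaussian RBF kernel, fixed bandwidth, a single permutation), not the speaker's method.

# Minimal illustrative sketch: a larger discrepancy between the joint sample
# and its pairing-broken copy suggests stronger dependence between x and y.
import numpy as np

def rbf_mmd2(A, B, bandwidth=1.0):
    # Biased MMD^2 between samples A and B under a Gaussian RBF kernel
    def k(U, V):
        sq = np.sum(U**2, 1)[:, None] + np.sum(V**2, 1)[None, :] - 2 * U @ V.T
        return np.exp(-sq / (2 * bandwidth**2))
    return k(A, A).mean() + k(B, B).mean() - 2 * k(A, B).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=(400, 1))
    y = x + 0.3 * rng.normal(size=(400, 1))          # y depends on x
    joint = np.hstack([x, y])                        # paired sample from the joint
    shuffled = np.hstack([x, rng.permutation(y)])    # pairing broken: ~independent
    print("discrepancy (joint vs. shuffled):", rbf_mmd2(joint, shuffled))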