This is the wiki for the Sabuncu Lab's journal club (or paper reading group). You can find information on the meeting format, schedule, papers and presenters.

We have an email list: sablab-club@googlegroups.com

If you want to be added to this list, ask a member you know.

**SCHEDULE (Fall 2017)**

We meet Wednesdays at 1 pm in Rhodes 380.

We start on September 6.

**AREAS OF INTEREST**

Machine learning, computer vision, image processing, statistical modeling and inference, biomedical image/data analysis, …

**FORMAT**

At each meeting, a presenter will be responsible for leading the discussion of a paper. The paper will be chosen by the presenter (possibly from the paper stack below) and posted here about a week before the corresponding meeting. The presenter will not use slides but can rely on the whiteboard to illustrate ideas, equations, etc. Participants should come to the meeting with the paper either printed out (double-sided!) or available on a portable device (laptop, iPad, etc.). Participants are strongly encouraged to read the paper beforehand, spending at least 1-2 hours to gain a basic understanding of it.

Here are some guidelines for the presenter, who should come to the meeting prepared to answer each of these questions.

1) What problem is the paper addressing? Is it an interesting mathematical problem? At a high level, is it a classical mathematical problem? Are there classical solutions (e.g., something you can find on Wikipedia) that you can think of?

2) What other state-of-the-art methods/algorithms (say, published in the last 3-5 years) address the same or a similar problem? Do the authors run benchmarking experiments (i.e., empirical comparisons)?

3) What application do the authors choose (if any)? Is it an interesting application? What was lacking in existing solutions? Were they too slow? Did they not solve the problem exactly?

4) What’s wrong with the proposed method? What are the acknowledged/unacknowledged weaknesses?

5) How can the proposed method be improved? What are the natural future directions of research? Are there new applications you can think of?

6) What would you do differently if you approached this problem?

7) What is the core innovation and contribution of this paper? Is it a mathematical derivation? If so, can you point to it and understand the steps? Was there a far-reaching theoretical result/insight? If so, can you summarize? Was there a novel empirical finding? Is there an accompanying open software package that others can pick up and use on their own data/problem?

**FUTURE MEETINGS**

**September 6, 2017**

Mert will lead the discussion.

Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." arXiv preprint arXiv:1703.10593 (2017).

We will skip **September 13**, as Mert, Evan, and Zhilu are at MICCAI.

**September 20, 2017**

Evan will lead the discussion.

Zhao, Mingmin, et al. "Learning Sleep Stages from Radio Signals: A Conditional Adversarial Architecture." International Conference on Machine Learning. 2017.

**PAPER STACK**

Johansson, Fredrik D., Uri Shalit, and David Sontag. "Learning representations for counterfactual inference." arXiv preprint arXiv:1605.03661 (2016).

Friedman, Jerome H. "Greedy function approximation: a gradient boosting machine." Annals of Statistics (2001): 1189–1232.

Mhaskar, Hrushikesh, Qianli Liao, and Tomaso Poggio. "When and Why Are Deep Networks Better than Shallow Ones?" 2017.

Hardt, Moritz, Tengyu Ma, and Benjamin Recht. "Gradient Descent Learns Linear Dynamical Systems." arXiv preprint arXiv:1609.05191 (2016).

Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. "Image-to-image translation with conditional adversarial networks." arXiv preprint arXiv:1611.07004 (2016).

Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning." arXiv preprint arXiv:1506.02142 (2015).

Zhou, Jian, and Olga G. Troyanskaya. "Predicting effects of noncoding variants with deep learning-based sequence model." Nature methods 12.10 (2015): 931-934.


Johnson, Matthew, David K. Duvenaud, Alex Wiltschko, Ryan P. Adams, and Sandeep R. Datta. "Composing graphical models with neural networks for structured representations and fast inference." In Advances in neural information processing systems, pp. 2946-2954. 2016.

**PAST MEETINGS**

**March 29, 2017**

Zhilu will lead the discussion.

Gal, Yarin, Riashat Islam, and Zoubin Ghahramani. "Deep Bayesian Active Learning with Image Data." arXiv preprint arXiv:1703.02910 (2017).

**April 5, 2017**

No meeting as it's spring break.

**April 12, 2017**

No meeting as Mert is in NYC.

**April 19, 2017**

Evan will lead the discussion.

Rasmussen, Carl Edward, and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.

**April 26, 2017**

Mert will lead the discussion.

Chapters 1 and 2 of Andrew Wilson's PhD thesis.

**May 3, 2017**

Evan will lead the discussion.

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. "Generative adversarial nets." In Advances in Neural Information Processing Systems, pp. 2672-2680. 2014.

**May 31, 2017**

Zhilu will lead the discussion.

Doersch, Carl. "Tutorial on variational autoencoders." arXiv preprint arXiv:1606.05908 (2016).

**June 7, 2017**

Mohammad will lead the discussion.

Jaderberg, Max, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. "Spatial Transformer Networks." NIPS 2015.

**June 14, 2017**

Evan will lead the discussion.

Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. "Understanding deep learning requires rethinking generalization." arXiv preprint arXiv:1611.03530 (2016).