My current project involves working with a class of fairly niche and interesting neural networks that aren’t usually seen on a first pass through deep learning. I thought I’d write up my reading and research and post it.
An annotated reading list on modern computational methods for Bayesian inference: Markov chain Monte Carlo (MCMC), variational inference (VI), and some other, more experimental, methods.
A talk I gave about a recent project to model hate speech on Reddit. In this blog post, I describe the thought process behind the project and the stumbling blocks I encountered along the way.
This blog post summarizes some of the literature on probabilistic and Bayesian matrix factorization methods, keeping an eye out for applications to one specific task in NLP: text clustering.
In this blog post I hope to show that there is more to Bayesianism than just MCMC sampling and suffering, by demonstrating a Bayesian approach to a classic reinforcement learning problem: the multi-armed bandit.
This is a compilation of notes, tips, tricks and recipes for Bayesian modelling that I’ve collected from everywhere: papers, documentation, and peppering my more experienced colleagues with questions.
A recent project on trying to model hate speech on Reddit through text clustering — from ‘nimble navigators’ to ‘swamp creatures’, ‘spezzes’ to the ‘Trumpire’.
Latent Dirichlet allocation is a well-known and popular model in machine learning and natural language processing, but it really sucks sometimes. Here’s why.
What is learning using privileged information (LUPI), how do I do it, and why should I care? A brief introduction to LUPI and SVM+.
Everything that you wanted to know (and more!) about linear discriminant analysis (LDA) — how it works, why it works, and how to use it.
I interned at Quantopian this summer, where I contributed some exciting features to their open-source portfolio analytics engine, pyfolio. Check it out!
This is the first post of what will (hopefully) be a cool blog. Hope you like it!