# Recent Posts

# On “Better Exploiting Latent Variables in Text Modeling”

I’ve been working on latent variable language models for some time, and intend to make them the topic of my PhD. So when Google Scholar recommended “Better Exploiting Latent Variables in Text Modeling”, I was naturally excited to see that this line of work has continued beyond Bowman et al.’s paper on VAE language models. Of course, since then, there have been multiple improvements on the original model. More recently, Yoon Kim from Harvard has published particularly interesting papers on this topic.

read more
# Can Chinese Rooms Think?

There’s a tendency as a machine learning or CS researcher to get into philosophical debates about whether machines will ever be able to think like humans. This argument goes back so far that the people who started the field had to grapple with it. It’s also fun to think about, especially with sci-fi always portraying AI-vs-human, world-ending apocalypse showdowns, with humans prevailing thanks to love/friendship/humanity.

However, there’s a tendency for people in such a debate to wind up talking past each other.

# Computing Log Normal for Isotropic Gaussians

Consider a matrix $\mathbf{X}$ of shape $(n, d)$ whose rows are the data points $\mathbf{x}_i$. The matrix $\mathbf{M}$, of shape $(k, d)$, stacks the means $\boldsymbol{\mu}_j$ of $k$ isotropic Gaussian components. The task is to compute the log probability of each of the $n$ data points under each of the $k$ components.

```python
import theano
import theano.tensor as T
import numpy as np
import time

X = T.matrix('X')
M = T.matrix('M')  # rows are the component means mu_j
```
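
The excerpt cuts off before the actual computation, but the quantity it describes can be sketched in plain NumPy. For an isotropic Gaussian, $\log \mathcal{N}(\mathbf{x}_i; \boldsymbol{\mu}_j, \sigma^2 \mathbf{I}) = -\tfrac{d}{2}\log(2\pi\sigma^2) - \tfrac{\|\mathbf{x}_i - \boldsymbol{\mu}_j\|^2}{2\sigma^2}$, which broadcasting computes for all $(i, j)$ pairs at once. The shapes, seed, and shared variance below are illustrative assumptions, not values from the post:

```python
import numpy as np

# Assumed sizes for illustration: n data points of dimension d, k components.
n, d, k = 5, 3, 2
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))   # rows are data points x_i
M = rng.normal(size=(k, d))   # rows are component means mu_j
sigma2 = 0.5                  # shared isotropic variance sigma^2 (assumed)

# Pairwise squared distances ||x_i - mu_j||^2 via broadcasting -> shape (n, k)
sq_dist = ((X[:, None, :] - M[None, :, :]) ** 2).sum(axis=-1)

# log N(x_i; mu_j, sigma^2 I) = -d/2 * log(2*pi*sigma^2) - ||x_i - mu_j||^2 / (2*sigma^2)
log_prob = -0.5 * d * np.log(2 * np.pi * sigma2) - sq_dist / (2 * sigma2)
print(log_prob.shape)  # (5, 2), i.e. one log density per (data point, component) pair
```

The same broadcasting trick carries over to Theano symbolically (e.g. `X.dimshuffle(0, 'x', 1) - M.dimshuffle('x', 0, 1)`), which is presumably where the excerpt was headed.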

read more