Hello! I'm Jason.

I created this site to write about things that I find interesting: probability & Bayesian inference, data visualization, puzzles & games, finance, and books.


I've learned that there are many automatic differentiation libraries in the Python ecosystem. Often these libraries are also machine learning libraries, where automatic differentiation serves as a means to an end, for example in optimizing the parameters of a neural network. However, the autograd library might be one of the purest, "simplest" (relatively speaking) options out there. Its goal is to efficiently compute derivatives of numpy code, and its API stays as close to numpy as possible. This means it's easy to get started right away if you're comfortable using numpy. In particular, autograd claims to be able to differentiate as many times as one likes, and I thought a great way to test this would be to apply the Taylor series approximation to some interesting functions.
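
To give a flavor of what that looks like, here is a minimal sketch of the idea: repeatedly applying autograd's `grad` to build a Taylor polynomial. The function (`sin`), expansion point, and degree below are just illustrative choices, not the examples from the full post.

```python
import autograd.numpy as np   # thin numpy wrapper that autograd can trace
from autograd import grad
from math import factorial

def taylor_approximation(f, center, degree):
    """Return a function approximating f by its Taylor polynomial at `center`."""
    # Build f, f', f'', ... by repeatedly differentiating the previous function.
    derivatives = [f]
    for _ in range(degree):
        derivatives.append(grad(derivatives[-1]))

    # Evaluate each derivative once at the expansion point.
    coefficients = [d(center) for d in derivatives]

    def approximation(x):
        return sum(c / factorial(k) * (x - center) ** k
                   for k, c in enumerate(coefficients))
    return approximation

approx_sin = taylor_approximation(lambda x: np.sin(x), 0.0, 7)
print(approx_sin(1.0), np.sin(1.0))  # the two values should be close
```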

As a follow-up to my prior article on Black-Scholes in PyTorch, I wanted to explore more complex applications of automatic differentiation. As I showed before, automatic differentiation can be used to calculate the sensitivities, or "greeks", of a stock option, even if we use Monte Carlo techniques to calculate the option price. Many exotic options can only be priced using Monte Carlo techniques, so automatic differentiation may be able to provide more accurate sensitivities in less time than traditional methods.
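
As a rough sketch of the technique (not the post's exact code, and with made-up parameter values): simulate terminal prices in PyTorch, average the discounted payoff to get a Monte Carlo price, and let one backward pass produce the delta.

```python
import math
import torch

torch.manual_seed(0)

spot = torch.tensor(100.0, requires_grad=True)   # current stock price
strike, rate, vol, maturity = 110.0, 0.03, 0.20, 1.0
n_paths = 100_000

# Simulate terminal prices under geometric Brownian motion.
z = torch.randn(n_paths)
terminal = spot * torch.exp((rate - 0.5 * vol ** 2) * maturity
                            + vol * maturity ** 0.5 * z)

# Discounted average payoff is the Monte Carlo price estimate.
payoff = torch.clamp(terminal - strike, min=0.0)
price = math.exp(-rate * maturity) * payoff.mean()

# One backward pass gives the pathwise estimate of delta.
price.backward()
print(f"price ~ {price.item():.3f}, delta ~ {spot.grad.item():.3f}")
```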

I've been experimenting with several machine learning frameworks lately, including TensorFlow, PyTorch, and Chainer. I'm fascinated by the concept of automatic differentiation. It's incredible to me that these libraries can calculate millions of partial derivatives of virtually any function with only one extra pass through the code. Automatic differentiation is critical for deep learning models, but I wanted to see how it could be applied to value financial derivatives.
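
That "one extra pass" claim is easy to see in a toy PyTorch snippet (an assumed setup, not from the original article): a single `backward()` call fills in a partial derivative for every parameter at once.

```python
import torch

n_params = 1_000_000
weights = torch.randn(n_params, requires_grad=True)
x = torch.randn(n_params)

# An arbitrary scalar function of a million parameters.
loss = torch.sin(weights * x).sum()

loss.backward()             # one reverse pass through the computation
print(weights.grad.shape)   # torch.Size([1000000]) -- one partial derivative per weight
```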

I wrote yesterday about tracking my steps with a Garmin watch. Perhaps to keep me motivated and active, Garmin provides a daily step goal that moves up or down based on my activity. I've always been curious about how this algorithm works, but I couldn't find any resources that described it. Let's see if I can reverse engineer it instead.

Without a doubt, getting a puppy changes your life for the better. But I wanted to quantify this somehow. I used Bayesian inference to identify whether I logged more steps in the days since our puppy arrived.
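
One simple way to pose that question (a sketch with made-up data, not the model or numbers from the post): treat daily step counts before and after the puppy as Poisson draws with Gamma priors on each rate, then compare posterior samples of the two rates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up example data: daily step counts for two 60-day periods.
steps_before = rng.poisson(8_000, size=60)
steps_after = rng.poisson(9_500, size=60)

def posterior_rate_samples(counts, prior_shape=1.0, prior_rate=1e-3, n=50_000):
    """Gamma posterior for a Poisson rate given observed daily counts."""
    shape = prior_shape + counts.sum()
    rate = prior_rate + len(counts)
    return rng.gamma(shape, 1.0 / rate, size=n)

before = posterior_rate_samples(steps_before)
after = posterior_rate_samples(steps_after)

print("P(more steps after puppy) ~", (after > before).mean())
```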

I will be presenting along with my former colleague, Zachary Brown, at the Valuation Actuary Symposium later this month in Washington, D.C. Our topic is "Improving Actuarial Communication". My portion of the talk is focused on improving visual communication and the benefits that new tools like Python and Jupyter Notebooks can bring to traditional actuarial work. My full slides, presented in an interactive format using reveal.js, can be found within.

I transitioned from a role as an actuarial consultant into the world of fintech a few years ago. The actuarial profession is relatively small and extremely specialized, but I believe actuarial methods and insights can play a significant role in the burgeoning fintech field. I wrote an article for The Actuary magazine summarizing the fintech landscape and the role that actuaries should be playing in it.

It turns out you can identify a doctored coin with a fairly high degree of certainty... It just takes lots and lots of trials.
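
A quick sketch of why the trial count matters (the bias and flip counts here are assumptions, not the puzzle's actual numbers): with a 50/50 prior between "fair" and "doctored", the posterior probability of a slightly doctored coin only becomes convincing once the flips pile up.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.51       # hypothetical slight bias of the doctored coin
suspected_p = 0.51  # the bias we test against a fair coin

for n_flips in (100, 1_000, 10_000, 100_000):
    heads = rng.binomial(n_flips, true_p)
    tails = n_flips - heads
    # Log Bayes factor of "doctored" vs. "fair" under a binomial likelihood.
    log_bf = heads * np.log(suspected_p / 0.5) + tails * np.log((1 - suspected_p) / 0.5)
    posterior_doctored = 1.0 / (1.0 + np.exp(-log_bf))   # with equal prior odds
    print(f"{n_flips:>7} flips -> P(doctored) ~ {posterior_doctored:.3f}")
```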

This week's riddler reminds me of a cross between sudoku or kakuro puzzles and some good old-fashioned geometry... you might call them geometric sudoku puzzles!

Just kidding, no stones in this week's riddler puzzle. But before you build your campfire, what shapes can you make with the sticks?