Using explainability frameworks to interpret financial models

In this post, we'll walk through how to interpret an XGBoost model trained on a mortgage dataset using the What-If Tool, SHAP, and Cloud AI Platform.

Which factors contribute to my sleep quality?

I collected my own sleep data using an Oura ring and analyzed it with BigQuery.

Interpreting bag of words models with SHAP

I recently gave a talk at Google Next 2019 with my teammate Yufeng on how to go from building a machine learning model with AutoML to building your own custom models, deployed on Cloud AI Platform. Here’s an architecture diagram of the full demo:

Preventing bias in ML models, with code

With all the tools democratizing machine learning these days, it's easier than ever to build high-accuracy machine learning models. But even if you build a model yourself using an open source framework like TensorFlow or scikit-learn, it's still mostly a black box: it's hard to know exactly why your model made the prediction it did. As model builders, we're responsible for the predictions generated by our models and being able to explain...

Hello World, I have my own blog!

Hello there! I have previously blogged on Medium, but decided to host my own using Jekyll and Firebase Hosting (ain't nobody got time for metered paywalls). Stay tuned for more posts, and in the meantime give me a shout on Twitter if there's anything you'd like to see on here.