Fraud detection with TensorFlow and Explainable AI

Anomaly detection is a good candidate for machine learning, since it's often hard to write rule-based statements that identify outliers in data. In this post I'll look at building a model for fraud detection on financial data. If you're thinking *groan, that sounds boring*, don't go away just yet! Fraud detection presents some interesting challenges in ML.

Five things I've learned in five years at Google

Today is my five-year Googleversary! I thought I'd take a few minutes to share some things I've learned over the past five years about developer advocacy, working in tech, and a few other random topics.

Getting hyped about automated hyperparameter tuning

Learn how to use custom containers on Cloud AI Platform to train an XGBoost model with automated hyperparameter tuning.

Using explainability frameworks to interpret financial models

In this post we'll walk through interpreting an XGBoost model trained on a mortgage dataset using the What-If Tool, SHAP, and Cloud AI Platform.

Which factors contribute to my sleep quality?

I collected my own sleep data using an Oura ring and analyzed it with BigQuery.

Interpreting bag of words models with SHAP

Learn how to build a bag of words text classification model and interpret the model's output with SHAP.

Preventing bias in ML models, with code

With all the tools democratizing machine learning these days, it's easier than ever to build high-accuracy models. But even if you build a model yourself using an open source framework like TensorFlow or scikit-learn, it's still mostly a black box: it's hard to know exactly why your model made the prediction it did. As model builders, we're responsible for the predictions our models generate and being able to explain...

Hello World, I have my own blog!

I'm starting my own blog!