TL;DR Build and train a Deep Neural Network for binary classification in TensorFlow 2. Use the model to predict the presence of heart disease from patient data.
Machine Learning is already used to solve real-world problems in many areas, and medicine is no exception. While controversial, multiple models have been proposed and used with some success, including notable projects by Google and others.
Today, we’re going to take a look at one specific area - heart disease prediction.
About 610,000 people die of heart disease in the United States every year – that’s 1 in every 4 deaths. Heart disease is the leading cause of death for both men and women. More than half of the deaths due to heart disease in 2009 were in men. - Heart Disease Facts & Statistics | cdc.gov
Please note that the model presented here is very limited and in no way applicable to real-world situations. Our dataset is extremely small, and the conclusions drawn here are in no way generalizable. Heart disease prediction is a vastly more complex problem than depicted in this writing.
Complete source code in Google Colaboratory Notebook
Here is the plan:
- Explore the patient data
- Preprocess the data
- Create a Neural Network in TensorFlow 2
- Train the model
- Predict heart disease from patient data
Patient Data
Our data comes from this dataset. It contains 303 patient records. Each record contains 14 attributes:
Label | Description |
---|---|
age | age in years |
sex | (1 = male; 0 = female) |
cp | chest pain type (0 = typical angina; 1 = atypical angina; 2 = non-anginal pain; 3 = asymptomatic) |
trestbps | resting blood pressure (in mm Hg on admission to the hospital) |
chol | serum cholesterol in mg/dl |
fbs | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) |
restecg | resting electrocardiographic results |
thalach | maximum heart rate achieved |
exang | exercise induced angina (1 = yes; 0 = no) |
oldpeak | ST depression induced by exercise relative to rest |
slope | the slope of the peak exercise ST segment |
ca | number of major vessels (0-3) colored by fluoroscopy |
thal | (3 = normal; 6 = fixed defect; 7 = reversible defect) |
target | (0 = no heart disease; 1 = heart disease presence) |
How many of the patient records indicate heart disease?
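Here is one way to check (a minimal sketch; the CSV file name is a hypothetical local path, and `data` is the DataFrame used throughout the rest of the post):

```python
import pandas as pd

# hypothetical local copy of the dataset
data = pd.read_csv('heart.csv')

# how many records per class? (0 = no disease, 1 = disease)
print(data.target.value_counts())
```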
The classes look fairly balanced, considering the small number of rows.
Let’s have a look at how heart disease affects different genders:
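A sketch of how such a plot could be produced with seaborn (the library choice is my assumption, not necessarily what the original figures used):

```python
import seaborn as sns
import matplotlib.pyplot as plt

# disease counts per sex (1 = male, 0 = female)
sns.countplot(x='sex', hue='target', data=data)
plt.show()
```

The same call with `x='cp'` produces the chest-pain breakdown discussed below.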
Here is a Pearson correlation heatmap between the features:
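A sketch for the heatmap, assuming the categorical columns are still numeric at this point (it reuses the seaborn/matplotlib imports from the previous snippet):

```python
# Pearson correlation between all features
plt.figure(figsize=(10, 8))
sns.heatmap(data.corr(), annot=True, fmt='.2f', cmap='coolwarm')
plt.show()
```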
How disease presence is affected by `thalach` ("Maximum Heart Rate") vs `age`:
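A scatter plot along those two axes (again a sketch, not the original figure):

```python
# maximum heart rate vs. age, colored by disease presence
sns.scatterplot(x='age', y='thalach', hue='target', data=data)
plt.show()
```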
Looks like maximum heart rate can be very predictive of the presence of disease, regardless of age.
How different types of chest pain affect the presence of heart disease:
Having chest pain might not be indicative of heart disease.
Data Preprocessing
Our data contains a mixture of categorical and numerical features. Let's use TensorFlow's Feature Columns.
Feature columns let you bridge the raw data in your dataset to the format your model expects as input. They also let you separate the model-building process from the data preprocessing. Let's have a look:
```python
import tensorflow as tf

feature_columns = []

# numeric columns - used as-is
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'ca']:
  feature_columns.append(tf.feature_column.numeric_column(header))

# bucketized column - put age into discrete ranges
age = tf.feature_column.numeric_column("age")
age_buckets = tf.feature_column.bucketized_column(
  age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)

# indicator (one-hot) columns for the categorical features
data["thal"] = data["thal"].apply(str)
thal = tf.feature_column.categorical_column_with_vocabulary_list(
  'thal', ['3', '6', '7'])
thal_one_hot = tf.feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)

data["sex"] = data["sex"].apply(str)
sex = tf.feature_column.categorical_column_with_vocabulary_list(
  'sex', ['0', '1'])
sex_one_hot = tf.feature_column.indicator_column(sex)
feature_columns.append(sex_one_hot)

data["cp"] = data["cp"].apply(str)
cp = tf.feature_column.categorical_column_with_vocabulary_list(
  'cp', ['0', '1', '2', '3'])
cp_one_hot = tf.feature_column.indicator_column(cp)
feature_columns.append(cp_one_hot)

data["slope"] = data["slope"].apply(str)
slope = tf.feature_column.categorical_column_with_vocabulary_list(
  'slope', ['0', '1', '2'])
slope_one_hot = tf.feature_column.indicator_column(slope)
feature_columns.append(slope_one_hot)
```
Apart from the numerical features, we're putting the patient `age` into discrete ranges (buckets). Furthermore, `thal`, `sex`, `cp`, and `slope` are categorical, so we convert them to strings and one-hot encode them.
Next up, let's turn the pandas DataFrame into a TensorFlow Dataset:
```python
def create_dataset(dataframe, batch_size=32):
  dataframe = dataframe.copy()
  labels = dataframe.pop('target')
  return tf.data.Dataset.from_tensor_slices((dict(dataframe), labels)) \
    .shuffle(buffer_size=len(dataframe)) \
    .batch(batch_size)
```
And split the data into training and testing:
```python
from sklearn.model_selection import train_test_split

# RANDOM_SEED is assumed to be defined earlier in the notebook
train, test = train_test_split(
  data,
  test_size=0.2,
  random_state=RANDOM_SEED
)

train_ds = create_dataset(train)
test_ds = create_dataset(test)
```
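As a quick sanity check (my addition, not part of the original pipeline), we can run one batch through a `DenseFeatures` layer to see exactly what the model will receive:

```python
# take one raw batch and apply the feature columns to it
example_batch, _ = next(iter(train_ds))
demo_layer = tf.keras.layers.DenseFeatures(feature_columns)
print(demo_layer(example_batch).shape)  # (batch_size, total width of transformed features)
```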
The Model
Let's build a binary classifier using a Deep Neural Network in TensorFlow:
```python
model = tf.keras.models.Sequential([
  tf.keras.layers.DenseFeatures(feature_columns=feature_columns),
  tf.keras.layers.Dense(units=128, activation='relu'),
  tf.keras.layers.Dropout(rate=0.2),
  tf.keras.layers.Dense(units=128, activation='relu'),
  # a single sigmoid unit outputs the probability of heart disease
  tf.keras.layers.Dense(units=1, activation='sigmoid')
])
```
Our model uses the feature columns we created in the preprocessing step. Note that we're no longer required to specify the input layer size; the `DenseFeatures` layer infers it from the feature columns.

We also use a Dropout layer between the two dense layers. The output layer is a single neuron with a sigmoid activation, which outputs the probability of heart disease presence; with binary cross-entropy as the loss, one output unit is all a binary classifier needs.
Training
Our loss function is binary cross-entropy, defined as:

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\Big[\,y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\,\Big]$$

where $y_i$ is the binary indicator (the true label) for observation $i$ and $\hat{y}_i$ is the predicted probability.
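To build some intuition: for a single observation with true label $y_i = 1$, a confident correct prediction $\hat{y}_i = 0.9$ contributes $-\log(0.9) \approx 0.105$ to the loss, while a confident wrong prediction $\hat{y}_i = 0.1$ contributes $-\log(0.1) \approx 2.303$. Confident mistakes are punished heavily.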
```python
model.compile(
  optimizer='adam',
  loss='binary_crossentropy',
  metrics=['accuracy']
)

history = model.fit(
  train_ds,
  validation_data=test_ds,
  epochs=100,
  use_multiprocessing=True
)
```
Here is a sample of the training process:
```
Epoch 95/100
0s 42ms/step - loss: 0.3018 - accuracy: 0.8430 - val_loss: 0.4012 - val_accuracy: 0.8689
Epoch 96/100
0s 42ms/step - loss: 0.2882 - accuracy: 0.8547 - val_loss: 0.3436 - val_accuracy: 0.8689
Epoch 97/100
0s 42ms/step - loss: 0.2889 - accuracy: 0.8732 - val_loss: 0.3368 - val_accuracy: 0.8689
Epoch 98/100
0s 42ms/step - loss: 0.2964 - accuracy: 0.8386 - val_loss: 0.3537 - val_accuracy: 0.8770
Epoch 99/100
0s 43ms/step - loss: 0.3062 - accuracy: 0.8282 - val_loss: 0.4110 - val_accuracy: 0.8607
Epoch 100/100
0s 43ms/step - loss: 0.2685 - accuracy: 0.8821 - val_loss: 0.3669 - val_accuracy: 0.8852
```
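We can also plot the metrics recorded in `history` (a sketch using matplotlib; the key names match the log output above):

```python
import matplotlib.pyplot as plt

# training vs. validation accuracy per epoch
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```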
Accuracy on the test set (keep in mind the same set served as validation data during training, so this estimate is likely optimistic):
```python
model.evaluate(test_ds)
```

```
0s 24ms/step - loss: 0.3669 - accuracy: 0.8852
[0.3669000566005707, 0.8852459]
```
So, we have ~88% accuracy on the test set.
Predicting Heart Disease
Now that we have a model that performs reasonably well on the test set, let's try to predict heart disease based on the features in our dataset.
```python
predictions = tf.round(model.predict(test_ds)).numpy().flatten()
```
Since we're interested in making binary decisions, we round the predicted probability to the nearest class (0 or 1).
```python
from sklearn.metrics import classification_report

# y_test holds the labels of the test split
# (defined elsewhere in the notebook, e.g. y_test = test['target'])
print(classification_report(y_test.values, predictions))
```
```
              precision    recall  f1-score   support

           0       0.59      0.66      0.62        29
           1       0.66      0.59      0.62        32

   micro avg       0.62      0.62      0.62        61
   macro avg       0.62      0.62      0.62        61
weighted avg       0.63      0.62      0.62        61
```
Regardless of the accuracy, you can see that the precision, recall, and f1-score of our model are not that high. A likely reason for the gap: `create_dataset()` shuffles its data on every iteration, so the order of predictions made on `test_ds` does not match the order of `y_test`. Let's take a look at the confusion matrix:
Our model looks a bit confused. Can you improve on it?
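One concrete fix, sketched under the assumption that `test` is the held-out DataFrame from the split above: evaluate on a dataset that preserves row order, so predictions and labels stay aligned:

```python
# an evaluation dataset without shuffling, preserving row order
eval_ds = tf.data.Dataset.from_tensor_slices(
  (dict(test.drop(columns=['target'])), test['target'])).batch(32)

aligned_predictions = tf.round(model.predict(eval_ds)).numpy().flatten()
print(classification_report(test['target'].values, aligned_predictions))
```

With the order preserved, the classification report should agree much more closely with the accuracy reported by `model.evaluate`.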
Conclusion
Complete source code in Google Colaboratory Notebook
You did it! You built a binary classifier using a Deep Neural Network with TensorFlow 2 and used it to predict heart disease from patient data.

Next, we'll have a look at what TensorFlow 2 has in store for us when applied to computer vision.