Medical Data Visualizer - Difference in results

I think I am done with this challenge, but when the code is executed on Replit I get a failure message:

FAIL: test_heat_map_values (test_module.HeatMapTestCase)
Traceback (most recent call last):
  File "/home/runner/boilerplate-medical-data-visualizer-1/", line 47, in test_heat_map_values
    self.assertEqual(actual, expected, "Expected different values in heat map.")
AssertionError: Lists differ: ['0.0[59 chars], '0.2', '0.0', '0.0', '0.0', '0.0', '0.0', '0[548 chars]0.1'] != ['0.0[59 chars], '0.3', '0.0', '0.0', '0.0', '0.0', '0.0', '0[548 chars]0.1']

First differing element 9:

Diff is 1023 characters long. Set self.maxDiff to None to see it. : Expected different values in heat map.

Ran 4 tests in 15.995s

FAILED (failures=1)

If I understand correctly, this means that the expected correlation values and the calculated ones differ. To calculate the correlation I use the method df.corr()
Why is that happening? Do I have to change the code, or is it normal?

The code on Replit

The project

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

# Import data
df = pd.read_csv("./medical_examination.csv")

# Add 'overweight' column
df['overweight'] = (df["weight"] / ((df["height"]) / 100)**2)

df["overweight"].mask(df["overweight"] <= 25, 0, inplace=True)
df["overweight"].mask(df["overweight"] > 25, 1, inplace=True)

# Normalize data by making 0 always good and 1 always bad. If the value of 'cholesterol' or 'gluc' is 1, make the value 0. If the value is more than 1, make the value 1.
df["cholesterol"].mask(df["cholesterol"] == 1, 0, inplace=True)
df["cholesterol"].mask(df["cholesterol"] > 1, 1, inplace=True)
df["gluc"].mask(df["gluc"] == 1, 0, inplace=True)
df["gluc"].mask(df["gluc"] > 1, 1, inplace=True)

# Draw Categorical Plot
def draw_cat_plot():
    # Create DataFrame for cat plot using `pd.melt` using just the values from 'cholesterol', 'gluc', 'smoke', 'alco', 'active', and 'overweight'.
    df_cat = pd.melt(df, id_vars=["cardio"], value_vars=["cholesterol", "gluc", "smoke", "alco", "active", "overweight"])

    # Group and reformat the data to split it by 'cardio'. Show the counts of each feature. You will have to rename one of the columns for the catplot to work correctly.
    df_cat = df_cat.sort_values(by=["cardio", "variable", "value"], inplace=False)

    # Draw the catplot with 'sns.catplot()'
    fig = sns.catplot(x="variable", hue="value", col="cardio", data=df_cat, kind="count")
    fig = fig.fig

    # Do not modify the next two lines
    return fig

# Draw Heat Map
def draw_heat_map():
    # Clean the data
    df_heat = df.copy()

    df_heat.drop(df_heat.loc[df_heat["ap_lo"] > df_heat["ap_hi"]].index, inplace=True)

    for col_name in ["height", "weight"]:
        a = df_heat.loc[df_heat[col_name] < df_heat[col_name].quantile(0.025)]
        b = df_heat.loc[df_heat[col_name] > df_heat[col_name].quantile(0.975)]
        df_heat.drop(a.index.union(b.index), inplace=True)

    # Calculate the correlation matrix
    corr = df_heat.corr(method="pearson")

    # Generate a mask for the upper triangle
    mask = np.zeros_like(corr)

    mask[np.triu_indices_from(mask)] = True

    # Set up the matplotlib figure
    fig, ax = plt.subplots(figsize=(10, 9), dpi=300)

    # Draw the heatmap with 'sns.heatmap()'
    ax = sns.heatmap(corr, mask=mask, square=False, linewidths=0.1, center=0, annot=True, vmin=-0.16, vmax=0.3, fmt=".1f")

    # Do not modify the next two lines
    return fig

Thank you

That part looks… weird? I don't know, I've never seen a selection like that.
Anyway, it's wrong because you need to make all the selections at the same time. Here you first filter based on height, which of course changes the remaining entries in df_heat and thus influences the quantile calculations for "weight".
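Something like this (just a minimal sketch with made-up numbers; the column names follow the project's dataset) computes every boundary from the original frame and then applies all of them in one go, so the "weight" quantiles are not affected by the "height" filter:

```python
import pandas as pd

# Toy frame standing in for df_heat (values are made up for illustration).
df_heat = pd.DataFrame({
    "ap_hi":  [120, 110, 140, 130],
    "ap_lo":  [80, 120, 90, 85],
    "height": [150, 165, 170, 190],
    "weight": [50, 70, 80, 120],
})

# Build every condition against the SAME unfiltered frame, then apply once.
keep = (
    (df_heat["ap_lo"] <= df_heat["ap_hi"])
    & (df_heat["height"] >= df_heat["height"].quantile(0.025))
    & (df_heat["height"] <= df_heat["height"].quantile(0.975))
    & (df_heat["weight"] >= df_heat["weight"].quantile(0.025))
    & (df_heat["weight"] <= df_heat["weight"].quantile(0.975))
)
df_heat = df_heat[keep]
```

The key point is that every `.quantile(...)` call here sees the full data, which is what the test expects.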


You are probably right… I thought the code would be more compact this way, but these lines do look very sus. I will try to filter the data in a different way, maybe check all the conditions together, and then see what happens. Thank you very much.
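For what it's worth, the order-dependence is easy to see on a toy series (made-up numbers, unrelated to the real dataset): filtering first shifts where the upper quantile lands.

```python
import pandas as pd

s = pd.Series([1, 2, 3, 100])

# 0.975 quantile computed over the full series
q_before = s.quantile(0.975)

# drop the low tail first, then recompute the same quantile
s_filtered = s[s > s.quantile(0.025)]
q_after = s_filtered.quantile(0.975)

# the two values differ, because the quantile is relative to what remains
print(q_before, q_after)
```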