Tell us what’s happening:
How do I convert NumPy arrays to lists in the app (Gitpod demos)?
I solved the project in a Jupyter notebook and it works fine there: the dictionary values come out as lists.
But in Gitpod, the dictionary values are not lists (they are NumPy arrays). Please help me solve this.
Thanks
Your code so far
Your browser information:
User Agent is: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36
Challenge Information:
Data Analysis with Python Projects - Mean-Variance-Standard Deviation Calculator
Can you figure out how to convert an array to a list?
If not, please share your code.
For this project, the function is supposed to return a dictionary whose values are lists. The input is a list of nine numbers that needs to be converted into a 3x3 matrix using NumPy.
def calculate(lst):
    try:
        arr_np = np.array(lst)
        arr = arr_np.reshape(3, 3)
        mean1 = []
        mean2 = []
        variance1 = []
        variance2 = []
        std1 = []
        std2 = []
        max1 = []
        max2 = []
        min1 = []
        min2 = []
        sum1 = []
        sum2 = []
        for i in range(3):
            mean1.append(arr[:, i].mean())
            mean2.append(arr[i].mean())
            variance1.append(arr[:, i].var())
            variance2.append(arr[i].var())
            std1.append(arr[:, i].std())
            std2.append(arr[i].std())
            max1.append(arr[:, i].max())
            max2.append(arr[i].max())
            min1.append(arr[:, i].min())
            min2.append(arr[i].min())
            sum1.append(arr[:, i].sum())
            sum2.append(arr[i].sum())
        axis1_mean = mean1
        axis1_var = variance1
        axis1_std = std1
        axis1_max = max1
        axis1_min = min1
        axis1_sum = sum1
        axis2_mean = mean2
        axis2_var = variance2
        axis2_std = std1
        axis2_max = max2
        axis2_min = min2
        axis2_sum = sum2
        flattend_mean = arr.mean()
        flattend_var = arr.var()
        flattend_std = arr.std()
        flattend_max = arr.max()
        flattend_min = arr.min()
        flattend_sum = arr.sum()
        keys = ('mean', 'variance', 'standard deviation', 'max', 'min', 'sum')
        values = [[axis1_mean, axis2_mean, flattend_mean],
                  [axis1_var, axis2_var, flattend_var],
                  [axis1_std, axis2_std, flattend_std],
                  [axis1_max, axis2_max, flattend_max],
                  [axis1_min, axis2_min, flattend_min],
                  [axis1_sum, axis2_sum, flattend_sum]]
        dict = {keys[i]: values[i] for i in range(len(keys))}
        print(dict)
    except ValueError:
        print("List must contain nine numbers.")

calculate([0, 1, 2, 3, 4, 5, 6, 7, 8])
Two things to fix first:
Your function needs to return a dictionary, not print() a dictionary.
You have a syntax error here:
shoeyb.rahimi09:
print (dict)
Also, you do have lists as required. You can test it like this:
print(type(dict['mean']))
>>> <class 'list'>
print(type(dict['mean'][0]))
>>> <class 'list'>
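To make the distinction concrete, here is a sketch (the dictionary literal is made up for illustration): the container can be a genuine list while the elements inside it are still NumPy scalars.

```python
import numpy as np

# Illustrative only: a list that holds NumPy scalars, not Python floats.
d = {'mean': [np.float64(3.0), np.float64(4.0), np.float64(5.0)]}

print(type(d['mean']))      # the container is a plain list
print(type(d['mean'][0]))   # but each element is a numpy.float64
```

So checking only `type(d['mean'])` can be misleading; check an individual element as well.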
I tested it, and it says its data type is exactly a list. But the output is NumPy.
I’m not sure what you mean. What makes you say that the output is a numpy array?
The values in the returned dictionary should be lists and not Numpy arrays.
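For what it’s worth, `ndarray.tolist()` converts both the array structure and the NumPy scalar elements to built-in Python types in one step (a minimal sketch, not the poster’s code):

```python
import numpy as np

arr = np.arange(9).reshape(3, 3)

# .tolist() returns nested Python lists of plain ints/floats,
# whereas list(arr.mean(axis=0)) would keep NumPy scalars inside.
col_means = arr.mean(axis=0).tolist()
print(col_means)            # [3.0, 4.0, 5.0]
print(type(col_means[0]))   # <class 'float'>
```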
I can’t replicate this with your code. This is on gitpod?
Try printing the function:
print(calculate([0,1,2,3,4,5,6,7,8]))
Also please share your full code, not just the function
Yes, it’s in Gitpod, VS Code.
the problem is this:
Create a function named calculate()
in mean_var_std.py
that uses Numpy to output the mean, variance, standard deviation, max, min, and sum of the rows, columns, and elements in a 3 x 3 matrix.
The input of the function should be a list containing 9 digits. The function should convert the list into a 3 x 3 Numpy array, and then return a dictionary containing the mean, variance, standard deviation, max, min, and sum along both axes and for the flattened matrix.
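For context, the per-column loop can also be written with NumPy’s `axis` keyword. A minimal sketch of one statistic under that spec (the function name `calculate_sketch` is hypothetical; the ordering columns-first matches the example output quoted later in this thread):

```python
import numpy as np

def calculate_sketch(lst):
    # Hypothetical sketch: axis=0 aggregates down each column,
    # axis=1 aggregates across each row.
    if len(lst) != 9:
        raise ValueError("List must contain nine numbers.")
    arr = np.array(lst).reshape(3, 3)
    return {
        'mean': [arr.mean(axis=0).tolist(),   # per-column means
                 arr.mean(axis=1).tolist(),   # per-row means
                 arr.mean().item()],          # mean of the flattened matrix
        # ...the same pattern applies to variance, std, max, min, sum
    }

print(calculate_sketch([0, 1, 2, 3, 4, 5, 6, 7, 8]))
```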
I’m familiar thanks.
Please share your full code, not just the function
This is the whole code. I wanted to write that function and I ran into a problem.
import numpy as np
def calculate(lst):
    try:
        arr_np = np.array(lst)
        arr = arr_np.reshape(3, 3)
        mean1 = []
        mean2 = []
        variance1 = []
        variance2 = []
        std1 = []
        std2 = []
        max1 = []
        max2 = []
        min1 = []
        min2 = []
        sum1 = []
        sum2 = []
        for i in range(3):
            mean1.append(arr[:, i].mean())
            mean2.append(arr[i].mean())
            variance1.append(arr[:, i].var())
            variance2.append(arr[i].var())
            std1.append(arr[:, i].std())
            std2.append(arr[i].std())
            max1.append(arr[:, i].max())
            max2.append(arr[i].max())
            min1.append(arr[:, i].min())
            min2.append(arr[i].min())
            sum1.append(arr[:, i].sum())
            sum2.append(arr[i].sum())
        axis1_mean = mean1
        axis1_var = variance1
        axis1_std = std1
        axis1_max = max1
        axis1_min = min1
        axis1_sum = sum1
        axis2_mean = mean2
        axis2_var = variance2
        axis2_std = std1
        axis2_max = max2
        axis2_min = min2
        axis2_sum = sum2
        flattend_mean = arr.mean()
        flattend_var = arr.var()
        flattend_std = arr.std()
        flattend_max = arr.max()
        flattend_min = arr.min()
        flattend_sum = arr.sum()
        keys = ('mean', 'variance', 'standard deviation', 'max', 'min', 'sum')
        values = [[axis1_mean, axis2_mean, flattend_mean],
                  [axis1_var, axis2_var, flattend_var],
                  [axis1_std, axis2_std, flattend_std],
                  [axis1_max, axis2_max, flattend_max],
                  [axis1_min, axis2_min, flattend_min],
                  [axis1_sum, axis2_sum, flattend_sum]]
        dict = {keys[i]: values[i] for i in range(len(keys))}
        print(type(dict['mean']))
        print(type(dict['mean'][0]))
        return dict
    except ValueError:
        print("List must contain nine numbers.")

calculate([0, 1, 2, 3, 4, 5, 6, 7, 8])
Try printing the function:
print(calculate([0,1,2,3,4,5,6,7,8]))
Not solved, still the same.
I don’t think the problem has anything to do with the format.
Here is the std dev from the example:
'standard deviation': [[2.449489742783178, 2.449489742783178, 2.449489742783178], [0.816496580927726, 0.816496580927726, 0.816496580927726], 2.581988897471611],
Please compare with your output:
'standard deviation': [[2.449489742783178, 2.449489742783178, 2.449489742783178], [2.449489742783178, 2.449489742783178, 2.449489742783178], 2.581988897471611]
Please check to see where they are different.
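To see why the two rows of the std dev differ, a quick check (a sketch, assuming the input [0..8]) of column versus row standard deviation:

```python
import numpy as np

arr = np.arange(9).reshape(3, 3)

# Columns like [0, 3, 6] are spread out (std ~2.449);
# rows like [0, 1, 2] are tightly packed (std ~0.816).
print(arr.std(axis=0).tolist())  # per-column std
print(arr.std(axis=1).tolist())  # per-row std
```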
What did the output look like though? (Please paste the output here rather than screenshots?)
Can you also paste the exact errors that you are getting from the tests? I’m not able to replicate what you are saying.
This is the output:
<class 'list'>
<class 'list'>
{'mean': [[np.float64(3.0), np.float64(4.0), np.float64(5.0)],
[np.float64(1.0), np.float64(4.0), np.float64(7.0)],
np.float64(4.0)],
'variance': [[np.float64(6.0), np.float64(6.0), np.float64(6.0)],
[np.float64(0.6666666666666666),
np.float64(0.6666666666666666),
np.float64(0.6666666666666666)],
np.float64(6.666666666666667)],
'standard deviation': [[np.float64(2.449489742783178),
np.float64(2.449489742783178),
np.float64(2.449489742783178)],
[np.float64(2.449489742783178),
np.float64(2.449489742783178),
np.float64(2.449489742783178)],
np.float64(2.581988897471611)],
'max': [[np.int64(6), np.int64(7), np.int64(8)],
[np.int64(2), np.int64(5), np.int64(8)],
np.int64(8)],
'min': [[np.int64(0), np.int64(1), np.int64(2)],
[np.int64(0), np.int64(3), np.int64(6)],
np.int64(0)],
'sum': [[np.int64(9), np.int64(12), np.int64(15)],
[np.int64(3), np.int64(12), np.int64(21)],
np.int64(36)]}
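Side note on the np.float64(...) wrappers in that printout: NumPy 2.x changed the repr of its scalar types to show the type explicitly, which is why the same code can print differently in a notebook (older NumPy) than in Gitpod. A minimal sketch of the difference:

```python
import numpy as np

x = np.arange(3).mean()   # a NumPy scalar, not a Python float
print(type(x))            # <class 'numpy.float64'>

# float() (or .item()) converts it to a plain Python float,
# whose repr is just the number.
print(float(x))           # 1.0
```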
Your std dev is wrong, please refer to my previous comment:
I don’t think the problem has anything to do with the format.
Here is the std dev from the example:
'standard deviation': [[2.449489742783178, 2.449489742783178, 2.449489742783178], [0.816496580927726, 0.816496580927726, 0.816496580927726], 2.581988897471611],
Please compare with your output:
'standard deviation': [[2.449489742783178, 2.449489742783178, 2.449489742783178], [2.449489742783178, 2.449489742783178, 2.449489742783178], 2.581988897471611]
Please check to see where they are different.