What is a Library?

A library is essentially a collection of pre-written code that you can use to streamline and clean up your program.

Libraries can help simplify complex programs.

APIs are specifications for how the procedures in a library behave and how they can be used.

Documentation for an API/library is necessary for understanding the behaviors the API/library provides and how to use them.
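For example, Python’s built-in math library exposes procedures like sqrt whose documented API tells you what arguments they take and what they return, so you can use them without knowing how they are implemented. A minimal sketch:

import math

# The documentation for math.sqrt says it takes a non-negative number
# and returns its square root as a float
print(math.sqrt(16))      # 4.0
print(math.factorial(5))  # 120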

Libraries that we will go over: Requests, Pillow, Pandas, NumPy, scikit-learn, TensorFlow, and Matplotlib.

Required Installations

Please run the following commands in your VS Code terminal in order to continue the lesson:

  • pip install numpy
  • pip install matplotlib
  • pip install scikit-learn
  • pip install pillow
  • pip install pandas
  • pip install tensorflow
  • pip install requests
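If you prefer, the same libraries can be installed with a single command:

pip install numpy matplotlib scikit-learn pillow pandas tensorflow requests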

Images Using the Requests and Pillow Libraries

‘Requests’ is focused on handling HTTP requests and web data, while ‘Pillow’ is designed for opening, manipulating, and saving images. It’s common to see them used together in tasks where an image is fetched by an HTTP request using Requests and then processed with Pillow.

Here’s an example:

import requests
from PIL import Image
from io import BytesIO

# Step 1: Download an image using Requests
image_url = "https://th-thumbnailer.cdn-si-edu.com/bgmkh2ypz03IkiRR50I-UMaqUQc=/1000x750/filters:no_upscale():focal(1061x707:1062x708)/https://tf-cmsv2-smithsonianmag-media.s3.amazonaws.com/filer_public/55/95/55958815-3a8a-4032-ac7a-ff8c8ec8898a/gettyimages-1067956982.jpg"  # Replace with the actual URL of the image you want to download
response = requests.get(image_url)

if response.status_code == 200:
    # Step 2: Process the downloaded image using Pillow
    image_data = BytesIO(response.content)  # Create an in-memory binary stream from the response content
    img = Image.open(image_data)  # Open the image using Pillow

    # Perform image processing tasks here, like resizing or applying filters
    img = img.resize((100, 300))  # Resize the image to a width of 100 and a height of 300 pixels (replace with your desired amounts)

    # Step 3: Save the processed image using Pillow
    img.save("processed_image.jpg")  # Save the processed image to a file

    print("Image downloaded, processed, and saved.")
else:
    print(f"Failed to download image. Status code: {response.status_code}")

Image downloaded, processed, and saved.

In this code, we use the Requests library to download an image from a URL. If the download is successful, the response carries HTTP status code 200, and from there we create an in-memory binary stream (BytesIO) from the response content. We then use the Pillow library to open the image, make any necessary changes, and save the processed image to a file.

Here’s a step-by-step tutorial on how we wrote this code:

1) Import the necessary libraries: Requests, Pillow, and io.

2) Download the image: use the Requests library to send an HTTP GET request to the URL, then check the response status code to make sure the download goes through (status code 200).

3) Create the stream: if the download is successful, create an in-memory binary stream (BytesIO) from the response content.

4) Process the image: use the Pillow library to open the image from the binary stream, then change the photo to your desired preference (e.g., size).

5) Save the processed image: save it to a file using Pillow, choosing a filename and file format.
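Resizing is just one kind of processing; here is a hedged sketch of a few other common Pillow operations (assuming the processed_image.jpg file saved by the example above exists):

from PIL import Image, ImageFilter

img = Image.open("processed_image.jpg")  # open the image saved earlier

gray = img.convert("L")                  # convert the image to grayscale
blurred = img.filter(ImageFilter.BLUR)   # apply a simple blur filter
rotated = img.rotate(90)                 # rotate 90 degrees counterclockwise

gray.save("gray.jpg")
blurred.save("blurred.jpg")
rotated.save("rotated.jpg")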

Hack 1

Write a Python code that accomplishes the following tasks:

Downloads an image from a specified URL using the Requests library. Processes the downloaded image (like resizing) using the Pillow library. Saves the processed image to a file.

#Code here
import requests
from PIL import Image
from io import BytesIO

# Step 1: Download an image using Requests
image_url = "https://th-thumbnailer.cdn-si-edu.com/bgmkh2ypz03IkiRR50I-UMaqUQc=/1000x750/filters:no_upscale():focal(1061x707:1062x708)/https://tf-cmsv2-smithsonianmag-media.s3.amazonaws.com/filer_public/55/95/55958815-3a8a-4032-ac7a-ff8c8ec8898a/gettyimages-1067956982.jpg"  # Replace with the actual URL of the image you want to download
response = requests.get(image_url)

if response.status_code == 200:
    # Step 2: Process the downloaded image using Pillow
    image_data = BytesIO(response.content)  # Create an in-memory binary stream from the response content
    img = Image.open(image_data)  # Open the image using Pillow

    # Perform image processing tasks here, like resizing or applying filters
    img = img.resize((100, 300))  # Resize the image to a width of 100 and a height of 300 pixels (replace with your desired amounts)

    # Step 3: Save the processed image using Pillow
    img.save("cat.jpg")  # Save the processed image to a file

    print("Image downloaded, processed, and saved.")
else:
    print(f"Failed to download image. Status code: {response.status_code}")

Image downloaded, processed, and saved.

Math Operations With Python Libraries

NumPy (Numerical Python) is used for numerical and scientific computing. It provides tools for handling large sets of numbers, such as data tables and arrays, and makes mathematical tasks easier and more efficient.
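For instance, NumPy arrays support element-wise math without writing explicit loops; a small sketch:

import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])

print(a + b)     # element-wise addition: [11 22 33 44]
print(a * 2)     # scale every element: [2 4 6 8]
print(a.mean())  # average of the array: 2.5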

The Matplotlib library lets you create visual representations of your data (graphs, charts, etc.).

Example Sine Graph

Uses the NumPy and Matplotlib libraries

import numpy as np
import matplotlib.pyplot as plt

# Generate sample data with NumPy
x = np.linspace(0, 2 * np.pi, 100) 
# Create an array of values from 0 to 2*pi
# 100 is included to have 100 points distributed between 0 and 2π to make graph smoother
y = np.sin(x)
# Compute the sine of each value

# Create a simple line plot using Matplotlib
plt.plot(x, y, label='Sine Function', color='blue', linestyle='-')  # Create the plot
plt.title('Sine Function')  # Set the title
plt.xlabel('x')  # Label for the x-axis
plt.ylabel('sin(x)')  # Label for the y-axis
plt.grid(True)  # Display a grid
plt.legend()  # Show the legend
plt.show()  # Display the plot

[Output: line plot of the sine function from 0 to 2π]

Hack 2

Using the data from the numpy library, create a visual graph using different matplotlib functions.

import numpy as np
import matplotlib.pyplot as plt

# Generate data for two lines
x = np.linspace(0, 10, 50)  # Create an array of values from 0 to 10
y1 = 2 * x + 1  # Set of data points

# Create and display a plot using Matplotlib
# your code here
plt.plot(x, y1, label='f(x)=2x+1', color='purple', linestyle='-')  # Create the plot
plt.title('f(x)=2x+1')  # Set the title
plt.xlabel('x')  # Label for the x-axis
plt.ylabel('2x+1')  # Label for the y-axis
plt.grid(True)  # Display a grid
plt.legend()  # Show the legend
plt.show()  # Display the plot

[Output: line plot of f(x) = 2x + 1]

TensorFlow is used in deep learning and neural networks, while scikit-learn is used for typical machine learning tasks. When used together, they can tackle machine learning projects end to end. In the code below, TensorFlow is used for model creation and training, and scikit-learn is used for data preprocessing and model evaluation.

pip install tensorflow scikit-learn

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
from tensorflow.keras import layers
# Generate synthetic data
np.random.seed(0)
X = np.random.rand(100, 1)  # Feature
y = 2 * X + 1 + 0.1 * np.random.randn(100, 1)  # Target variable with noise
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Create a simple linear regression model using TensorFlow and Keras
model = keras.Sequential([
    layers.Input(shape=(1,)),
    layers.Dense(1)
])
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=2)
# Make predictions on the test set
y_pred = model.predict(X_test)
# Calculate the Mean Squared Error on the test set
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.4f}")
Epoch 1/100
3/3 - 0s - loss: 7.5697 - 353ms/epoch - 118ms/step
Epoch 2/100
3/3 - 0s - loss: 7.5474 - 13ms/epoch - 4ms/step
Epoch 3/100
3/3 - 0s - loss: 7.5239 - 12ms/epoch - 4ms/step
...
Epoch 99/100
3/3 - 0s - loss: 5.5607 - 10ms/epoch - 3ms/step
Epoch 100/100
3/3 - 0s - loss: 5.5423 - 8ms/epoch - 3ms/step
1/1 [==============================] - 0s 119ms/step
Mean Squared Error: 5.8828

The steady decrease in the loss metric shows the model’s predictions improving as the number of training epochs increases; the time metrics (ms/epoch and ms/step) also drop sharply after the first epoch, since setup costs are only paid once.
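One way to see this trend directly is to capture the history object that model.fit returns and plot the loss per epoch with Matplotlib; a sketch, assuming the model and training data from the example above:

import matplotlib.pyplot as plt

history = model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=0)

plt.plot(history.history['loss'], label='Training loss')  # loss recorded after each epoch
plt.title('Loss per Epoch')
plt.xlabel('Epoch')
plt.ylabel('Mean Squared Error')
plt.legend()
plt.show()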

Hack 3

Fill in the missing code to match the custom dataset.

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
from tensorflow.keras import layers
# Generate a custom dataset (replace this with your data loading code)
# Synthetic data: House prices based on number of bedrooms and square footage
np.random.seed(0)
num_samples = 100
bedrooms = np.random.randint(1, 5, num_samples)
square_footage = np.random.randint(1000, 2500, num_samples)
house_prices = 100000 + 50000 * bedrooms + 100 * square_footage + 10000 * np.random.randn(num_samples)
# Combine features (bedrooms and square footage) into one array
X = np.column_stack((bedrooms, square_footage))
y = house_prices.reshape(-1, 1)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Create a regression model using TensorFlow and Keras
model = keras.Sequential([
    layers.Input(shape=(X_train.shape[1],)),  # Input shape adjusted to the number of features
    layers.Dense(1)  # Output layer for regression
])
# Compile the model for regression
model.compile(optimizer='adam', loss='mean_squared_error')  # Using MSE as the loss function
# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=2)
# Make predictions on the test set
y_pred = model.predict(X_test)
# Calculate the Mean Squared Error on the test set
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.4f}")
Epoch 1/100
3/3 - 0s - loss: 171448238080.0000 - 286ms/epoch - 95ms/step
Epoch 2/100
3/3 - 0s - loss: 171448254464.0000 - 4ms/epoch - 1ms/step
Epoch 3/100
3/3 - 0s - loss: 171448238080.0000 - 4ms/epoch - 1ms/step
...
Epoch 99/100
3/3 - 0s - loss: 171447992320.0000 - 4ms/epoch - 1ms/step
Epoch 100/100
3/3 - 0s - loss: 171447975936.0000 - 6ms/epoch - 2ms/step
1/1 [==============================] - 0s 48ms/step
Mean Squared Error: 158481886870.1379
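The loss here barely moves because the target values (house prices in the hundreds of thousands) are not standardized, so the optimizer’s default learning rate makes almost no progress at that scale. One hedged fix is to scale y the same way the features are scaled and invert the transform when reporting the error; a sketch using the variables from the hack above:

# Standardize the target as well (assumes y_train, y_test, model, and
# StandardScaler from the code above)
y_scaler = StandardScaler()
y_train_scaled = y_scaler.fit_transform(y_train)
y_test_scaled = y_scaler.transform(y_test)

model.fit(X_train, y_train_scaled, epochs=100, batch_size=32, verbose=0)

# Predict in scaled space, then convert back to dollars before scoring
y_pred = y_scaler.inverse_transform(model.predict(X_test))
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.4f}")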

HOMEWORK 1

Create a GPA calculator using the Pandas and Matplotlib libraries and make: 1) a dataframe, 2) a specified dictionary, and 3) a print function that outputs the final GPA.

Extra points can be earned with creativity.

# your code here
import pandas as pd

# Dictionary to correspond letter grade to GPA
APlettertoGPA = {
    "A": 5,
    "B": 4,
    "C": 3,
    "D": 2,
    "F": 1
}

# Create dataframe with rows for period, class, teacher, and letter grade
schoolClasses = pd.DataFrame(
    {
        "Period": [x+1 for x in range(5)], # 0, 1, 2, 3, 4 --> 1, 2, 3, 4, 5
        "Class": pd.Categorical(["AP Statistics", "AP Computer Science Principles", "AP Biology", "AP United States History", "AP English Language and Composition"]), # list of classes
        "Teacher": pd.Categorical(["Mr. Edelstein", "Mr. Mortenson", "Mrs. Pedraza", "Mr. Swanson", "Mrs. Dafoe"]), # list of teachers
        "Letter Grade": pd.Categorical(["D", "C", "B", "A", "B"]), # arbitrarily picked letters
    }
)

# Function that finds GPA from letter grade and outputs it
def outputGPA():
    GPAtotal = 0 # used later to calculate average GPA
    for index, row in schoolClasses.iterrows():
        class_name = row['Class'] # sets each class name in the list in the row 'Class' as class_name
        letter_grade = str(row['Letter Grade']) # sets each letter in the list in the row 'Letter Grade' as letter_grade
        GPA = APlettertoGPA[letter_grade] # finds the GPA corresponding to the letter grade using the dictionary defined earlier
        print(f"The GPA for {class_name} is {GPA}") # prints class name and GPA
        GPAtotal += GPA # adds to total GPA count
    avgGPA = GPAtotal/5 # finds average GPA
    print(f"The average GPA is {avgGPA}") # outputs average GPA
outputGPA()

# Various table formats
schoolClasses # basic one
# schoolClasses.index
# schoolClasses.dtypes
# schoolClasses.to_numpy()
# schoolClasses.describe()
# schoolClasses.T # this looks nice
# schoolClasses.sort_index(axis=1, ascending=False)
The GPA for AP Statistics is 2
The GPA for AP Computer Science Principles is 3
The GPA for AP Biology is 4
The GPA for AP United States History is 5
The GPA for AP English Language and Composition is 4
The average GPA is 3.6
   Period                                Class        Teacher Letter Grade
0       1                        AP Statistics  Mr. Edelstein            D
1       2       AP Computer Science Principles  Mr. Mortenson            C
2       3                           AP Biology   Mrs. Pedraza            B
3       4             AP United States History    Mr. Swanson            A
4       5  AP English Language and Composition     Mrs. Dafoe            B
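As a follow-up, the same average can be computed without an explicit loop by mapping the letter grades through the dictionary; a short sketch using the dataframe defined above:

# Vectorized alternative: convert the categorical grades to strings,
# map them to GPA values through the dictionary, then average
gpas = schoolClasses["Letter Grade"].astype(str).map(APlettertoGPA)
print(f"The average GPA is {gpas.mean()}")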

HOMEWORK 2

Import and use the “random” library to generate 50 different points from the range 0-100, then display the randomized data using a scatter plot.

Extra points can be earned with creativity.

# your code here
import random
import numpy as np
import matplotlib.pyplot as plt

# Generate data for the scatter plot
# Set of data points
x = np.linspace(0, 100, 50)  # Create an array of values from 0 to 100 with 50 distributed points on the scatterplot

setofValues = [] # define empty list of values for y
count = 0 # set variable count to make a list of 50
while count < 50:
    count += 1
    setofValues.append(random.randint(0, 100)) # adds random number from 0 to 100 into setofValues
y2 = setofValues # sets y2 equal to setofValues

# Create and display a plot using Matplotlib
plt.scatter(x, y2, color='green', s=18, alpha=0.5)  # Create the plot, s = 18 sets size, alpha = 0.5 sets some transparency
plt.title('Scatterplot of Random Data')  # Set the title
plt.xlabel('Value from 0 to 100')  # Label for the x-axis
plt.ylabel('Random Value from 0 to 100')  # Label for the y-axis
plt.grid(True)  # Display a grid
plt.show()  # Display the plot

[Output: scatter plot of 50 random points]
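Note that random.randint can repeat values, so the 50 y-values above are not guaranteed to be different. If distinct values are required, random.sample is one way to draw 50 unique numbers; a sketch:

import random
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 100, 50)
y = random.sample(range(0, 101), 50)  # 50 unique values from 0 to 100

plt.scatter(x, y, color='green', s=18, alpha=0.5)
plt.title('Scatterplot of 50 Distinct Random Values')
plt.xlabel('Value from 0 to 100')
plt.ylabel('Unique Random Value from 0 to 100')
plt.grid(True)
plt.show()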