Ashisane / Week 1

Neural Network from Scratch

Draw a digit below — watch pure math classify it.


How It Works

This project builds a neural network from first principles using MNIST (60,000 train, 10,000 test) and serves the learned weights directly in-browser for real-time digit recognition.

01

Input Representation

Each MNIST image is 28x28 grayscale, flattened into a 784-length vector with values normalized to 0-1. This vector is the raw numeric signal the network learns from.
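The flatten-and-normalize step can be sketched in a few lines of NumPy. The random `image` here is a stand-in for a real MNIST sample; the 28x28 shape and 0-1 scaling come from the description above.

```python
import numpy as np

# Stand-in for one MNIST sample: a 28x28 grayscale image with
# uint8 pixel values in 0-255.
image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Flatten to a 784-length vector and scale pixel values into 0-1.
x = image.astype(np.float32).reshape(784) / 255.0
```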

02

Feature Extraction (784 -> 128)

The first weight matrix projects the 784 input pixels into 128 hidden units. ReLU keeps positive activations and suppresses weak or noisy responses; over the course of training, the initially random weights become digit-sensitive feature detectors.
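A minimal sketch of this layer: the 784 -> 128 shapes are from the text, while the Gaussian initialization and the input vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix and bias for the 784 -> 128 projection.
W1 = rng.normal(0.0, 0.01, size=(128, 784))
b1 = np.zeros(128)

x = rng.random(784)  # stand-in for a normalized 784-length input

# Linear projection followed by ReLU: negative responses are zeroed.
z1 = W1 @ x + b1
h = np.maximum(z1, 0)
```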

03

Classification Head (128 -> 10)

Hidden activations are mapped to 10 logits, one for each digit class (0-9). Softmax converts the logits into probabilities, and the class with the highest probability is the prediction.

04

Learning Signal and Backpropagation

Error is computed as actual - prediction (the one-hot label minus the softmax output), giving direction to weight updates. Output-layer updates are scaled by hidden activations and the learning rate, then propagated back to the input-layer weights, so every update reflects each weight's contribution to the final error.
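One training step on a single example can be sketched as below. The layer sizes match the text; the initialization, learning rate, and input are assumptions, and the ReLU-derivative gating in the backward pass is the standard detail that makes the hidden-layer update correct.

```python
import numpy as np

rng = np.random.default_rng(0)
lr = 0.1  # assumed learning rate

# Shapes matching the 784 -> 128 -> 10 network.
W1 = rng.normal(0.0, 0.01, (128, 784)); b1 = np.zeros(128)
W2 = rng.normal(0.0, 0.01, (10, 128));  b2 = np.zeros(10)

x = rng.random(784)                      # stand-in input vector
target = np.zeros(10); target[3] = 1.0   # one-hot label for digit 3

# Forward pass.
z1 = W1 @ x + b1
h = np.maximum(z1, 0)
logits = W2 @ h + b2
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

# Output error: actual - prediction, the (negative) gradient of
# cross-entropy with softmax w.r.t. the logits.
err_out = target - probs

# Propagate back through W2, gating by the ReLU derivative (z1 > 0).
err_hidden = (W2.T @ err_out) * (z1 > 0)

# Updates scaled by activations and the learning rate.
W2 += lr * np.outer(err_out, h);      b2 += lr * err_out
W1 += lr * np.outer(err_hidden, x);   b1 += lr * err_hidden
```

After this step, the network assigns slightly more probability to the labelled class for the same input.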

05

Browser Inference Pipeline

Canvas input is cropped, scaled to fit a 20x20 box, centered by center-of-mass in a 28x28 frame, and normalized before inference. Reproducing MNIST's own preprocessing keeps hand-drawn input close to the training distribution, which is what lets the deployed model hold on to its ~95% test accuracy in practice.
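A NumPy sketch of that pipeline, assuming the canvas arrives as a 2D float array with 0 = background and 1 = ink (the actual deployment runs in JavaScript; the crop / 20x20 scale / center-of-mass steps are the ones named above, while the nearest-neighbour resize is an illustrative simplification):

```python
import numpy as np

def preprocess(canvas):
    """Crop, scale to 20x20, center by mass in 28x28, flatten to 784."""
    # 1. Crop to the bounding box of the drawn strokes.
    ys, xs = np.nonzero(canvas > 0)
    digit = canvas[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # 2. Scale the crop to fit inside a 20x20 box, keeping aspect ratio
    #    (nearest-neighbour sampling, for illustration).
    h, w = digit.shape
    scale = 20 / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    rows = (np.arange(nh) * h / nh).astype(int)
    cols = (np.arange(nw) * w / nw).astype(int)
    small = digit[rows][:, cols]

    # 3. Paste into a 28x28 frame, shifted so the center of mass lands
    #    on the frame's center, as in MNIST itself.
    frame = np.zeros((28, 28))
    total = small.sum()
    cy = (small * np.arange(nh)[:, None]).sum() / total
    cx = (small * np.arange(nw)[None, :]).sum() / total
    top = min(max(int(round(14 - cy)), 0), 28 - nh)
    left = min(max(int(round(14 - cx)), 0), 28 - nw)
    frame[top:top + nh, left:left + nw] = small

    # 4. Flatten; values are already in the 0-1 range.
    return frame.reshape(784)
```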

784 Input Pixels
128 Hidden Neurons
10 Output Classes
95.3% Test Accuracy