Lab01: Handwritten Digit Recognition with AI
Example Project Path: ai_labs/Lab01_HandwrittenDigits
Introduction
In this lab, you will train a deep learning model to recognize handwritten digits using Jupyter Notebook (inside Docker) on Windows 11, perform real-time digit recognition on Windows, and deploy the trained model to a Raspberry Pi 5 with a Hailo8 AI HAT and camera.
Expected Outcomes
- Understand the workflow for training and deploying an AI model.
 - Use Docker and Conda for Python development environments.
 - Run live digit recognition from webcam/camera on Windows and Raspberry Pi 5.
 - Experience hardware-accelerated AI inference with the Hailo8 module.
 
Hardware Requirements
- Windows 11 PC (Intel/AMD)
 - Raspberry Pi 5 (with Hailo8 AI HAT)
 - Raspberry Pi camera module
 - Optional: SSD for Pi 5
 
Part I: Train the AI Model on Windows 11
Step 1: Download and Prepare the Lab Files
- Download the provided lab zip file (Lab01_HandwrittenDigits.zip) and extract it to:
C:\ai_labs\Lab01_HandwrittenDigits
- Ensure your folder structure is:
Lab01_HandwrittenDigits/
|-- docker/
|   |-- Dockerfile
|   `-- docker-compose.yml
|-- notebooks/
|   `-- train_model.ipynb
|-- src/
|   |-- camera_single_digit.py
|   `-- camera_multiple_digits.py
|-- saved_model/
|   `-- (model files will be saved here)
|-- data/
|   `-- (optional: for data storage)
`-- requirements.txt
Step 2. Create and Launch a Docker Container for Jupyter
- Open a terminal and change to the docker folder:
cd C:\ai_labs\Lab01_HandwrittenDigits\docker
- Build and run the Docker container:
docker compose up --build
- Open Jupyter Lab in your browser at:
http://127.0.0.1:8888/lab
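The docker/Dockerfile and docker-compose.yml are already included in the lab zip, so you do not need to write them. For orientation only, a minimal Jupyter compose file typically looks something like the sketch below; the service name, mount path, and token are illustrative and the provided file may differ:

```yaml
services:
  jupyter:
    build: .                    # build from the Dockerfile in this folder
    ports:
      - "8888:8888"             # expose Jupyter Lab on localhost:8888
    volumes:
      - ..:/home/jovyan/work    # mount the lab folder into the container
    environment:
      - JUPYTER_TOKEN=lab01     # illustrative token; check the provided file
```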
Step 3. Train and Export the Model
- In Jupyter Lab, open notebooks/train_model.ipynb.
 - Follow the instructions to complete any missing code as directed by your instructor.
 - Click Run All to execute all cells.
 - After completion, check that handwritten_digit_model.h5 appears in the saved_model folder.
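The notebook itself is instructor-provided, so its exact contents are not repeated here. As a point of reference, the MNIST-style input preparation such a model typically expects can be sketched in plain NumPy (the function name is illustrative):

```python
import numpy as np

def prepare_batch(images):
    """Scale uint8 grayscale images of shape (N, 28, 28) to the float
    tensor a Keras MNIST model usually expects: (N, 28, 28, 1) in [0, 1]."""
    x = np.asarray(images, dtype=np.float32) / 255.0  # 0-255 -> 0.0-1.0
    return x[..., np.newaxis]                         # add channel axis

# Example: a batch of two blank 28x28 images
batch = prepare_batch(np.zeros((2, 28, 28), dtype=np.uint8))
print(batch.shape)  # (2, 28, 28, 1)
```

The same normalization must be applied at inference time (Parts II and III), or predictions will be unreliable.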
 
Part II: Convert Model and Test on Windows 11
Step 1. Prepare Conda Environment for Model Conversion
- Open Anaconda PowerShell Prompt.
- Change to the project directory:
cd C:\ai_labs\Lab01_HandwrittenDigits
- Create and activate the environment:
conda create --prefix .\.conda python=3.10
conda activate .\.conda
- Install required packages:
conda install -c conda-forge tensorflow numpy=1.26 opencv
Step 2. Convert the Keras Model (.h5) to TensorFlow Lite (.tflite) for Raspberry Pi 5
- Change to the src folder:
cd src
- Create a script named convert_h5_to_tflite.py with the following content:
# convert_h5_to_tflite.py
import tensorflow as tf

# Load the trained Keras model
model = tf.keras.models.load_model('../saved_model/handwritten_digit_model.h5')

# Convert the model to TensorFlow Lite format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the converted model alongside the original
with open('../saved_model/handwritten_digit_model.tflite', 'wb') as f:
    f.write(tflite_model)

print("Conversion complete: handwritten_digit_model.tflite")
- Run the conversion:
python convert_h5_to_tflite.py
- Confirm that handwritten_digit_model.tflite is created in saved_model.
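Beyond checking that the file exists, you can sanity-check that it really is a TensorFlow Lite model: TFLite files are FlatBuffers carrying the file identifier "TFL3" at byte offset 4. A stdlib-only check (the function name is illustrative):

```python
def looks_like_tflite(path):
    """Return True if the file carries the TFLite FlatBuffer
    identifier 'TFL3' at byte offset 4."""
    with open(path, 'rb') as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b'TFL3'

# Usage: looks_like_tflite('../saved_model/handwritten_digit_model.tflite')
```

This catches the common mistake of copying a truncated or wrong file to the Pi in Part III.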
 
Step 3. Test Model Inference on Windows
- Run single-digit recognition:
python camera_single_digit.py
Hold a single digit up to your webcam. Press q to quit.
- Run multiple-digit recognition:
python camera_multiple_digits.py
Show multiple digits. Press q to quit.
- Deactivate the Conda environment:
conda deactivate
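camera_multiple_digits.py is provided with the lab, so you only need to run it. Conceptually, splitting a binarized frame into individual digits can be done by projecting ink onto the columns and cutting at empty gaps; the provided script may segment differently (e.g. with OpenCV contours). A NumPy-only sketch of the idea, with illustrative names:

```python
import numpy as np

def split_digits(binary):
    """Given a 2-D array of 0/1 ink pixels, return (start, end) column
    ranges, one per contiguous run of inked columns (digit candidate)."""
    has_ink = binary.sum(axis=0) > 0           # which columns contain ink
    spans, start = [], None
    for col, ink in enumerate(has_ink):
        if ink and start is None:
            start = col                        # a digit run begins
        elif not ink and start is not None:
            spans.append((start, col))         # run ended at previous column
            start = None
    if start is not None:                      # run reaches the right edge
        spans.append((start, binary.shape[1]))
    return spans

# Two separated blobs -> two column spans
img = np.zeros((5, 10), dtype=int)
img[:, 1:3] = 1
img[:, 6:9] = 1
print(split_digits(img))  # [(1, 3), (6, 9)]
```

Each span can then be cropped, resized to 28x28, and classified one digit at a time.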
Part III: Deploy and Run on Raspberry Pi 5
Prerequisites
- Docker and Conda installed on Pi 5
 - Hailo8 AI HAT and Pi camera attached
 - Hailo software installed (sudo apt install hailo-all)
 - Hailo Python SDK/RT installed under /usr/lib/python3/dist-packages
 - Pi camera tested and functional
 
Step 1. Create Project Directory
- In your terminal:
mkdir -p /ssd/ai_labs/Lab01_HandwrittenDigits
cd /ssd/ai_labs/Lab01_HandwrittenDigits
Step 2. Transfer the Trained Model
- Copy handwritten_digit_model.tflite from Windows to Pi 5, for example (adjust [PI_IP]):
 
scp C:\ai_labs\Lab01_HandwrittenDigits\saved_model\handwritten_digit_model.tflite pi@[PI_IP]:/ssd/ai_labs/Lab01_HandwrittenDigits/
Step 3. Install All Necessary Packages for System Python
- Update and install system Python packages:
sudo apt update
sudo apt install python3-picamera2 python3-opencv python3-numpy
- Install TensorFlow Lite Runtime (ignore the warning about --break-system-packages):
sudo pip3 install tflite-runtime --break-system-packages
- Verify the installation:
/usr/bin/python3 -c 'import cv2, numpy; from picamera2 import Picamera2; from tflite_runtime.interpreter import Interpreter; print("All packages OK!")'
You should see: All packages OK!
Step 4. (Optional) Check Hailo Packages/Drivers
- List Hailo packages (if using Hailo for acceleration):
 
ls /usr/lib/python3/dist-packages | grep hailo
ls /usr/lib/python3/dist-packages/hailo_platform/drivers
Step 5. Execute the Python Code for Camera Digit Recognition
- Use the Picamera2-based script, for example, pi_camera_single_digit_tflite_picamera2.py.
 - Run the script with system Python:
/usr/bin/python3 pi_camera_single_digit_tflite_picamera2.py
 - Hold a handwritten digit in front of the camera. The prediction will show in a window. Press q to exit.
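The pi_camera_single_digit_tflite_picamera2.py script ships with the lab. If you want to study or rewrite it, the overall capture-preprocess-infer loop looks roughly like the sketch below. All names are illustrative, and the hardware/runtime imports are deferred into main() so the pure preprocessing helper can be reused without a camera attached:

```python
import numpy as np

def to_model_input(gray28):
    """Normalize a 28x28 uint8 grayscale patch to the (1, 28, 28, 1)
    float32 tensor the converted TFLite model expects."""
    x = gray28.astype(np.float32) / 255.0
    return x.reshape(1, 28, 28, 1)

def main():
    # Deferred imports: only needed on the Pi with camera + tflite-runtime.
    from picamera2 import Picamera2
    from tflite_runtime.interpreter import Interpreter
    import cv2

    interpreter = Interpreter(model_path="handwritten_digit_model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    cam = Picamera2()
    cam.configure(cam.create_preview_configuration(main={"format": "RGB888"}))
    cam.start()
    while True:
        frame = cam.capture_array()
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        patch = cv2.resize(gray, (28, 28))
        # Invert so the digit is white-on-black, as in MNIST training data
        interpreter.set_tensor(inp["index"], to_model_input(255 - patch))
        interpreter.invoke()
        digit = int(np.argmax(interpreter.get_tensor(out["index"])))
        cv2.putText(frame, str(digit), (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
        cv2.imshow("digit", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cam.stop()
    cv2.destroyAllWindows()

# On the Pi, finish the script with a call to main()
```

This sketch runs everything on the Pi's CPU via tflite-runtime; offloading to the Hailo8 requires compiling the model with Hailo's toolchain and is outside the scope of this lab.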
 
Appendix: Useful Raspberry Pi Commands
- Check the installed Python version:
python3 --version
- Check installed packages:
pip3 list
- Update Raspberry Pi OS:
sudo apt update && sudo apt upgrade
- Check camera connection:
libcamera-hello (or rpicam-hello on Pi 5 with rpicam-apps)
Appendix A: Useful Docker Commands
| Command | Description | 
|---|---|
| docker --version | Check Docker version | 
| docker compose version | Check Docker Compose version | 
| docker compose up --build | Build image & run container | 
| docker compose up | Start container (after initial build) | 
| docker compose down | Stop and remove the container | 
| docker compose stop | Stop the container | 
| docker compose start | Restart a stopped container | 
| docker ps | List running containers | 
| docker ps -a | List all containers | 
| docker images | List all images | 
| docker rm <container_id> | Remove a container | 
| docker rmi <image_id> | Remove an image | 
| docker system prune | Remove all stopped containers and unused data | 
Note: run all Docker Compose commands from the docker folder.
Appendix B: Useful Conda Commands
| Command | Description | 
|---|---|
| conda --version | Check Conda version | 
| conda update conda | Update Conda | 
| conda create --prefix ./.conda/tf310 python=3.10 | Create environment (Python 3.10) | 
| conda create --prefix ./.conda/tf311 python=3.11 | Create environment (Python 3.11) | 
| conda activate ./.conda/tf311 | Activate environment | 
| conda deactivate | Deactivate environment | 
| conda install -c conda-forge <package> | Install package via conda-forge | 
| pip install <package> | Install a package using pip | 
| conda list | List installed packages | 
| conda remove <package> | Remove a package | 
| conda env list | List all environments | 
| conda env remove --prefix ./.conda/tf311 | Remove the virtual environment | 
Notes:
- Docker and Conda are assumed pre-installed.
 - Jupyter notebook: train_model.ipynb in notebooks/
 - Recognition scripts: camera_single_digit.py and camera_multiple_digits.py in src/
 - On the Pi, system Python packages (picamera2, OpenCV, NumPy) are installed with apt; tflite-runtime is installed with pip3.