Lab01: Handwritten Digit Recognition with AI

Example Project Path: ai_labs/Lab01_HandwrittenDigits

Introduction

In this lab, you will train a deep learning model to recognize handwritten digits using Jupyter Notebook (inside Docker) on Windows 11, perform real-time digit recognition on Windows, and deploy the trained model to a Raspberry Pi 5 with a Hailo8 AI HAT and camera.

Expected Outcomes
  • Understand the workflow for training and deploying an AI model.
  • Use Docker and Conda for Python development environments.
  • Run live digit recognition from webcam/camera on Windows and Raspberry Pi 5.
  • Experience hardware-accelerated AI inference with the Hailo8 module.
Hardware Requirements
  • Windows 11 PC (Intel/AMD)
  • Raspberry Pi 5 (with Hailo8 AI HAT)
  • Raspberry Pi camera module
  • Optional: SSD for Pi 5

Part I: Train the AI Model on Windows 11

Step 1. Download and Prepare the Lab Files

  1. Download and Extract Lab Files
    • Download the provided lab zip file and extract it to:
      C:\ai_labs\Lab01_HandwrittenDigits
  2. Ensure your folder structure is:
    
     Lab01_HandwrittenDigits/
     |-- docker/
     |   |-- Dockerfile
     |   `-- docker-compose.yml
     |-- notebooks/
     |   `-- train_model.ipynb
     |-- src/
     |   |-- camera_single_digit.py
     |   `-- camera_multiple_digits.py
     |-- saved_model/
     |   `-- (model files will be saved here)
     |-- data/
     |   `-- (optional: for data storage)
     `-- requirements.txt
    
    

Step 2. Create and Launch a Docker Container for Jupyter

  1. Open a terminal and change to the docker folder:
    cd C:\ai_labs\Lab01_HandwrittenDigits\docker
  2. Build and run the Docker container:
    docker compose up --build
  3. Open Jupyter Lab in your browser at:
    http://127.0.0.1:8888/lab

Step 3. Train and Export the Model

  1. In Jupyter Lab, open notebooks/train_model.ipynb.
  2. Follow the instructions to complete any missing code as directed by your instructor.
  3. Set epochs=16 in the training code. (More epochs give the model more passes over the training data, which generally improves accuracy, though too many epochs can lead to overfitting.) A minimal reference sketch of the training and export code appears after this list.
  4. Click Run All to execute all cells.
  5. After completion, check that handwritten_digit_model.h5 appears in the saved_model folder.
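
For reference, below is a minimal sketch of what the training notebook typically does. The network architecture and preprocessing shown here are assumptions for illustration only; the provided train_model.ipynb is the authoritative version.

    # Minimal training sketch (assumed architecture); the provided notebook takes precedence.
    import tensorflow as tf

    # Load the MNIST digits and scale pixel values to [0, 1]
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Simple fully connected classifier for 28x28 grayscale digits
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    model.fit(x_train, y_train, epochs=16, validation_data=(x_test, y_test))

    # Save in Keras HDF5 format so Part II can convert it to TensorFlow Lite
    model.save('../saved_model/handwritten_digit_model.h5')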

Part II: Convert Model and Test on Windows 11

Step 1. Prepare Conda Environment for Model Conversion

  1. Open Anaconda PowerShell Prompt.
  2. Change to the project directory:
    cd C:\ai_labs\Lab01_HandwrittenDigits
  3. Create and activate the environment:
    conda create --prefix .\.conda\envs\tf310 python=3.10
    conda activate .\.conda\envs\tf310
  4. Install required packages:
    conda install -c conda-forge tensorflow numpy=1.26 opencv
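  5. (Optional) Confirm the environment with a quick import check of the packages installed above:
    python -c "import tensorflow as tf, cv2, numpy; print(tf.__version__, cv2.__version__, numpy.__version__)"
    All three version numbers should print if the installation succeeded.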

Step 2. Convert the Keras Model (.h5) to TensorFlow Lite (.tflite)

  1. Change to the src folder:
    cd src
  2. Copy the trained model:
    copy ..\saved_model\handwritten_digit_model.h5 .
  3. Create a script named convert_to_tflite.py with the following content:
    
     # convert_to_tflite.py
     import tensorflow as tf

     # Load the trained Keras model copied into src/ in the previous step
     model = tf.keras.models.load_model('handwritten_digit_model.h5')

     # Convert the Keras model to TensorFlow Lite format
     converter = tf.lite.TFLiteConverter.from_keras_model(model)
     tflite_model = converter.convert()

     # Write the converted model next to the script (in src/)
     with open('handwritten_digit_model.tflite', 'wb') as f:
         f.write(tflite_model)
     print("Conversion complete: handwritten_digit_model.tflite")
    
  4. Run the conversion:
    python convert_to_tflite.py
  5. Confirm that handwritten_digit_model.tflite has been created in the src folder.
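
To sanity-check the converted model before moving on, you can load it with the TFLite interpreter that ships with TensorFlow and print its input and output shapes. This is an optional check, and the script name below is only a suggestion:

    # check_tflite.py -- optional sanity check of the converted model
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='handwritten_digit_model.tflite')
    interpreter.allocate_tensors()

    # Expect an input shaped like (1, 28, 28) and an output shaped like (1, 10)
    print("Input :", interpreter.get_input_details()[0]['shape'])
    print("Output:", interpreter.get_output_details()[0]['shape'])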

Step 3. Test Model Inference on Windows

  1. Run single-digit recognition:
    python camera_single_digit.py
    Hold a single digit up to your webcam. Press q to quit. (A minimal sketch of such a script appears after this list.)
  2. Run multiple-digit recognition:
    python camera_multiple_digits.py
    Show multiple digits. Press q to quit.
  3. Deactivate Conda:
    conda deactivate
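
For reference, here is a minimal sketch of what the single-digit script may look like. It assumes the converted .tflite model sits in the src folder and the default webcam is used; the provided camera_single_digit.py is the authoritative version and may differ in its preprocessing details.

    # Minimal single-digit webcam sketch (assumptions noted above).
    import cv2
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='handwritten_digit_model.tflite')
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Preprocess: grayscale, 28x28, inverted (MNIST digits are white on black), scaled to [0, 1]
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        digit = 255 - cv2.resize(gray, (28, 28))
        x = (digit / 255.0).astype(np.float32).reshape(inp['shape'])

        interpreter.set_tensor(inp['index'], x)
        interpreter.invoke()
        pred = int(np.argmax(interpreter.get_tensor(out['index'])))

        cv2.putText(frame, f"Prediction: {pred}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow('Digit recognition', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

The multiple-digit script typically adds a segmentation step (for example, thresholding followed by cv2.findContours) so that each detected digit region is classified separately.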

Part III: Deploy and Run on Raspberry Pi 5

Prerequisites
  • Docker and Conda installed on Pi 5
  • Hailo8 AI HAT and Pi camera attached
  • Run sudo apt install hailo-all
  • Hailo Python SDK/RT installed under /usr/lib/python3/dist-packages
  • Pi camera tested and functional

Step 1. Create Project Directory

  • In your terminal:
    
    mkdir -p /ssd/ai_labs/Lab01_HandwrittenDigits
    cd /ssd/ai_labs/Lab01_HandwrittenDigits
    

Step 2. Transfer the Trained Model

  • Copy handwritten_digit_model.tflite from Windows to Pi 5, for example (adjust [PI_IP]):
    scp C:\ai_labs\Lab01_HandwrittenDigits\src\handwritten_digit_model.tflite pi@[PI_IP]:/ssd/ai_labs/Lab01_HandwrittenDigits/

Step 3. Install All Necessary Packages for System Python

  1. Update and install system Python packages:
    sudo apt update
    sudo apt install python3-picamera2 python3-opencv python3-numpy
  2. Install TensorFlow Lite Runtime (the --break-system-packages flag allows pip to install into the system Python; the related warning can be ignored):
    sudo pip3 install tflite-runtime --break-system-packages
  3. Verify the installation:
    /usr/bin/python3 -c 'import cv2, numpy; from picamera2 import Picamera2; from tflite_runtime.interpreter import Interpreter; print("All packages OK!")'
        
    You should see: All packages OK!

Step 4. (Optional) Check Hailo Packages/Drivers

  • List Hailo packages (if using Hailo for acceleration):
    ls /usr/lib/python3/dist-packages | grep hailo
    ls /usr/lib/python3/dist-packages/hailo_platform/drivers

Step 5. Execute the Python Code for Camera Digit Recognition

  1. Use the Picamera2-based script, for example, pi_camera_single_digit_tflite_picamera2.py (a minimal reference sketch appears after this list).
  2. Run the script with system Python: /usr/bin/python3 pi_camera_single_digit_tflite_picamera2.py
  3. Hold a handwritten digit in front of the camera. The prediction will show in a window. Press q to exit.
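
On the Pi, the script follows the same pattern as the Windows sketch in Part II, with two differences: frames come from Picamera2 instead of cv2.VideoCapture, and the interpreter comes from tflite_runtime instead of full TensorFlow. Below is a minimal sketch assuming the .tflite model sits next to the script; the provided script is the authoritative version.

    # Minimal Pi-side sketch (the provided pi_camera_single_digit_tflite_picamera2.py takes precedence).
    import cv2
    import numpy as np
    from picamera2 import Picamera2
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path='handwritten_digit_model.tflite')
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Configure the Pi camera to deliver 640x480 RGB frames
    picam2 = Picamera2()
    picam2.configure(picam2.create_preview_configuration(main={"format": "RGB888", "size": (640, 480)}))
    picam2.start()

    while True:
        frame = picam2.capture_array()

        # Same preprocessing as the Windows sketch: grayscale, 28x28, inverted, scaled to [0, 1]
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        x = ((255 - cv2.resize(gray, (28, 28))) / 255.0).astype(np.float32).reshape(inp['shape'])

        interpreter.set_tensor(inp['index'], x)
        interpreter.invoke()
        pred = int(np.argmax(interpreter.get_tensor(out['index'])))

        cv2.putText(frame, f"Prediction: {pred}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow('Digit recognition', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    picam2.stop()
    cv2.destroyAllWindows()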

Appendix: Useful Commands

    • Check installed Python version: python3 --version
    • Check installed packages: pip3 list
    • Update Raspberry Pi OS: sudo apt update && sudo apt upgrade
    • Check camera connection: libcamera-hello (or rpicam-hello on Pi 5 with rpicam-apps)

Appendix A: Useful Docker Commands

Command                       Description
docker --version              Check Docker version
docker compose version        Check Docker Compose version
docker compose up --build     Build image & run container
docker compose up             Start container (after initial build)
docker compose down           Stop and remove container
docker compose stop           Stop the container
docker compose start          Restart a stopped container
docker ps                     List running containers
docker ps -a                  List all containers
docker images                 List all images
docker rm <container_id>      Remove a container
docker rmi <image_id>         Remove an image
docker system prune           Remove all stopped containers and unused data

Note: Run the Docker Compose commands from the docker folder.

Appendix B: Useful Conda Commands

Command                                            Description
conda --version                                    Check Conda version
conda update conda                                 Update Conda
conda create --prefix ./.conda/tf310 python=3.10   Create environment (Python 3.10)
conda create --prefix ./.conda/tf311 python=3.11   Create environment (Python 3.11)
conda activate ./.conda/tf311                      Activate environment
conda deactivate                                   Deactivate environment
conda install -c conda-forge <package>             Install package via conda-forge
pip install <package>                              Install a package using pip
conda list                                         List installed packages
conda remove <package>                             Remove a package
conda env list                                     List all environments
conda env remove --prefix ./.conda/tf311           Remove virtual environment

Notes:

  • Docker and Conda are assumed pre-installed.
  • Jupyter notebook: train_model.ipynb in notebooks/
  • Recognition scripts: camera_single_digit.py and camera_multiple_digits.py in src/
  • On the Pi, the main packages (picamera2, OpenCV, NumPy) are installed with apt and tflite-runtime with pip, as described in Part III.
© 2025 Air Supply Information Center (Air Supply BBS)