with Docker on Allxon

Allxon provides essential remote device management solutions to simplify and optimize edge AI device operations. As an AI/IoT ecosystem enabler, Allxon connects hardware (IHV), software (ISV), and service providers (SI/MSP), serving as a catalyst for fast, seamless connectivity across all systems.

Allxon Over-the-Air (OTA) deployment works perfectly with Edge Impulse OTA model updates on NVIDIA Jetson devices. This tutorial guides you through the steps to deploy a new impulse on multiple devices.

Introduction

This guide demonstrates how to deploy and manage Edge Impulse models on NVIDIA Jetson devices using Allxon's Over-the-Air (OTA) deployment capabilities.

Prerequisites

Before you begin, ensure you have the following:

  1. Your updated impulse, exported as a Docker container from Edge Impulse.
  2. An Allxon officially supported device (https://www.allxon.com/).
  3. An Allxon account.

Getting Started with Allxon

Allxon's services are compatible with a variety of hardware models. Follow these steps to complete the required preparations.

Add a Device to Allxon Portal

  1. Install Allxon Agent: Use the command prompt to install the Allxon Agent on your device.
  2. Pair Your Device: Follow the instructions to add your device to Allxon Portal.

Once added, your devices will appear in the Allxon Portal for management and monitoring.

Allxon OTA Deployment

To perform an OTA deployment, ensure you have your updated Impulse deployed as a Docker container from Edge Impulse.

Steps to Deploy

  1. Generate OTA Artifact: Use the Allxon CLI to generate the OTA artifact.
  2. Deploy OTA Artifact: Follow the Deploy OTA Artifact guide to complete the deployment.
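The Allxon CLI invocation for generating the artifact is covered in Allxon's own guide and is not reproduced here. As a hedged sketch of the step before that, the helper below collects the deployment scripts into one staging directory; stage_artifact is an illustrative name, not part of the Allxon CLI.

```shell
#!/bin/bash
# Hypothetical staging helper: gather the deployment scripts into one
# directory before handing them to the Allxon CLI for artifact generation.
# stage_artifact is an illustrative name, not part of Allxon's tooling.

stage_artifact() {
  local dest="$1"; shift        # destination directory
  mkdir -p "$dest"
  cp "$@" "$dest"/              # copy each script into the staging dir
  chmod +x "$dest"/*.sh         # artifact scripts must be executable
}
```

For example, stage_artifact ./artifact ota_deploy.sh install.sh would place executable copies of both scripts in ./artifact.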

Example Scripts

Below are example scripts to help you set up the OTA deployment.

ota_deploy.sh

#!/bin/bash
set -e

mkdir -p /opt/allxon/tmp/core/appies_ota/model_logs/
./install.sh > /opt/allxon/tmp/core/appies_ota/model_logs/log.txt 2>&1

echo "Model deployment has started. Please check /opt/allxon/tmp/core/appies_ota/model_logs/log.txt for progress."

install.sh

#!/bin/bash

docker run --rm --privileged --runtime nvidia \
  -v /dev/bus/usb/001/002:/dev/video0 \
  -p 80:80 \
  public.ecr.aws/z9b3d4t5/inferencecontainer:73d6ea64bf931f338de5183438915dc390120d5d \
  --api-key <replace with your project API key, e.g. ei_07XXX> \
  --run-http-server 1337 &

Two minor modifications have been made to the Docker command from Edge Impulse:

  • The -it option has been removed from the Docker command to avoid an error related to the lack of standard input during deployment.
  • An & has been added to the end of the Docker command to send the process to the background.
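A further optional refinement is to pass the API key in rather than hardcoding it in install.sh. The sketch below wraps the Docker command in a function that first validates the key; check_api_key, run_model, and EI_API_KEY are illustrative names, and only the docker run line comes from install.sh.

```shell
#!/bin/bash
# Minimal sketch, assuming Edge Impulse project API keys carry the ei_
# prefix shown in the tutorial. check_api_key, run_model, and EI_API_KEY
# are hypothetical names, not part of the Allxon or Edge Impulse tooling.

check_api_key() {
  # reject anything that does not start with ei_
  case "$1" in
    ei_*) return 0 ;;
    *)    return 1 ;;
  esac
}

run_model() {
  local api_key="$1"
  check_api_key "$api_key" || { echo "malformed API key" >&2; return 1; }
  docker run --rm --privileged --runtime nvidia \
    -v /dev/bus/usb/001/002:/dev/video0 \
    -p 80:80 \
    public.ecr.aws/z9b3d4t5/inferencecontainer:73d6ea64bf931f338de5183438915dc390120d5d \
    --api-key "$api_key" \
    --run-http-server 1337 &
}

# Usage: export EI_API_KEY=ei_07XXX...; run_model "$EI_API_KEY"
```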

Conclusion

By following these steps, you can efficiently deploy and manage Edge Impulse models on NVIDIA Jetson devices using Docker through Allxon. This setup leverages Allxon's remote management capabilities to streamline the process of updating and maintaining your edge AI devices.

 


Source: EDGE IMPULSE. (2024). Tutorials - OTA Model Updates - with Docker on Allxon. [online]