Getting Started with Jetson AGX Orin (Post-SDK Update) + TensorFlow Installation

by Sylvain Artois on Mar 26, 2025

Sunset, 1930 - Paul Klee - www.nga.gov

A complete guide to setting up a clean NVIDIA Jetson AGX Orin development environment using JetPack 6.2, including TensorFlow GPU support, SSD optimization, and Python/Conda tooling.

Why It’s Challenging

Setting up a Jetson AGX Orin for machine learning workflows can be challenging for two main reasons:

  1. Architecture shift: Jetson devices are powered by NVIDIA’s ARM-based Tegra SoC, not the usual x86_64 architecture. This means many packages aren’t directly compatible or require special builds.
  2. Deep learning stack complexity: Python ML libraries (like TensorFlow or PyTorch) depend on low-level binaries (CUDA, cuDNN, NCCL…) that are tightly coupled to system versions and hardware drivers.

That said, the Jetson platform is incredibly powerful for edge AI once properly configured.

If you’re targeting production, NVIDIA’s Docker-based approach is highly recommended. Check out jetson-containers by dusty-nv, which provides prebuilt containers for TensorFlow, PyTorch, and many other ML tools — optimized and maintained by NVIDIA.
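For reference, the repo's documented workflow boils down to a few commands. This is a sketch based on the project's README; the exact container tag (l4t-tensorflow here) may differ depending on your JetPack release, so check the repo's package list:

git clone https://github.com/dusty-nv/jetson-containers
bash jetson-containers/install.sh
jetson-containers run $(autotag l4t-tensorflow)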

The rest of this article focuses on a bare-metal install using Python and Conda, for those who want to better understand or customize their environment.

Step 1 — Flash JetPack 6.2

Use the NVIDIA SDK Manager to flash JetPack 6.2 to your Jetson AGX Orin. This installs Ubuntu 22.04 + the full L4T stack (Linux for Tegra: drivers, CUDA, cuDNN, etc.).

To verify installation:

nvcc --version  # Should show CUDA 12.2+

If the command fails (nvcc not found), install the nvidia-jetpack meta-package:

sudo apt-get install nvidia-jetpack
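To double-check which L4T and JetPack versions are actually installed, two read-only commands help:

cat /etc/nv_tegra_release      # L4T release string (R36.x for JetPack 6)
dpkg -l | grep nvidia-jetpack  # installed JetPack meta-package version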

Step 2 — Move Your Home and Docker Data to SSD

The AGX Orin exposes NVMe over PCIe Gen4, so use it: moving your home directory and Docker data to the SSD noticeably improves I/O performance.

  1. Mount your SSD (e.g. /mnt/nvme)
  2. Move and mount your home (see the example fstab entries after this list):
sudo rsync -aP /home/ /mnt/nvme/home/
sudo nano /etc/fstab  # mount (or bind-mount) /mnt/nvme/home as /home
  3. Move Docker’s data dir:
sudo systemctl stop docker
sudo mv /var/lib/docker /mnt/nvme/docker
sudo ln -s /mnt/nvme/docker /var/lib/docker
sudo systemctl start docker
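For reference, here is what the fstab entries might look like, assuming the SSD partition is /dev/nvme0n1p1 formatted as ext4 (adjust the device and filesystem to match yours):

/dev/nvme0n1p1  /mnt/nvme  ext4  defaults  0  2
/mnt/nvme/home  /home      none  bind      0  0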

Step 3 — (Opinionated) Shell & Python Setup

These are personal choices that improve day-to-day developer experience. Feel free to skip or adapt.

Install ZSH + Oh My Zsh

sudo apt install zsh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
chsh -s $(which zsh)

Update .zshrc with Jetson-specific paths (LD_LIBRARY_PATH, etc.).
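For instance, with the default JetPack layout (CUDA installed under /usr/local/cuda), the following additions cover the common cases:

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH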

Install Miniconda (aarch64)

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
bash Miniconda3-latest-Linux-aarch64.sh  # accept the prompt to run conda init for your shell
source ~/.zshrc  # or ~/.bashrc
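A quick way to confirm the install picked up the ARM build:

conda --version
python -c "import platform; print(platform.machine())"  # should print aarch64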

Step 4 — Set Up Remote Access

Install and enable SSH:

sudo apt install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh

Consider disabling password authentication and using SSH keys.
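For example, copy your key over first, then turn off password logins (jetson.local below is a placeholder hostname; use your own):

ssh-copy-id user@jetson.local   # run on your workstation, not the Jetson
sudo nano /etc/ssh/sshd_config  # set: PasswordAuthentication no
sudo systemctl restart ssh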

Step 5 — Install jtop (Jetson Monitor)

sudo apt install python3-pip
sudo pip3 install -U jetson-stats
jtop

This shows temperature, CPU/GPU usage, power draw, and more.
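jetson-stats also ships a Python API, which is handy for logging metrics from your own scripts. A minimal sketch:

# jtop_stats.py - print one snapshot of Jetson metrics
from jtop import jtop  # provided by the jetson-stats package

with jtop() as jetson:
    if jetson.ok():
        print(jetson.stats)  # dict with temps, CPU/GPU load, power, etc.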

Step 6 — Install TensorFlow (GPU)

As of March 2025, JetPack 6.2 does not provide an official TensorFlow build. You must use packages from JetPack 6.1 (v61) instead:

conda create -n tensorflow python=3.10
conda activate tensorflow

pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v61 \
    tensorflow==2.16.1+nv24.08
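Once the install completes, a one-liner confirms the wheel resolved correctly:

python -c "import tensorflow as tf; print(tf.__version__)"  # expect 2.16.1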

Why not the official method?

The index NVIDIA’s TensorFlow-for-Jetson docs point to (jp/v512, which actually corresponds to JetPack 5.1.2) has no matching TensorFlow distribution:

ERROR: No matching distribution found for tensorflow==2.12.0+nv23.06

So until NVIDIA updates the index, v61 is the only viable route.

Step 7 — Validate the Setup

Create a quick TensorFlow script to confirm GPU support:

# test_tf.py
import tensorflow as tf

if tf.config.list_physical_devices('GPU'):
    print("✅ Found GPU:", tf.test.gpu_device_name())
else:
    print("❌ No GPU found")

Run with:

python test_tf.py

Expected output:

✅ Found GPU: /device:GPU:0

To silence NUMA warnings:

python -u test_tf.py 2>&1 | grep -v "NUMA"
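For a deeper check that ops actually execute on the GPU, a small matrix multiplication on the device works well. This sketch only uses standard TensorFlow APIs:

# gpu_matmul_check.py - verify compute runs on the GPU, not just detection
import tensorflow as tf

with tf.device('/GPU:0'):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)

print("matmul OK on GPU, result shape:", c.shape)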

Conclusion

With this setup:

  • You benefit from the full Jetson hardware acceleration stack (CUDA, cuDNN)
  • Your Python environment is isolated and reproducible via Conda
  • You’re ready to build, train, or deploy TensorFlow models on the edge

If you want more flexibility and reproducibility, check out NVIDIA’s jetson-containers. Otherwise, this guide should get you started quickly and cleanly — especially if you want full control over your environment.
