Support Notes
The support information included in the product distribution is a snapshot. For the latest information,
see the PDF version on the TotalView documentation web site.
X Windows: X Windows is required on all platforms to run the TotalView and MemoryScape GUIs. Systems used for remote debugging, i.e. those running only the TotalView Server, do not need X Windows installed.
OpenMP: Most languages now support OpenMP. If your language supports it, and your OpenMP code compiles successfully with one of our supported compilers, then your OpenMP code is considered supported by TotalView.
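For reference, the sketch below shows the kind of OpenMP code this statement covers: a minimal C program with a single parallel loop. The file name, build flags, and launch command in the comments are illustrative assumptions, not prescriptions; use the OpenMP flag documented for your compiler.

```c
/* omp_demo.c - a minimal OpenMP program (the file name and the build/launch
 * commands below are illustrative assumptions).
 *
 * Build sketch with a supported compiler, for example:
 *   gcc -g -O0 -fopenmp omp_demo.c -o omp_demo
 * It can then be debugged in the usual way, for example:
 *   totalview ./omp_demo
 */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    long sum = 0;
    int i;

    /* The reduction clause keeps the shared accumulation race-free
     * across the OpenMP thread team. */
    #pragma omp parallel for reduction(+ : sum)
    for (i = 0; i < 1000; i++)
        sum += i;

    printf("max threads: %d, sum: %ld\n", omp_get_max_threads(), sum);
    return 0;
}
```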
OMPD: A compiler that supports either OpenMP 5.0 or 5.2 is required. For more information on OpenMP compilers, see
OpenMP Compilers and Tools on the OpenMP website. TotalView has been tested against the following OMPD compilers:
LLVM Clang 17.0.6
LLVM Flang 17.0.1
AMD Clang and AMD Clang ROCm 6.0 (AMD Clang version 17.0.0)
HPE CC/C++ and Fortran 17.0.0
CUDA debugging: Operating system support: Linux x86-64, Linux PowerLE/OpenPOWER, and Linux-arm64, including NVIDIA Grace/Hopper systems. Current support is for the CUDA 9.2 and 10 - 12 toolchains.
NVIDIA GPU support: Tesla, Fermi, Kepler, Pascal, Volta, Turing, Ampere, and Hopper
Notes:
1) There is limited support for the Dynamic Parallelism feature.
2) On the NVIDIA Jetson Xavier Developer Kit, you must debug applications as root.
For more information, please see
“Using the CUDA Debugger” in the Classic TotalView User Guide.
AMD ROCm debugging: Operating system support: Linux x86-64. Current support is for ROCm 5.4 - 6.1.
AMD ROCm GPU support: MI50, MI100, MI200, and MI300 series GPUs
Notes:
1) TotalView 2024.1 (or later) is required for AMD MI300 GPU devices, which require ROCm 6.0 or later.
2) TotalView's support for AMD ROCm GPUs depends on preliminary releases of the ROCm development kit. As new ROCm releases become available, TotalView will incorporate updates and continue to add new ROCm debugging capabilities in future releases.
TotalView Remote Client: The Remote Client supports debugging with TotalView on a Windows, macOS, or Linux x86 front-end system while connected to a remote back-end system. For macOS and Linux, supported front-end systems are the same as the full version of TotalView. For Windows, Windows 10 and 11 are supported.
For all systems, the front-end and back-end versions must be the same. For example, for the 2024.1 version of the TotalView Remote Client, the back-end debugger must also be version 2024.1.
ReplayEngine for reverse debugging: Supported on Linux x86-64 operating systems. On other platforms, ReplayEngine buttons and menu selections are grayed out in the UI. For more information, see
“Reverse Debugging with ReplayEngine”.
ReplayEngine supports the IP transport mechanism on most MPI systems. On some systems it also supports communication over InfiniBand using either the IBverbs or the QLogic PSM transport layers. Please see the section
“Using ReplayEngine with Infiniband MPIs” in the
Classic TotalView User Guide for details.
LiveRecorder: Debugging recording files generated by LiveRecorder versions up to 7.2 is supported on Linux x86-64 operating systems.
Python debugging: Python 2.7 and 3.5 - 3.11 debugging is supported on Linux x86-64 operating systems. For more information, please see
“Debugging Python” in the new UI’s TotalView User Guide.
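TotalView's Python debugging commonly involves mixed-language applications in which Python calls into compiled C or C++ code. As a purely illustrative sketch, the C source below is a minimal CPython extension module that could serve as the compiled side of such an application; the module name, function name, and build command are assumptions, not part of the documented feature.

```c
/* hello_ext.c - a minimal CPython extension module (module and function
 * names are hypothetical; shown only to illustrate the compiled side of a
 * mixed Python/C application).
 *
 * Build sketch, assuming the Python 3 development headers are installed:
 *   gcc -g -O0 -shared -fPIC $(python3-config --includes) hello_ext.c -o hello_ext.so
 */
#define PY_SSIZE_T_CLEAN
#include <Python.h>

/* A C function reachable from Python, a natural place for a breakpoint
 * when looking at the C side of a mixed Python/C call stack. */
static PyObject *hello_square(PyObject *self, PyObject *args)
{
    long n;
    if (!PyArg_ParseTuple(args, "l", &n))
        return NULL;
    return PyLong_FromLong(n * n);
}

static PyMethodDef HelloMethods[] = {
    {"square", hello_square, METH_VARARGS, "Return n squared."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef hellomodule = {
    PyModuleDef_HEAD_INIT, "hello_ext", NULL, -1, HelloMethods
};

PyMODINIT_FUNC PyInit_hello_ext(void)
{
    return PyModule_Create(&hellomodule);
}
```

From Python, `import hello_ext` followed by `hello_ext.square(7)` exercises the compiled code.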
C++ STL type transformations:
RZVernal and Tioga systems:
Platform Support
Platforms | Operating Systems | Compilers | MPI Products
---|---|---|---
Linux x86-64 | Red Hat Enterprise 7.9 and 8; CentOS 7.9, 8 (Stream), and 9; Red Hat Fedora 36, 37, and 38; Ubuntu 18.04, 20.04, and 22.04; SuSE Linux Enterprise Server 12 and 15; Rocky Linux 8 | Intel oneAPI 2021 - 2023; Intel 18 - 19; GNU (gcc, g++, gfortran) 4.3 - 13; PGI Workstation 11.2 - 18.10; Oracle Studio 12; NVIDIA OpenACC; Clang 3.5 - 16; AMD Clang 5 | Argonne MPICH; Argonne MPICH2; GNU SLURM; HPE MPI 2; HPE MPT; Intel MPI; Intel oneAPI; Open MPI; OSU MVAPICH; OSU MVAPICH2; Bullx MPI; IBM Platform MPI; Berkeley UPC (32-bit only)
Apple (Intel) and Apple (ARM64) | macOS Ventura (13); macOS Sonoma (14) | Intel oneAPI 2021 - 2023; Intel 18 - 19; GNU (gcc, g++, gfortran) 4.3 - 13; Apple Clang 9 - 13 | Argonne MPICH; Argonne MPICH2; Intel oneAPI; Open MPI
Cray XT / XE / XK / XC | Cray Linux Environment (CLE) | PGI, GNU (gcc, g++, gfortran), and CCE | HPE Cray MPI
Cray EX (Shasta) | HPE Cray OS (SLES) | PGI, GNU (gcc, g++, gfortran), and CCE | HPE Cray MPI
Linux PowerLE / OpenPOWER | Ubuntu 18.04; Red Hat Enterprise Linux 7.5 | GNU (gcc, g++, gfortran) 4.3 - 13; NVIDIA OpenACC | Open MPI
Linux-arm64 | Ubuntu 18.04, 20.04, and 22.04; Red Hat Enterprise 7.9 and 8; CentOS 7.9, 8 (Stream), and 9 | GNU (gcc, g++, gfortran) 4.3 - 13; Arm Compiler 22; NVIDIA OpenACC; Clang 3 - 7 | Open MPI
IBM RS6000 Power AIX | AIX 7.1, 7.2, and 7.3 | GNU (gcc, g++, gfortran) 10, 11; IBM XLC 12.1, 13.1, 16.1; IBM Open XL 17.1; IBM XL Fortran 12.1, 13.1, 16.1 | Argonne MPICH; Argonne MPICH2; Open MPI; PE POE
Oracle SPARC Solaris | Solaris 11 | GNU (gcc, g++, gfortran) 4.3 - 13; Oracle Studio 12 | Argonne MPICH; Argonne MPICH2; Open MPI; Sun Cluster Tools
Note 1: The Classic TotalView UI requires X11. For important notes on installing TotalView on macOS, please see the section
“Mac OS Installations” in the
TotalView Installation Guide.
Note 2: Cray's OpenMP Accelerator Directives and Cray's OpenACC Directives are supported on the XK6 platform.
For details, see the section “Directive-Based Accelerator Programming Languages” in the Classic TotalView User Guide. ReplayEngine supports debugging MPI-based programs using Cray MPI over the Gemini Interconnect found on Cray XE (x86_64 only) supercomputers.
Note 3: For details on installing and using TotalView on Cray EX (Shasta) systems, see "Running TotalView on a Cray EX (Shasta) system" in the Known Issues section of the TotalView release notes, available at
https://help.totalview.io.
Note 4: Classic TotalView UI only.
Note 5: The TotalView Message Queue Display (MQD) feature with applications using IBM MPI Parallel Environment (PE) requires the threaded version of the MPI library.
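As a point of reference for the MPI products listed in the Platform Support table above, the sketch below is a minimal MPI program in C. The file name, build command, and launch line are illustrative assumptions; the exact way to start TotalView on an MPI job varies by MPI product and is described in the TotalView User Guide.

```c
/* mpi_hello.c - a minimal MPI program (file name and the commands below
 * are illustrative assumptions).
 *
 * Build sketch, assuming an MPI compiler wrapper such as mpicc:
 *   mpicc -g -O0 mpi_hello.c -o mpi_hello
 * One commonly documented way to debug it with Classic TotalView is to
 * start the debugger on the launcher, for example:
 *   totalview mpirun -a -np 4 ./mpi_hello
 * but check the TotalView User Guide for the launch syntax appropriate
 * to your MPI.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("rank %d of %d checking in\n", rank, size);

    MPI_Finalize();
    return 0;
}
```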