onnxruntime: A high performance ML inferencing and training accelerator 1)

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.

... part of T2, get it here

URL: https://github.com/microsoft/onnxruntime

Author: Microsoft Corporation <onnxruntime [at] microsoft [dot] com>
Maintainer: The T2 Project <t2 [at] t2-project [dot] org>

License: MIT
Status: Stable
Version: 1.20.1

Download: https://github.com/microsoft/onnxruntime 5c1b7cc onnxruntime-git-1.20.1.tar.gz

T2 source: onnxruntime.cache
T2 source: onnxruntime.desc

Build time (on reference hardware): 7950% (relative to binutils) 2)

Installed size (on reference hardware): 0.01 MB, 5 files

Dependencies (build time detected): 00-dirtree bash binutils boost cmake coreutils cython diffutils eigen gawk grep gtest gzip libiconv linux-header ninja numpy openssl protobuf pybind python python-gpep517 sed setuptools sympy tar tbb zlib

Installed files (on reference hardware): n.a.

1) This page was automatically generated from the T2 package source. Corrections, such as dead links, URL changes, or typos, need to be made directly to that source.

2) Compatible with Linux From Scratch's "Standard Build Unit" (SBU).