
onnxruntime_go: Cross-Platform ONNX Runtime Wrapper for Go with Encrypted Model Support


Overview

This library provides a Go interface for loading and executing ONNX neural networks using Microsoft's onnxruntime library. It extends the original yalue/onnxruntime_go with:

  • Encrypted Model Support - AES-256-GCM encryption for model protection
  • Authorization Integration - Integration with machineid/cert for enterprise licensing
  • Machine Binding - Derive encryption keys from machine ID for hardware-locked models

Features

  • Cross-platform support (Windows, Linux, macOS)
  • Multiple execution providers (CUDA, TensorRT, CoreML, DirectML, OpenVINO)
  • Generic tensor support with Go generics
  • Dynamic and static session modes
  • Model encryption and decryption
  • Authorization-based model access control

Installation

go get cnb.cool/svn/onnxruntime

You'll also need the ONNX Runtime shared library for your platform. Download from onnxruntime releases.

Quick Start

Basic Inference

package main

import (
    "fmt"

    ort "cnb.cool/svn/onnxruntime"
)

func main() {
    // Set library path
    ort.SetSharedLibraryPath("/path/to/libonnxruntime.so")

    // Initialize environment
    if err := ort.InitializeEnvironment(); err != nil {
        panic(err)
    }
    defer ort.DestroyEnvironment()

    // Create input tensor
    inputShape := ort.NewShape(1, 3, 224, 224)
    inputData := make([]float32, inputShape.FlattenedSize())
    inputTensor, _ := ort.NewTensor(inputShape, inputData)
    defer inputTensor.Destroy()

    // Create output tensor
    outputShape := ort.NewShape(1, 1000)
    outputTensor, _ := ort.NewEmptyTensor[float32](outputShape)
    defer outputTensor.Destroy()

    // Create session and run
    session, _ := ort.NewAdvancedSession("model.onnx",
        []string{"input"}, []string{"output"},
        []ort.Value{inputTensor}, []ort.Value{outputTensor}, nil)
    defer session.Destroy()

    session.Run()

    result := outputTensor.GetData()
    fmt.Println("Result:", result[:10])
}

Encrypted Model Inference

package main

import (
    ort "cnb.cool/svn/onnxruntime"
)

func main() {
    ort.SetSharedLibraryPath("/path/to/libonnxruntime.so")
    ort.InitializeEnvironment()
    defer ort.DestroyEnvironment()

    // Encryption key (must be exactly 32 bytes for AES-256)
    key := []byte("your-32-byte-secret-key-here!!!!")

    // Encrypt model (one-time operation)
    ort.EncryptModel("model.onnx", "model.onnx.enc", key)

    // Get model info from encrypted file
    inputs, outputs, _ := ort.GetInputOutputInfoFromEncryptedFile("model.onnx.enc", key)

    // Create session from encrypted model
    session, _ := ort.NewDynamicAdvancedSessionFromEncryptedFile(
        "model.onnx.enc", key,
        []string{inputs[0].Name}, []string{outputs[0].Name}, nil)
    defer session.Destroy()

    // Run inference...
}

Machine-Bound Encrypted Model

package main

import (
    "github.com/darkit/machineid"

    ort "cnb.cool/svn/onnxruntime"
)

func main() {
    ort.SetSharedLibraryPath("/path/to/libonnxruntime.so")
    ort.InitializeEnvironment()
    defer ort.DestroyEnvironment()

    // Get machine ID
    machineID, _ := machineid.ID()

    moduleName := "ai.model.classifier"
    salt := []byte("app-specific-salt")

    // Encrypt model for this machine (one-time operation)
    ort.EncryptModelForMachine("model.onnx", "model.onnx.enc", machineID, moduleName, salt)

    // Derive the same key and create a session
    key := ort.DeriveModelKey(machineID, moduleName, salt)

    // Input/output names can be read from the encrypted file
    inputs, outputs, _ := ort.GetInputOutputInfoFromEncryptedFile("model.onnx.enc", key)

    session, _ := ort.NewDynamicAdvancedSessionFromEncryptedFile(
        "model.onnx.enc", key,
        []string{inputs[0].Name}, []string{outputs[0].Name}, nil)
    defer session.Destroy()
}

API Reference

Environment Management

SetSharedLibraryPath

func SetSharedLibraryPath(path string)

Sets the path to the ONNX Runtime shared library. Must be called before InitializeEnvironment().
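
The shared library file name differs per platform. A minimal sketch for picking it at startup; the file locations are placeholders, not part of this API, and the snippet assumes the standard "runtime" package plus the ort alias used throughout this README:

// Pick a platform-appropriate library path before initialization.
// The paths below are placeholders; point them at your actual install.
libPath := "./libonnxruntime.so" // Linux
switch runtime.GOOS {
case "windows":
    libPath = "./onnxruntime.dll"
case "darwin":
    libPath = "./libonnxruntime.dylib"
}
ort.SetSharedLibraryPath(libPath)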

InitializeEnvironment

func InitializeEnvironment(opts ...EnvironmentOption) error

Initializes the ONNX Runtime environment. Must be called before creating any sessions.

Options:

  • WithLogLevelVerbose() - Enable verbose logging
  • WithLogLevelInfo() - Enable info logging
  • WithLogLevelWarning() - Enable warning logging (default)
  • WithLogLevelError() - Enable error logging only
  • WithLogLevelFatal() - Enable fatal logging only
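
For example, to initialize with error-only logging (a minimal sketch using the options above):

if err := ort.InitializeEnvironment(ort.WithLogLevelError()); err != nil {
    panic(err)
}
defer ort.DestroyEnvironment()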

DestroyEnvironment

func DestroyEnvironment() error

Cleans up the ONNX Runtime environment. Should be called when done using the library.

IsInitialized

func IsInitialized() bool

Returns true if the environment has been initialized.

GetVersion

func GetVersion() string

Returns the ONNX Runtime version string.

DisableTelemetry / EnableTelemetry

func DisableTelemetry() error
func EnableTelemetry() error

Controls ONNX Runtime telemetry collection.

SetEnvironmentLogLevel

func SetEnvironmentLogLevel(level LoggingLevel) error

Sets the logging level for the environment.


Shape

NewShape

func NewShape(dimensions ...int64) Shape

Creates a new tensor shape from the given dimensions.

Shape Methods

func (s Shape) FlattenedSize() int64    // Total number of elements
func (s Shape) Validate() error         // Validates dimensions are positive
func (s Shape) Clone() Shape            // Creates a copy
func (s Shape) String() string          // String representation
func (s Shape) Equals(other Shape) bool // Compares two shapes
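
A short usage sketch:

shape := ort.NewShape(1, 3, 224, 224)
fmt.Println(shape.FlattenedSize()) // 150528 (1*3*224*224)
if err := shape.Validate(); err != nil {
    panic(err)
}
clone := shape.Clone()
fmt.Println(shape.Equals(clone)) // true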

Tensor Types

Tensor[T]

Generic tensor for numeric data types.

func NewTensor[T TensorData](s Shape, data []T) (*Tensor[T], error)
func NewEmptyTensor[T TensorData](s Shape) (*Tensor[T], error)

Supported types (TensorData):

  • float32, float64
  • int8, int16, int32, int64
  • uint8, uint16, uint32, uint64

Methods:

func (t *Tensor[T]) GetData() []T               // Get underlying data slice
func (t *Tensor[T]) GetShape() Shape            // Get tensor shape
func (t *Tensor[T]) Clone() (*Tensor[T], error) // Create a copy
func (t *Tensor[T]) ZeroContents()              // Zero all elements
func (t *Tensor[T]) Destroy() error             // Release resources
func (t *Tensor[T]) GetONNXType() ONNXType      // Returns ONNXTypeTensor
func (t *Tensor[T]) DataType() ONNXTensorElementDataType

Scalar[T]

Single-value tensor (0-dimensional).

func NewScalar[T TensorData](data T) (*Scalar[T], error)
func NewEmptyScalar[T TensorData]() (*Scalar[T], error)

Methods:

func (s *Scalar[T]) GetData() T  // Get the scalar value
func (s *Scalar[T]) Set(value T) // Set the scalar value
func (s *Scalar[T]) Destroy() error
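
For example, a float32 scalar holding a confidence threshold (a minimal sketch):

threshold, err := ort.NewScalar[float32](0.5)
if err != nil {
    panic(err)
}
defer threshold.Destroy()

threshold.Set(0.75)
fmt.Println(threshold.GetData()) // 0.75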

StringTensor

Tensor containing string data.

func NewStringTensor(shape Shape) (*StringTensor, error)

Methods:

func (t *StringTensor) SetContents(contents []string) error
func (t *StringTensor) GetContents() ([]string, error)
func (t *StringTensor) SetElement(index int64, s string) error
func (t *StringTensor) GetElement(index int64) (string, error)
func (t *StringTensor) Destroy() error
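
A minimal sketch that fills a 1-D string tensor with class labels:

labels, err := ort.NewStringTensor(ort.NewShape(3))
if err != nil {
    panic(err)
}
defer labels.Destroy()

labels.SetContents([]string{"cat", "dog", "bird"})
first, _ := labels.GetElement(0)
fmt.Println(first) // "cat"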

CustomDataTensor

Tensor with custom binary data and element type.

func NewCustomDataTensor(s Shape, data []byte, dataType TensorElementDataType) (*CustomDataTensor, error)

Methods:

func (t *CustomDataTensor) GetData() []byte
func (t *CustomDataTensor) Destroy() error

Container Types

Sequence

ONNX sequence container.

func NewSequence(contents []Value) (*Sequence, error)

Methods:

func (s *Sequence) GetValues() ([]Value, error)
func (s *Sequence) Destroy() error

Map

ONNX map container.

func NewMap(keys, values Value) (*Map, error)
func NewMapFromGoMap[K, V TensorData](m map[K]V) (*Map, error)

Methods:

func (m *Map) GetKeysAndValues() (Value, Value, error)
func (m *Map) Destroy() error
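
A minimal sketch building an ONNX map value from a Go map (common for classifier label/probability outputs):

scores, err := ort.NewMapFromGoMap(map[int64]float32{0: 0.9, 1: 0.1})
if err != nil {
    panic(err)
}
defer scores.Destroy()

keys, values, _ := scores.GetKeysAndValues()
_ = keys   // keys and values are returned as Value instances
_ = values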

Sessions

AdvancedSession

Static session with pre-bound input/output tensors.

func NewAdvancedSession(onnxFilePath string, inputNames, outputNames []string,
    inputs, outputs []Value, options *SessionOptions) (*AdvancedSession, error)
func NewAdvancedSessionWithONNXData(onnxData []byte, inputNames, outputNames []string,
    inputs, outputs []Value, options *SessionOptions) (*AdvancedSession, error)

Methods:

func (s *AdvancedSession) Run() error
func (s *AdvancedSession) RunWithOptions(opts *RunOptions) error
func (s *AdvancedSession) GetModelMetadata() (*ModelMetadata, error)
func (s *AdvancedSession) Destroy() error

DynamicAdvancedSession

Dynamic session where inputs/outputs are specified at runtime.

func NewDynamicAdvancedSession(onnxFilePath string, inputNames, outputNames []string,
    options *SessionOptions) (*DynamicAdvancedSession, error)
func NewDynamicAdvancedSessionWithONNXData(onnxData []byte, inputNames, outputNames []string,
    options *SessionOptions) (*DynamicAdvancedSession, error)

Methods:

func (s *DynamicAdvancedSession) Run(inputs, outputs []Value) error
func (s *DynamicAdvancedSession) RunWithOptions(inputs, outputs []Value, opts *RunOptions) error
func (s *DynamicAdvancedSession) RunWithBinding(b *IoBinding) error
func (s *DynamicAdvancedSession) CreateIoBinding() (*IoBinding, error)
func (s *DynamicAdvancedSession) GetModelMetadata() (*ModelMetadata, error)
func (s *DynamicAdvancedSession) Destroy() error
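
A minimal sketch; the tensor setup mirrors the Basic Inference example above, and the "input"/"output" names and shapes are assumptions about the model:

session, err := ort.NewDynamicAdvancedSession("model.onnx",
    []string{"input"}, []string{"output"}, nil)
if err != nil {
    panic(err)
}
defer session.Destroy()

// Tensors are supplied per call instead of being bound at creation time.
inputTensor, _ := ort.NewEmptyTensor[float32](ort.NewShape(1, 3, 224, 224))
outputTensor, _ := ort.NewEmptyTensor[float32](ort.NewShape(1, 1000))
defer inputTensor.Destroy()
defer outputTensor.Destroy()

if err := session.Run([]ort.Value{inputTensor}, []ort.Value{outputTensor}); err != nil {
    panic(err)
}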

IoBinding

I/O binding for optimized memory management.

func (s *DynamicAdvancedSession) CreateIoBinding() (*IoBinding, error)

Methods:

func (b *IoBinding) BindInput(name string, value Value) error
func (b *IoBinding) BindOutput(name string, value Value) error
func (b *IoBinding) GetBoundOutputNames() ([]string, error)
func (b *IoBinding) GetBoundOutputValues() ([]Value, error)
func (b *IoBinding) ClearBoundInputs()
func (b *IoBinding) ClearBoundOutputs()
func (b *IoBinding) Destroy() error
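
A minimal sketch, reusing the dynamic session and tensors from the previous example:

binding, err := session.CreateIoBinding()
if err != nil {
    panic(err)
}
defer binding.Destroy()

binding.BindInput("input", inputTensor)
binding.BindOutput("output", outputTensor)

// Run using the pre-bound inputs/outputs instead of passing slices each call.
if err := session.RunWithBinding(binding); err != nil {
    panic(err)
}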

Session Options

SessionOptions

func NewSessionOptions() (*SessionOptions, error)

Methods:

// Execution configuration
func (o *SessionOptions) SetExecutionMode(mode ExecutionMode) error
func (o *SessionOptions) SetGraphOptimizationLevel(level GraphOptimizationLevel) error
func (o *SessionOptions) SetLogSeverityLevel(level LoggingLevel) error
func (o *SessionOptions) SetIntraOpNumThreads(n int) error
func (o *SessionOptions) SetInterOpNumThreads(n int) error
func (o *SessionOptions) SetCpuMemArena(isEnabled bool) error
func (o *SessionOptions) SetMemPattern(isEnabled bool) error

// Session config entries
func (o *SessionOptions) HasSessionConfigEntry(key string) (bool, error)
func (o *SessionOptions) GetSessionConfigEntry(key string) (string, error)
func (o *SessionOptions) AddSessionConfigEntry(key, value string) error

// Execution providers
func (o *SessionOptions) AppendExecutionProviderCUDA(cudaOptions *CUDAProviderOptions) error
func (o *SessionOptions) AppendExecutionProviderTensorRT(tensorrtOptions *TensorRTProviderOptions) error
func (o *SessionOptions) AppendExecutionProviderCoreML(flags uint32) error
func (o *SessionOptions) AppendExecutionProviderCoreMLV2(options map[string]string) error
func (o *SessionOptions) AppendExecutionProviderDirectML(deviceID int) error
func (o *SessionOptions) AppendExecutionProviderOpenVINO(options map[string]string) error
func (o *SessionOptions) AppendExecutionProvider(providerName string, options map[string]string) error

func (o *SessionOptions) Destroy() error

ExecutionMode:

  • ExecutionModeSequential - Sequential execution
  • ExecutionModeParallel - Parallel execution

GraphOptimizationLevel:

  • GraphOptLevelDisableAll - No optimization
  • GraphOptLevelBasic - Basic optimizations
  • GraphOptLevelExtended - Extended optimizations
  • GraphOptLevelAll - All optimizations
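
A minimal configuration sketch combining the methods and enums above:

opts, err := ort.NewSessionOptions()
if err != nil {
    panic(err)
}
defer opts.Destroy()

opts.SetExecutionMode(ort.ExecutionModeParallel)
opts.SetGraphOptimizationLevel(ort.GraphOptLevelAll)
opts.SetIntraOpNumThreads(4)
opts.SetInterOpNumThreads(2)

session, err := ort.NewDynamicAdvancedSession("model.onnx",
    []string{"input"}, []string{"output"}, opts)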

RunOptions

func NewRunOptions() (*RunOptions, error)

Methods:

func (o *RunOptions) Terminate() error      // Request termination
func (o *RunOptions) UnsetTerminate() error // Clear termination flag
func (o *RunOptions) Destroy() error
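
For example, cancelling a long-running inference from another goroutine. A minimal sketch; it assumes the standard "time" package and a dynamic session with prepared inputs and outputs:

runOpts, err := ort.NewRunOptions()
if err != nil {
    panic(err)
}
defer runOpts.Destroy()

// Ask the runtime to stop the in-flight run after a timeout.
go func() {
    time.Sleep(5 * time.Second)
    runOpts.Terminate()
}()

err = session.RunWithOptions(inputs, outputs, runOpts)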

Execution Providers

CUDAProviderOptions

func NewCUDAProviderOptions() (*CUDAProviderOptions, error)

Methods:

func (o *CUDAProviderOptions) Update(options map[string]string) error
func (o *CUDAProviderOptions) Destroy() error

Common options:

  • "device_id" - GPU device ID
  • "gpu_mem_limit" - Memory limit in bytes
  • "arena_extend_strategy" - Memory allocation strategy

TensorRTProviderOptions

func NewTensorRTProviderOptions() (*TensorRTProviderOptions, error)

Methods:

func (o *TensorRTProviderOptions) Update(options map[string]string) error
func (o *TensorRTProviderOptions) Destroy() error

Common options:

  • "device_id" - GPU device ID
  • "trt_max_workspace_size" - Maximum workspace size
  • "trt_fp16_enable" - Enable FP16 precision
  • "trt_int8_enable" - Enable INT8 precision

Model Information

GetInputOutputInfo

func GetInputOutputInfo(path string) ([]InputOutputInfo, []InputOutputInfo, error)
func GetInputOutputInfoWithOptions(path string, options *SessionOptions) ([]InputOutputInfo, []InputOutputInfo, error)
func GetInputOutputInfoWithONNXData(data []byte) ([]InputOutputInfo, []InputOutputInfo, error)

Returns input and output tensor information for a model.

InputOutputInfo

type InputOutputInfo struct {
    Name       string
    DataType   TensorElementDataType
    Dimensions []int64
}

func (n *InputOutputInfo) String() string
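
A minimal sketch that prints a model's inputs and outputs:

inputs, outputs, err := ort.GetInputOutputInfo("model.onnx")
if err != nil {
    panic(err)
}
for _, in := range inputs {
    fmt.Printf("input  %s: type=%v dims=%v\n", in.Name, in.DataType, in.Dimensions)
}
for _, out := range outputs {
    fmt.Printf("output %s: type=%v dims=%v\n", out.Name, out.DataType, out.Dimensions)
}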

GetModelMetadata

func GetModelMetadata(path string) (*ModelMetadata, error)

ModelMetadata Methods

func (m *ModelMetadata) GetProducerName() (string, error)
func (m *ModelMetadata) GetGraphName() (string, error)
func (m *ModelMetadata) GetDomain() (string, error)
func (m *ModelMetadata) GetDescription() (string, error)
func (m *ModelMetadata) GetVersion() (int64, error)
func (m *ModelMetadata) GetCustomMetadataMapKeys() ([]string, error)
func (m *ModelMetadata) LookupCustomMetadataMap(key string) (string, bool, error)
func (m *ModelMetadata) Destroy() error
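
A minimal sketch reading a few metadata fields:

meta, err := ort.GetModelMetadata("model.onnx")
if err != nil {
    panic(err)
}
defer meta.Destroy()

graphName, _ := meta.GetGraphName()
version, _ := meta.GetVersion()
desc, _ := meta.GetDescription()
fmt.Println(graphName, version, desc)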

Model Encryption

Basic Encryption

// Encrypt model file
func EncryptModel(inputPath, outputPath string, key []byte) error

// Decrypt model file
func DecryptModel(inputPath, outputPath string, key []byte) error

// Encrypt model data in memory
func EncryptModelData(plaintext, key []byte) ([]byte, error)

// Decrypt model data in memory
func DecryptModelData(data, key []byte) ([]byte, error)

// Generate random 32-byte encryption key
func GenerateEncryptionKey() ([]byte, error)

Encryption format: AES-256-GCM with magic header ORTENC01
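
A minimal sketch of the one-time encryption step using a freshly generated key:

key, err := ort.GenerateEncryptionKey() // random 32-byte AES-256 key
if err != nil {
    panic(err)
}
if err := ort.EncryptModel("model.onnx", "model.onnx.enc", key); err != nil {
    panic(err)
}
// Store the key securely; it is required by DecryptModel and the
// *FromEncryptedFile / *FromEncryptedData session constructors.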

Encrypted Sessions

// Create session from encrypted file
func NewAdvancedSessionFromEncryptedFile(encryptedPath string, key []byte,
    inputNames, outputNames []string, inputs, outputs []Value,
    options *SessionOptions) (*AdvancedSession, error)
func NewDynamicAdvancedSessionFromEncryptedFile(encryptedPath string, key []byte,
    inputNames, outputNames []string, options *SessionOptions) (*DynamicAdvancedSession, error)

// Create session from encrypted data
func NewAdvancedSessionFromEncryptedData(encryptedData, key []byte,
    inputNames, outputNames []string, inputs, outputs []Value,
    options *SessionOptions) (*AdvancedSession, error)
func NewDynamicAdvancedSessionFromEncryptedData(encryptedData, key []byte,
    inputNames, outputNames []string, options *SessionOptions) (*DynamicAdvancedSession, error)

// Get model info from encrypted file
func GetInputOutputInfoFromEncryptedFile(encryptedPath string, key []byte) (
    []InputOutputInfo, []InputOutputInfo, error)

Machine-Bound Encryption

// Derive encryption key from machine ID
func DeriveModelKey(machineID, moduleName string, salt []byte) []byte

// Derive key from Authorization object
func DeriveModelKeyFromAuth(auth Authorization, moduleName string, salt []byte) ([]byte, error)

// Encrypt model for specific machine
func EncryptModelForMachine(inputPath, outputPath, machineID, moduleName string, salt []byte) error

// Encrypt model data for specific machine
func EncryptModelDataForMachine(plaintext []byte, machineID, moduleName string, salt []byte) ([]byte, error)

Authorization Integration

Integration with machineid/cert package for enterprise licensing.

Authorization Interface

type Authorization interface {
    Validate(machineID string) error
    HasModule(name string) bool
    GetModuleQuota(name string) int
    ValidateModule(name string) error
    ExpiresAt() time.Time
    MachineIDs() []string
}

SecurityChecker Interface

type SecurityChecker interface {
    Check() error // Anti-debugging, VM detection, etc.
}

AuthorizedModelConfig

type AuthorizedModelConfig struct {
    ModuleName         string          // Module name for authorization
    MachineID          string          // Current machine ID
    Authorization      Authorization   // Authorization object
    SecurityChecker    SecurityChecker // Optional security checker
    KeyDerivationSalt  []byte          // Salt for key derivation
    ValidateOnEveryRun bool            // Validate before each inference
    QuotaTracker       QuotaTracker    // Optional quota tracking
}

AuthorizedSession

func NewAuthorizedSession(encryptedPath string, config AuthorizedModelConfig,
    inputNames, outputNames []string, options *SessionOptions) (*AuthorizedSession, error)
func NewAuthorizedSessionFromData(encryptedData []byte, config AuthorizedModelConfig,
    inputNames, outputNames []string, options *SessionOptions) (*AuthorizedSession, error)

Methods:

func (s *AuthorizedSession) Run(inputs, outputs []Value) error
func (s *AuthorizedSession) GetSession() *DynamicAdvancedSession
func (s *AuthorizedSession) Destroy() error

QuotaTracker Interface

type QuotaTracker interface {
    Increment(moduleName string) (int, error)
    GetCount(moduleName string) int
    Reset(moduleName string)
}

// Built-in implementation
func NewInMemoryQuotaTracker() *InMemoryQuotaTracker

ModelAuthorizationInfo

type ModelAuthorizationInfo struct {
    ModuleName  string   `json:"module_name"`
    Salt        []byte   `json:"salt"`
    ModelHash   []byte   `json:"model_hash,omitempty"`
    InputNames  []string `json:"input_names"`
    OutputNames []string `json:"output_names"`
    Description string   `json:"description,omitempty"`
    Version     string   `json:"version,omitempty"`
}

func (info *ModelAuthorizationInfo) ValidateModelHash(encryptedData []byte) bool

Constants and Types

TensorElementDataType

const (
    TensorElementDataTypeUndefined TensorElementDataType = iota
    TensorElementDataTypeFloat
    TensorElementDataTypeUint8
    TensorElementDataTypeInt8
    TensorElementDataTypeUint16
    TensorElementDataTypeInt16
    TensorElementDataTypeInt32
    TensorElementDataTypeInt64
    TensorElementDataTypeString
    TensorElementDataTypeBool
    TensorElementDataTypeFloat16
    TensorElementDataTypeDouble
    TensorElementDataTypeUint32
    TensorElementDataTypeUint64
    TensorElementDataTypeComplex64
    TensorElementDataTypeComplex128
    TensorElementDataTypeBFloat16
)

ONNXType

const (
    ONNXTypeUnknown ONNXType = iota
    ONNXTypeTensor
    ONNXTypeSequence
    ONNXTypeMap
    ONNXTypeOpaque
    ONNXTypeSparseTensor
    ONNXTypeOptional
)

LoggingLevel

const (
    LoggingLevelVerbose LoggingLevel = iota
    LoggingLevelInfo
    LoggingLevelWarning
    LoggingLevelError
    LoggingLevelFatal
)

Encryption Constants

const (
    EncryptedModelMagic = "ORTENC01" // Magic header for encrypted files
    AESKeySize          = 32         // AES-256 key size
    GCMNonceSize        = 12         // GCM nonce size
)

Deprecated APIs

The following APIs are deprecated but maintained for backward compatibility:

// Use AdvancedSession instead
type Session[T TensorData] struct{}
type DynamicSession[In, Out TensorData] struct{}
func NewSession[T TensorData](...) (*Session[T], error)
func NewDynamicSession[In, Out TensorData](...) (*DynamicSession[In, Out], error)
func NewSessionWithONNXData[T TensorData](...) (*Session[T], error)
func NewDynamicSessionWithONNXData[In, Out TensorData](...) (*DynamicSession[In, Out], error)

// Training API (deprecated in onnxruntime 1.20+)
type TrainingSession struct{}
func NewTrainingSession(...) (*TrainingSession, error)
func IsTrainingSupported() bool // Always returns false

Examples

CUDA Acceleration

cudaOpts, _ := ort.NewCUDAProviderOptions()
defer cudaOpts.Destroy()
cudaOpts.Update(map[string]string{
    "device_id": "0",
})

sessionOpts, _ := ort.NewSessionOptions()
defer sessionOpts.Destroy()
sessionOpts.AppendExecutionProviderCUDA(cudaOpts)

session, _ := ort.NewAdvancedSession("model.onnx",
    inputNames, outputNames, inputs, outputs, sessionOpts)

TensorRT Acceleration

trtOpts, _ := ort.NewTensorRTProviderOptions()
defer trtOpts.Destroy()
trtOpts.Update(map[string]string{
    "device_id":              "0",
    "trt_fp16_enable":        "1",
    "trt_max_workspace_size": "2147483648",
})

sessionOpts, _ := ort.NewSessionOptions()
defer sessionOpts.Destroy()
sessionOpts.AppendExecutionProviderTensorRT(trtOpts)

session, _ := ort.NewDynamicAdvancedSession("model.onnx",
    inputNames, outputNames, sessionOpts)

Authorization with machineid/cert

import (
    "github.com/darkit/machineid"
    "github.com/darkit/machineid/cert"

    ort "cnb.cool/svn/onnxruntime"
)

func main() {
    // Get machine ID
    machineID, _ := machineid.ID()

    // Create authorizer and load certificate
    authorizer, _ := cert.NewAuthorizer().
        WithCA(caPEM, caKeyPEM).
        Build()
    certAuth, _ := cert.NewCertAuthorization(certPEM, authorizer)

    // Create security checker
    securityMgr := cert.NewSecurityManager(cert.SecurityLevelAdvanced)

    // Configure authorized session
    config := ort.AuthorizedModelConfig{
        ModuleName:         "ai.model.classifier",
        MachineID:          machineID,
        Authorization:      certAuth, // Implements ort.Authorization
        SecurityChecker:    securityMgr,
        KeyDerivationSalt:  []byte("app-salt"),
        ValidateOnEveryRun: true,
        QuotaTracker:       ort.NewInMemoryQuotaTracker(),
    }

    session, _ := ort.NewAuthorizedSession("model.onnx.enc", config,
        inputNames, outputNames, nil)
    defer session.Destroy()

    // Run with authorization checks
    session.Run(inputs, outputs)
}

Testing

# Run all tests
go test -v

# Run with benchmarks
go test -v -bench=.

# Use custom onnxruntime library
ONNXRUNTIME_SHARED_LIBRARY_PATH=/path/to/libonnxruntime.so go test -v

Version Compatibility

This library uses ONNX Runtime C API version 1.23.2. To use a different version:

  1. Replace onnxruntime_c_api.h and onnxruntime_ep_c_api.h with your version
  2. Replace the shared library in test_data/
  3. Verify DirectML API compatibility if needed

License

See the original yalue/onnxruntime_go repository for license information.

Related Projects

  • yalue/onnxruntime_go - the original Go wrapper that this library extends
  • darkit/machineid (and its cert subpackage) - machine ID and certificate-based licensing used for machine binding and authorization

Static Library Build

By default, this library loads ONNX Runtime dynamically at runtime. To use a statically linked ONNX Runtime library, use the static build tag.

Build with Static Library

# Set include and library paths
export CGO_CFLAGS="-I/opt/onnxruntime/include"
export CGO_LDFLAGS="-L/opt/onnxruntime/lib -l:libonnxruntime.a -lstdc++ -lm -lpthread -ldl"

# Build with static tag
go build -tags static ./...

Platform-Specific Examples

Linux (CPU only):

export CGO_CFLAGS="-I/opt/onnxruntime/include"
export CGO_LDFLAGS="-L/opt/onnxruntime/lib -l:libonnxruntime.a -lstdc++ -lm -lpthread -ldl"
go build -tags static ./...

Linux (with CUDA):

export CGO_CFLAGS="-I/opt/onnxruntime/include -I/usr/local/cuda/include"
export CGO_LDFLAGS="-L/opt/onnxruntime/lib -L/usr/local/cuda/lib64 \
    -l:libonnxruntime.a -lcudart -lcublas -lcublasLt -lcudnn \
    -lstdc++ -lm -lpthread -ldl"
go build -tags static ./...

Linux (with TensorRT):

export CGO_CFLAGS="-I/opt/onnxruntime/include -I/usr/local/cuda/include -I/opt/TensorRT/include"
export CGO_LDFLAGS="-L/opt/onnxruntime/lib -L/usr/local/cuda/lib64 -L/opt/TensorRT/lib \
    -l:libonnxruntime.a -lcudart -lcublas -lcublasLt -lcudnn \
    -lnvinfer -lnvinfer_plugin -lnvonnxparser \
    -lstdc++ -lm -lpthread -ldl"
go build -tags static ./...

macOS:

export CGO_CFLAGS="-I/opt/onnxruntime/include"
export CGO_LDFLAGS="-L/opt/onnxruntime/lib -lonnxruntime -lc++ \
    -framework Foundation -framework CoreML"
go build -tags static ./...

Windows (MinGW):

set CGO_CFLAGS=-I/opt/onnxruntime/include
set CGO_LDFLAGS=-L/opt/onnxruntime/lib -lonnxruntime -lstdc++ -lm -lpthread
go build -tags static ./...

Code Differences

When using static build, you don't need to call SetSharedLibraryPath():

package main

import (
    ort "cnb.cool/svn/onnxruntime"
)

func main() {
    // Static build: no need to set the library path.
    // ort.SetSharedLibraryPath() is ignored in static mode.

    // Check if using a static build
    if ort.IsStaticBuild() {
        println("Using static library")
    }

    // Initialize as usual
    if err := ort.InitializeEnvironment(); err != nil {
        panic(err)
    }
    defer ort.DestroyEnvironment()

    // ... rest of your code
}

Checking Build Mode

// IsStaticBuild returns true if compiled with -tags static
func IsStaticBuild() bool

Hybrid Linking (Static + Dynamic)

You can mix static and dynamic linking for optimal deployment:

Linker Flags

# -l:libxxx.a     Force static linking for a specific library
# -lxxx           Prefer dynamic, fall back to static
# -Wl,-Bstatic    Static linking for subsequent libraries
# -Wl,-Bdynamic   Dynamic linking for subsequent libraries

Example: ONNX Runtime Static + CUDA Dynamic (Recommended)

export CGO_CFLAGS="-I/opt/onnxruntime/include -I/usr/local/cuda/include"
export CGO_LDFLAGS="\
    -L/opt/onnxruntime/lib \
    -L/usr/local/cuda/lib64 \
    -Wl,-Bstatic -lonnxruntime \
    -Wl,-Bdynamic -lcudart -lcublas -lcublasLt -lcudnn \
    -lstdc++ -lm -lpthread -ldl"
go build -tags static ./...

Example: ONNX Runtime Static + TensorRT Dynamic

export CGO_LDFLAGS="\
    -L/opt/onnxruntime/lib \
    -L/usr/local/cuda/lib64 \
    -L/opt/TensorRT/lib \
    -Wl,-Bstatic -lonnxruntime \
    -Wl,-Bdynamic -lcudart -lcublas -lcudnn \
    -lnvinfer -lnvinfer_plugin \
    -lstdc++ -lm -lpthread -ldl"

Example: Maximum Static (Core Static, System Dynamic)

export CGO_LDFLAGS="\
    -L/opt/onnxruntime/lib \
    -L/usr/local/cuda/lib64/stubs \
    -Wl,-Bstatic \
    -lonnxruntime \
    -lcudart_static -lcublas_static -lcublasLt_static \
    -Wl,-Bdynamic \
    -ldl -lpthread -lrt -lm"

Verify Linking

# Linux - Check dynamic dependencies
ldd ./myapp

# Check all symbols (including static)
nm ./myapp | grep -i onnx

# macOS
otool -L ./myapp

Hybrid Linking Trade-offs

Pros                                           Cons
Core functionality stays portable              More complex configuration
GPU libraries can be updated with the system   Requires understanding linker behavior
Smaller executable than a fully static build   Some libraries still need to be deployed
Balances portability and flexibility           Version matching needed when debugging

Recommended Configuration

# onnxruntime static (portable core)
# CUDA/cuDNN dynamic (follow driver version)
# System libs dynamic (libc, libm, libpthread)
CGO_LDFLAGS="\
    -Wl,-Bstatic -l:libonnxruntime.a \
    -Wl,-Bdynamic -lcudart -lcudnn -lcublas \
    -lstdc++ -lm -lpthread -ldl"

This produces a binary that:

  • Does not require distributing libonnxruntime.so
  • Requires matching CUDA driver on target machine
  • Uses target machine's system libs for compatibility
