Error when running inference in C++: onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Sigmoid node. #22836
Labels: ep:CUDA (issues related to the CUDA execution provider)
Hello, I encountered the following error message when loading and running a model on the GPU in C++:
onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running Sigmoid node. Name:'/model.0/act/Sigmoid' Status Message: CUDA error cudaErrorNoKernelImageForDevice:no kernel image is available for execution on the device.
My GPU is an NVIDIA GeForce GT 730 with compute capability 3.5. The driver version is 475.14, CUDA is 11.4, cuDNN is 8.2.2, OpenCV-CUDA is 4.7.0, and ONNX Runtime is 1.12.0 (I have also tested 1.11.0). The model loads and runs inference correctly on the CPU, but the error above occurs when using the GPU. I have checked the versions of the dynamic libraries and found no compatibility issues. Given the error above, I am not sure how to resolve this. Does anyone have any suggestions?
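For reference, the compute capability the driver reports for the card can be confirmed with the CUDA runtime API. This is a minimal sketch (not part of the original report) that queries device 0:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Sketch only: print the compute capability of device 0, to compare against
// the GPU architectures the installed ONNX Runtime CUDA build targets.
int main() {
    cudaDeviceProp prop{};
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    return 0;
}
```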
Model: yolov8n.onnx
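For context, here is a minimal sketch of how a session with the CUDA execution provider is typically created with the ONNX Runtime 1.12 C++ API. The reporter's actual code is not shown, so the model path and provider options below are assumptions:

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
    // Hedged sketch of a typical GPU session setup, not the reporter's code.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov8n");
    Ort::SessionOptions session_options;

    // Attach the CUDA execution provider on device 0; GPU kernels such as
    // the Sigmoid node named in the error are dispatched through this provider.
    OrtCUDAProviderOptions cuda_options{};
    cuda_options.device_id = 0;
    session_options.AppendExecutionProvider_CUDA(cuda_options);

    // Model path assumed from the report above.
    Ort::Session session(env, ORT_TSTR("yolov8n.onnx"), session_options);
    return 0;
}
```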