
RuntimeError: Unable to cast Python instance of type <class 'numpy.ndarray'> to C++ type '?' #16557

@ares89

🐛 Describe the bug

The command is:
nohup python examples/qualcomm/oss_scripts/llama/llama.py -b build-android -m SA8295 --temperature 0 --model_mode hybrid --max_seq_len 1024 --prefill_ar_len 128 --decoder_model qwen3-0_6b --compile_only --prompt "你好啊" --checkpoint /home/xxx/.cache/meta_checkpoints/Qwen_Qwen3-0.6B.pth 1>logs/qwen3_8295.log 2>&1 &

The log is as below. I have verified that the pybindings are installed:

[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 1
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in SAVE MODE.
[INFO] [Qnn ExecuTorch]: Running level=3 optimization.
NOTE: Using slow Hadamard transform for SpinQuant. For better performance on GPU, install fast_hadamard_transform: pip install git+https://github.com/Dao-AILab/fast-hadamard-transform.git
QNN_SDK_ROOT=/home/xxx/executorch/backends/qualcomm/sdk/qnn
[QNN Partitioner Op Support]: aten.squeeze_copy.dims | True
[QNN Partitioner Op Support]: aten.permute_copy.default | True
Traceback (most recent call last):
File "/home/xxx/executorch/examples/qualcomm/oss_scripts/llama/llama.py", line 1343, in main
export_llama(args)
File "/home/xxx/executorch/examples/qualcomm/oss_scripts/llama/llama.py", line 1317, in export_llama
compile(args, decoder_model_config, pte_filename, tokenizer, chat_template)
File "/home/xxx/executorch/examples/qualcomm/oss_scripts/llama/llama.py", line 811, in compile
edge_prog_mgr = to_edge_transform_and_lower_to_qnn(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/backends/qualcomm/utils/utils.py", line 445, in to_edge_transform_and_lower_to_qnn
return to_edge_transform_and_lower(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/exir/program/_program.py", line 114, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/exir/program/_program.py", line 1371, in to_edge_transform_and_lower
edge_manager = edge_manager.to_backend(method_to_partitioner)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/exir/program/_program.py", line 114, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/exir/program/_program.py", line 1672, in to_backend
new_edge_programs = to_backend(method_to_programs_and_partitioners)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/envs/execu2/lib/python3.12/functools.py", line 912, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/exir/backend/backend_api.py", line 721, in _
partitioner_result = partitioner_instance(fake_edge_program)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/exir/backend/partitioner.py", line 66, in __call__
return self.partition(exported_program)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/backends/qualcomm/partition/qnn_partitioner.py", line 199, in partition
partitions = self.generate_partitions(edge_program)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/backends/qualcomm/partition/qnn_partitioner.py", line 164, in generate_partitions
return generate_partitions_from_list_of_nodes(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/exir/backend/canonical_partitioners/pattern_op_partitioner.py", line 54, in generate_partitions_from_list_of_nodes
partition_list = capability_partitioner.propose_partitions()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/envs/execu2/lib/python3.12/site-packages/torch/fx/passes/infra/partitioner.py", line 226, in propose_partitions
if self._is_node_supported(node) and node not in assignment:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/miniconda3/envs/execu2/lib/python3.12/site-packages/torch/fx/passes/infra/partitioner.py", line 87, in _is_node_supported
return self.operator_support.is_node_supported(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/backends/qualcomm/partition/qnn_partitioner.py", line 100, in is_node_supported
op_wrapper = self.node_visitors[node.target.name].define_node(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/backends/qualcomm/builders/op_conv.py", line 140, in define_node
filter_tensor_wrapper = self.define_tensor(
^^^^^^^^^^^^^^^^^^^
File "/home/xxx/executorch/backends/qualcomm/builders/node_visitor.py", line 462, in define_tensor
tensor_wrapper = PyQnnWrapper.TensorWrapper(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Unable to cast Python instance of type <class 'numpy.ndarray'> to C++ type '?' (#define PYBIND11_DETAILED_ERROR_MESSAGES or compile in debug mode for details)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/xxx/executorch/examples/qualcomm/oss_scripts/llama/llama.py", line 1354, in <module>
main()
File "/home/xxx/executorch/examples/qualcomm/oss_scripts/llama/llama.py", line 1349, in main
raise Exception(e)
Exception: Unable to cast Python instance of type <class 'numpy.ndarray'> to C++ type '?' (#define PYBIND11_DETAILED_ERROR_MESSAGES or compile in debug mode for details)
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters
[INFO] [Qnn ExecuTorch]: Destroy Qnn context
[INFO] [Qnn ExecuTorch]: Destroy Qnn device
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend
[INFO] [Qnn ExecuTorch]: Destroy Qnn backend parameters

python -c "import pybind11; print(pybind11.__version__)"

3.0.1

python -c "import numpy as np; print(np.__version__)"

2.4.1
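This kind of pybind11 cast failure typically means the `numpy.ndarray` handed to the `TensorWrapper` binding has a dtype or memory layout the C++ type caster does not accept; numpy 2.x changed some dtype behaviors, which can expose such mismatches. As a minimal, hypothetical sketch of the kind of pre-check one can run (the `SUPPORTED_DTYPES` set and `looks_castable` helper are illustrative assumptions, not part of ExecuTorch or PyQnnWrapper):

```python
import numpy as np

# Hypothetical set of dtypes a TensorWrapper-style native binding might accept;
# the real set accepted by PyQnnWrapper is not visible in this report.
SUPPORTED_DTYPES = {np.float32, np.int32, np.int8, np.uint8, np.uint16, np.int64}

def looks_castable(arr: np.ndarray) -> bool:
    """Return True if the array has a dtype in the assumed supported set
    and a C-contiguous buffer, two properties pybind11 array casters
    commonly require before handing the data pointer to C++."""
    return arr.dtype.type in SUPPORTED_DTYPES and arr.flags["C_CONTIGUOUS"]

print(looks_castable(np.zeros((4, 4), dtype=np.float32)))    # True
print(looks_castable(np.zeros((4, 4), dtype=np.float64)))    # False: dtype not in set
print(looks_castable(np.zeros((4, 4), dtype=np.float32).T))  # False: not C-contiguous
```

A check like this can narrow down whether the failing array (here, the convolution filter tensor built in `op_conv.py`) reaches the binding with an unexpected dtype after quantization, versus the binding itself being built against an incompatible pybind11/numpy combination.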

Versions

Collecting environment information...
PyTorch version: 2.9.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35

Python version: 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:16:04) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-160-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090

Nvidia driver version: 550.163.01
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Caching allocator config: N/A

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7763 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3530.4929
CPU min MHz: 1500.0000
BogoMIPS: 4891.30
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-255
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsa: Vulnerable: Clear CPU buffers attempted, no microcode
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] executorch==1.0.1a0+087568a
[pip3] numpy==2.4.1
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pytorch_tokenizers==1.0.1
[pip3] torch==2.9.0
[pip3] torchao==0.14.0+git02941240f
[pip3] torchaudio==2.9.0+cpu
[pip3] torchdata==0.11.0+cpu
[pip3] torchsr==1.0.4
[pip3] torchtune==0.7.0+cpu
[pip3] torchvision==0.24.0+cpu
[pip3] triton==3.5.0
[conda] No relevant packages

cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin

Labels: module: qnn (Issues related to Qualcomm's QNN delegate and code under backends/qualcomm), partner: qualcomm (For backend delegation, kernels, demo, etc. from the 3rd-party partner, Qualcomm)
