
[FEA] Supporting DLPacks #122

@isVoid

Description

Problem:

Currently, Numba-CUDA kernels interoperate with external arrays via the CUDA Array Interface (CAI). CAI is an extension of the existing NumPy array interface standard and reuses NumPy data types to describe array contents. NumPy dtypes currently lack exotic data types such as bfloat16, fp8, etc. This bars interoperability between DL objects and Numba kernels (torch.bfloat16 tensors, for example).
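
A minimal sketch of the current CAI-based path, assuming CuPy is installed as the producer. Numba consumes the array through `__cuda_array_interface__`, so only dtypes that NumPy can represent are expressible:

```python
import cupy as cp
from numba import cuda

@cuda.jit
def scale(arr, factor):
    # One thread per element; arr is viewed through the CUDA Array Interface.
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

x = cp.arange(1024, dtype=cp.float32)       # float32 has a NumPy dtype: works
scale[(x.size + 127) // 128, 128](x, 2.0)   # CuPy array passed directly via CAI

# A bfloat16 tensor (e.g. torch.bfloat16) has no NumPy dtype equivalent,
# so it cannot advertise a valid typestr through the interface.
```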

The community has moved toward third-party extensions of NumPy dtypes such as ml_dtypes. While these have seen some adoption, most of the time the solution does not come as plug-and-play (issues).
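
A short sketch of where the ml_dtypes workaround falls short, assuming ml_dtypes is installed. The dtype exists on the host, but Numba's type system does not recognize it, so it is not plug-and-play:

```python
import numpy as np
import ml_dtypes
from numba import from_dtype

x = np.zeros(8, dtype=ml_dtypes.bfloat16)   # host array with a bfloat16 dtype
print(x.dtype)                              # bfloat16

# Mapping the extension dtype into Numba's type system fails today
# (the exact exception may vary by Numba version).
try:
    from_dtype(np.dtype(ml_dtypes.bfloat16))
except Exception as exc:
    print(f"Numba cannot map this dtype: {exc!r}")
```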

Proposed Solution:

Numba-CUDA kernels should interoperate with DLPack-conformant objects.
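
A hypothetical sketch of what DLPack-based interop could look like from user code; the function name `cuda.from_dlpack` and its signature are assumptions for illustration, not an existing Numba-CUDA API. The idea is to consume the producer's `__dlpack__`/`__dlpack_device__` protocol instead of `__cuda_array_interface__`, so exotic dtypes like bfloat16 can be carried through:

```python
import torch
from numba import cuda

@cuda.jit
def add_one(arr):
    i = cuda.grid(1)
    if i < arr.shape[0]:
        arr[i] += 1

t = torch.ones(1024, dtype=torch.bfloat16, device="cuda")

# Hypothetical: build a Numba device array view from the DLPack exchange,
# preserving the bfloat16 dtype that CAI cannot express today.
d_arr = cuda.from_dlpack(t)
add_one[(t.numel() + 127) // 128, 128](d_arr)
```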

Rationale

This should enable more DL users to adopt Numba as their "last-resort" tool for custom kernel development in the SIMT model. DLPack is widely adopted by various frameworks and is actively adding support for new data types: it added support for bfloat16 in 2020 and is actively adding support for fp8.
