Hi pybind experts, I have what I suspect is a very naive question: is there a way to restrict the type-casting that is performed by pybind11 for numpy arrays? In particular, take a module defined like
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

uint32_t xor_uint32(uint32_t a, uint32_t b) {
    return a ^ b;
}

py::array_t<uint32_t> xor_uint32_vec(const py::array_t<uint32_t>& a, const py::array_t<uint32_t>& b) {
    return py::vectorize(xor_uint32)(a, b);
}

PYBIND11_MODULE(test, m) {
    m.def("xor_uint32", &xor_uint32, "XOR two uint32_t values");
    m.def("xor_uint32_vec", &xor_uint32_vec, "XOR two arrays of uint32_t values");
}
(My actual application is obviously much more complicated than an XOR.)
Is it possible to restrict the types that can be passed in from Python? I am looking for two possible behaviors:
1. Totally strict typing, i.e. if I try to pass a numpy array with a dtype other than np.uint32, I get an error.
2. Conservative typing, i.e. only allow "safe" type-casting. For example, passing an array of np.uint16 would succeed, but passing an array of np.uint64 would fail.
In either scenario I would expect that passing non-integral types (e.g. np.float32 and np.float64) should fail. I would hope for an error message like this one from numpy:
>>> np.arange(10, dtype=np.int64) ^ np.random.random(10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
Instead, the current behavior is to simply cast the inputs to the expected dtype of np.uint32 when the function is called. I'm particularly concerned about "unsafe" casting, e.g. from floating-point types to integral types, or from larger integer types to smaller ones. I'm also concerned about the implicit copy that must be taking place, as this is a performance-sensitive application.
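For concreteness, here is a minimal sketch of the two behaviors I am after, based on my (possibly wrong) reading of py::array::forcecast and py::arg().noconvert(). The names xor_uint32_vec_safe, xor_uint32_vec_strict, and the u32_array alias are made up purely for illustration:

#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

uint32_t xor_uint32(uint32_t a, uint32_t b) {
    return a ^ b;
}

// Hypothetical "conservative" variant: omitting py::array::forcecast (the
// default second template argument of py::array_t) should, as I understand it,
// only convert the argument when numpy's 'safe' casting rule allows it, so
// np.uint16 would still be accepted (via a copy) while np.float64 or np.uint64
// would be rejected.
using u32_array = py::array_t<uint32_t, py::array::c_style>;

py::array_t<uint32_t> xor_uint32_vec_safe(const u32_array& a, const u32_array& b) {
    return py::vectorize(xor_uint32)(a, b);
}

py::array_t<uint32_t> xor_uint32_vec(const py::array_t<uint32_t>& a, const py::array_t<uint32_t>& b) {
    return py::vectorize(xor_uint32)(a, b);
}

PYBIND11_MODULE(test, m) {
    m.def("xor_uint32_vec_safe", &xor_uint32_vec_safe, "XOR two arrays of uint32_t values");
    // Hypothetical "strict" variant: py::arg().noconvert() should make pybind11
    // refuse anything that is not already a uint32 ndarray and raise a TypeError
    // instead of casting and copying.
    m.def("xor_uint32_vec_strict", &xor_uint32_vec, "XOR two arrays of uint32_t values",
          py::arg("a").noconvert(), py::arg("b").noconvert());
}

I am not sure whether this is the intended mechanism, or whether the noconvert variant would also avoid the implicit copy when the input is already a contiguous uint32 array.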
Any help would be greatly appreciated!