I think it may be getting close to time to promote JuliaImages a bit more heavily. I still have a few things I want to do first with respect to cleaning up the code base (getting a release of Images.jl out that is compatible with ImageCore 0.9 is high on the list, as is the `abs2` issue). But it will take some time to assemble good benchmarks, and I think it's best to do that out in the open so that others can pitch in. @mkitti, you'll note from the README that I'd love to have Fiji benchmarks; is that by any chance something you'd be able and interested in contributing?
FWIW, here's where we are on my machine today:


Note the log-scaling. The "generic" tests (top) are expressed per-voxel, the "specialized" tests (below) are absolute.
Together with OpenCV, we pretty much dominate already (in one case by more than three orders of magnitude), but there are a few exceptions that merit investigation. (That Gaussian blur algorithm in OpenCV is worth emulating...) Missing points generally represent unsupported operations; the "specialized" tests for OpenCV are still a WIP, but OpenCV supports very little 3D, so a lot of points will be missing anyway.
This also doesn't capture some of our other advantages, like scaling to big data. I've been playing a bit with dask, but it seems like a pretty far cry from what we offer; scikit-image has a habit of immediately converting everything to a NumPy array.
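To illustrate the kind of laziness I mean (a hypothetical sketch, not part of the benchmark suite, using MappedArrays.jl from the JuliaImages ecosystem): most of our algorithms accept any `AbstractArray`, so an elementwise view can stand in for a converted copy and the full-precision version of a large volume is never materialized.

```julia
using MappedArrays  # lazy elementwise views over an existing array

# Stand-in for a large 3D volume; imagine this is memory-mapped from disk.
A = rand(UInt8, 64, 64, 64)

# Lazy "conversion" to Float32: no Float32 copy is allocated;
# each element is computed on access.
B = mappedarray(x -> Float32(x) / 255f0, A)

B[1, 1, 1]   # converted on the fly, same indices and size as A
```

An eager pipeline would allocate the whole `Float32` array up front; here `B` is just a view-like wrapper, which is what makes chaining operations over big data feasible.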