Conversation

@kleisauke (Member) commented Dec 20, 2025

I noticed that some of our Highway paths were not big-endian safe. This PR fixes that.

Verified for correctness under QEMU emulation using this Dockerfile:

FROM --platform=linux/s390x fedora:latest

RUN \
  dnf update -y && \
  dnf install -y \
    # build dependencies
    gcc-c++ \
    meson \
    git \
    # libvips dependencies (minimal)
    glib2-devel \
    expat-devel \
    highway-devel \
    libjpeg-turbo-devel \
    libpng-devel

WORKDIR /opt

RUN git clone https://github.com/libvips/libvips.git

WORKDIR /opt/libvips

ENV \
  CFLAGS="-march=z15 -mzvector" \
  CXXFLAGS="-march=z15 -mzvector"

RUN \
  meson setup build --prefix=/usr -Ddeprecated=false -Dexamples=false && \
  meson compile -Cbuild && \
  meson install -Cbuild

WORKDIR /opt

RUN \
  cat <<EOT > blur.mat
3 3 9
1 1 1
1 1 1
1 1 1
EOT
RUN \
  cat <<EOT > 3x3.mat
3 3
0 0 0
0 0 0
0 0 0
EOT
ADD https://wsrv.nl/zebra.jpg /opt
ADD https://github.com/lovell/sharp/raw/main/test/fixtures/dot-and-lines.png /opt

and running:

$ podman build -t fedora-s390x .
$ podman run --rm -e QEMU_CPU=max,vxeh2=on fedora-s390x vips --targets
builtin targets:   Z15
supported targets: Z15
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vips convi zebra.jpg host/convi.jpg blur.mat --vips-info
VIPS-INFO: 14:23:45.815: threadpool completed with 2 workers
VIPS-INFO: 14:23:45.819: threadpool completed with 2 workers
VIPS-INFO: 14:23:45.820: convi: using vector path
VIPS-INFO: 14:23:47.819: threadpool completed with 2 workers
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vips reduceh zebra.jpg host/reduceh.jpg 2 --vips-info
VIPS-INFO: 14:23:48.324: reduceh: 13 point mask
VIPS-INFO: 14:23:48.330: reduceh: using vector path
VIPS-INFO: 14:23:50.155: threadpool completed with 4 workers
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vips reducev zebra.jpg host/reducev.jpg 2 --vips-info
VIPS-INFO: 14:23:50.579: reducev: 13 point mask
VIPS-INFO: 14:23:50.586: reducev: using vector path
VIPS-INFO: 14:23:50.586: reducev sequential line cache
VIPS-INFO: 14:23:52.753: threadpool completed with 4 workers
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vips shrinkv zebra.jpg host/shrinkv.jpg 2 --vips-info
VIPS-INFO: 14:23:53.242: shrinkv: using vector path
VIPS-INFO: 14:23:53.244: shrinkv sequential line cache
VIPS-INFO: 14:23:55.273: threadpool completed with 5 workers
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vips shrinkh zebra.jpg host/shrinkh.jpg 2 --vips-info
VIPS-INFO: 14:23:55.783: shrinkh: using vector path
VIPS-INFO: 14:23:57.608: threadpool completed with 4 workers
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vipsthumbnail zebra.jpg -o host/tn_%s.jpg --vips-info
VIPS-INFO: 14:23:57.987: thumbnailing zebra.jpg
VIPS-INFO: 14:23:57.997: selected loader is VipsForeignLoadJpegFile
VIPS-INFO: 14:23:57.997: input size is 4120 x 2747
VIPS-INFO: 14:23:57.998: loading with factor 8 pre-shrink
VIPS-INFO: 14:23:57.999: pre-shrunk size is 515 x 343
VIPS-INFO: 14:23:57.999: converting to processing space srgb
VIPS-INFO: 14:23:58.001: residual reducev by 0.248544
VIPS-INFO: 14:23:58.001: shrinkv by 2
VIPS-INFO: 14:23:58.002: shrinkv: using vector path
VIPS-INFO: 14:23:58.002: shrinkv sequential line cache
VIPS-INFO: 14:23:58.003: reducev: 13 point mask
VIPS-INFO: 14:23:58.005: reducev: using vector path
VIPS-INFO: 14:23:58.005: reducev sequential line cache
VIPS-INFO: 14:23:58.005: residual reduceh by 0.248544
VIPS-INFO: 14:23:58.005: shrinkh by 2
VIPS-INFO: 14:23:58.006: shrinkh: using vector path
VIPS-INFO: 14:23:58.006: reduceh: 13 point mask
VIPS-INFO: 14:23:58.008: reduceh: using vector path
VIPS-INFO: 14:23:58.009: thumbnailing zebra.jpg as ./host/tn_zebra.jpg
VIPS-INFO: 14:23:58.136: threadpool completed with 2 workers
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vips morph dot-and-lines.png host/erode.png 3x3.mat erode --vips-info
VIPS-INFO: 14:23:58.531: threadpool completed with 2 workers
VIPS-INFO: 14:23:58.538: threadpool completed with 2 workers
VIPS-INFO: 14:23:58.539: morph: using vector path
VIPS-INFO: 14:23:58.556: threadpool completed with 1 workers
$ podman run --rm -e QEMU_CPU=max,vxeh2=on -v $PWD:/opt/host fedora-s390x vips morph dot-and-lines.png host/dilate.png 3x3.mat dilate --vips-info
VIPS-INFO: 14:23:58.952: threadpool completed with 2 workers
VIPS-INFO: 14:23:58.959: threadpool completed with 2 workers
VIPS-INFO: 14:23:58.960: morph: using vector path
VIPS-INFO: 14:23:58.979: threadpool completed with 1 workers

@jcupitt (Member) left a comment:
Nice!

@jcupitt (Member) commented Dec 21, 2025

Could we run the test suite on a big-endian emulator as part of CI? I've not looked into it :(

@kleisauke (Member, Author) commented:

> Could we run the test suite on a big-endian emulator as part of CI? I've not looked into it :(

Good idea, let me check if this would work as a follow-up. I'm aware that sharp runs a subset of its test suite on s390x, but it may still be worth testing it here as well.

@kleisauke merged commit 46dde2e into libvips:master on Dec 21, 2025 (7 checks passed) and deleted the highway-be branch (December 21, 2025, 11:36).
@lovell (Member) commented Dec 21, 2025

The sharp CI environment for s390x uses run-on-arch-action, which handles all of the Docker and QEMU wrapping. The s390x binaries are currently built without Highway support, which is why this wasn't caught sooner (see https://github.com/lovell/sharp-libvips/blob/main/platforms/linux-s390x/Dockerfile#L55-L58). As of the next release we can try building with Highway support.
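A libvips CI job along these lines could reuse the same approach. The following is an untested sketch assuming run-on-arch-action's `arch`/`distro`/`install`/`run` inputs and the package list from the Dockerfile above; the job name and meson invocations are illustrative, not an existing workflow:

```yaml
# Hypothetical GitHub Actions job sketch: build and test libvips on
# emulated s390x via run-on-arch-action. Not a tested configuration.
jobs:
  test-s390x:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: uraimo/run-on-arch-action@v2
        with:
          arch: s390x
          distro: fedora_latest
          # Baked into the container image, so it is cached between runs.
          install: |
            dnf install -y gcc-c++ meson git glib2-devel expat-devel \
              highway-devel libjpeg-turbo-devel libpng-devel
          run: |
            meson setup build -Ddeprecated=false -Dexamples=false
            meson compile -C build
            meson test -C build
```

The main trade-off is speed: full QEMU user-mode emulation is slow, so running only a subset of the test suite (as sharp does) may be more practical for per-PR CI.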

As an aside, I just checked the latest npm stats: there are 1.5m downloads/week of libvips for s390x, of which I suspect 99.9% come from ancient versions of the yarn package manager that don't correctly filter by the relevant cpu property these packages are published with.
