Hi,
I'd like to understand the vImageConvert functions for ARGB2101010. Could you help me with their endianness?
vImageConvert_ARGB16UToARGB2101010
vImageConvert_ARGB16UToXRGB2101010
vImageConvert_ARGB2101010ToARGB16U
The header says this format is "little endian", a.k.a. "host endian":
Accelerate.framework/Frameworks/vImage.framework/Headers/Conversion.h
10171 This format is 10-bit little endian 32-bit pixels. The 2 MSB are zero.
But if I understand correctly, the pseudo code in the header describes a big-endian-to-host-endian conversion.
(reading in ARGB2101010)
10180 The per-pixel operation is:
10181 @code
10182 uint32_t *srcPixel = src.data;
10183 uint32_t pixel = ntohl(srcPixel[0]);
10184 srcPixel += 1;
On the other hand, the pseudo code for the write direction seems to operate in "host endian", in other words "little endian".
(writing out ARGB2101010)
10303 uint32_t *destPixel = dest.data;
10304 destPixel[0] = (A2 << 30) | (R10 << 20) | (G10 << 10) | (B10 << 0);
10305 destPixel += 1;
10306 @endcode
If the header documentation is correct, vImage reads in big endian but writes in host endian. That seems inconsistent.
Please tell me how these APIs are meant to be used.
Simply put, the per-pixel operation in the header is wrong. As far as I have tested these conversions, they treat ARGB2101010 as little-endian.
I do not know why that pseudo code is wrong; I would guess a historical reason, but I am not sure.
(Historically, older Macs ran on big-endian processors, but that does not explain all of the inconsistencies.)
Anyway, please file a bug report for this issue as a documentation bug.