While using the XDR routines supplied by the macOS SDK, I get a build error stating that "xdr_u_long" does not take a u_long as its argument but instead takes an unsigned int. Looking at the header file, I found two declarations of "xdr_u_long":
#ifdef __LP64__
extern bool_t xdr_long(XDR *, int *);
extern bool_t xdr_u_long(XDR *, unsigned int *);
#else
extern bool_t xdr_long(XDR *, long *);
extern bool_t xdr_u_long(XDR *, unsigned long *);
#endif
Should this compiler macro test be the opposite of what it is? On a 32-bit machine, int and long are the same size, so int and unsigned int would suffice there. On a 64-bit machine they are not the same size, so this seems a bit backwards to me. Is there a reason why, on a 64-bit machine, you would want a u_long to be an unsigned int?