# Cross-compilation notes
👋 This page was last updated ~10 years ago. Just so you know.
I'll keep updating this article as I go, to keep everything in the same place.
## Platforms

### Cross-compiling for Linux
I'm pretty sure it's possible to cross-compile for Linux on other OSes, seeing as everything is open-source, but I have never done it - and why would I want to? Linux is the friendliest to build on, so it's better to use it as a build environment.
### Cross-compiling for OSX
I've had success using xchain to compile for OSX from Linux, but it's a really long and painful experience.
The steps involve:
- Extracting SDKs from XCode dmg install images
- Building Apple's version of cctools
- A big mess with 32 & 64-bit (ld64 linker, etc.)
- Compiling GCC by hand (that was fun - for some version of fun)
Basically, information that's out there gets outdated almost as soon as it's published, so even if I did put out a step-by-step guide, it would be outdated in no time.
I remember problems with libstdc++ for C++ compilation. I haven't tried building a cross LLVM-GCC yet.
Compiling for both 32-bit & 64-bit means two separate compilations, unlike natively on OSX where you can just use xcodebuild to get fat executables.
In summary, I'm not sure it's worth the trouble: it takes a long time to get running, it's fragile, and it involves a lot of hand-holding.
### Cross-compiling for Windows
mingw and msys on Windows are basically 'GCC on Windows', which works fine most of the time, and even includes tools like bash, some Unix utils (even vim!), but let me state that once and for all: mingw on Windows is slow as fuck.
A much better solution, in fact, is to use mingw on Linux (or others) to produce Windows executables. Apparently, mingw-w64 is a more up-to-date version (fork?) that also supports building 64-bit Windows executables.
Debian testing & later contains a mingw-w64 package that'll get you started.
Cross-compilation means:

- Prefixed tools, for example `i686-w64-mingw32-gcc` instead of `gcc`
- A different sysroot & prefix, e.g. `/usr/i686-w64-mingw32`, that has `include`, `lib`, `bin`, etc.
- Generated executables are not necessarily executable on the platform you're building from (e.g. `.exe` on Linux)
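The first point is purely a naming convention - each cross tool is the familiar binutils/gcc name with the target triplet stuck in front. A quick illustration (the triplet matches Debian's 32-bit mingw-w64 package):

```sh
# Print the tool names you'd expect a mingw-w64 cross toolchain to
# install into PATH - the same tools as native, triplet-prefixed.
triplet=i686-w64-mingw32
for tool in gcc g++ ld as windres strip; do
    echo "$triplet-$tool"
done
```

So where a native build calls `gcc`, a cross build calls `i686-w64-mingw32-gcc`, and likewise for the rest of the toolchain.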
In the next sections, I'll just assume we're cross-compiling from Linux to Windows using `mingw-w64`.
## Autotools

### Autotools philosophy
Autotools' principle is that:

- As a developer, you run `autoreconf -i -f` (or something of the sort), which in turn runs `automake` and generates `Makefile.in` files from `Makefile.am`, and `configure` from `configure.ac` - these files are portable.
- As a user, you run `./configure`, which generates a system-specific `Makefile` (recursively into each needed directory)
- Then you can run `make` and `make install`
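To make the split concrete, here's a minimal, hypothetical hello-world project - these two files are all the developer writes; everything else (`configure`, `Makefile.in`, the final `Makefile`) is generated:

```
# configure.ac - processed by autoconf (via autoreconf)
AC_INIT([hello], [0.1])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am - processed by automake into Makefile.in
bin_PROGRAMS = hello
hello_SOURCES = hello.c
```

After `autoreconf -i -f`, a user only needs `./configure && make && make install`.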
Notes:

- Most packages will work with just `make install`, since the install target depends on the build targets anyway.
- While running `make`, if some files have changed, autotools might reinvoke itself - for example, `aclocal-1.14`.
- MinGW on Windows, at the time of this writing, has outdated versions of automake (1.11) - running `autoreconf` on another platform first and re-running it with an older version might not work. The main complaint seems to be the automake macro `AM_PROG_AR`, which appeared in 1.11.2 or 1.12 - sometimes it's not used, and it's enough to comment it out in `configure.ac`.
About out-of-tree builds:

- You can make a build directory to build outside of the source directory - for example, `cd build && ../yourpackage/configure [OPTIONS]`. The configure script will then populate the build directory with build artifacts rather than polluting the source directory.
- Some packages don't support out-of-tree builds - notably, `libjit` doesn't. In that case, it's wise to make a copy of the source tree to build in, and keep the original clean somewhere.
I haven't yet found a case where `./autogen.sh` or `./auto_gen.sh` are actually needed - if you have recent autotools installed, it's fine.
### Cross-compiling for Windows with Autotools
When running configure, use the `--host=HOST` option. There's also `--build` and others, but I'm confused about those and never had to use them, except when cross-compiling GCC for another platform. There's a reasonably good explanation in the automake docs.
Example `HOST` values:

- `i686-linux-gnu`
- `i686-w64-mingw32`, `x86_64-w64-mingw32`
- `i686-apple-darwin11`
It's a good idea to cross-compile into a prefix so you don't mix native and foreign libraries. Most of the time, the build process will look something like:
```sh
mkdir build
cd build
../yourpackage/configure --host=i686-w64-mingw32 --prefix=$HOME/i686-w64-mingw32-prefix
make
make install
```
#### Side-note about temporary programs
Some packages compile executables that are run during the build process - such as tests, or libjit's opcode generator.
Sometimes, they'll be configured so that these are built for the *build* architecture (Linux) instead of the *host* architecture (Windows), which is perfect.
But if they're not, chances are you'll get an error about some missing executable and find an `.exe` in its place. And if you're really unlucky, the package's `Makefile.am` files will invoke `util` directly instead of `util$(EXEEXT)` - you might need to fix that too.
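For illustration, here's what such a rule might look like in a hypothetical `Makefile.am` (the helper name `mkopcodes` is made up) - spelling it with `$(EXEEXT)` everywhere is what keeps the rule working when the build produces `.exe` files:

```
# A helper program that's built, then run, during the build itself
noinst_PROGRAMS = mkopcodes
mkopcodes_SOURCES = mkopcodes.c

# Always refer to it as mkopcodes$(EXEEXT), never plain mkopcodes
opcodes.h: mkopcodes$(EXEEXT)
	./mkopcodes$(EXEEXT) > opcodes.h
```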
To be able to run `.exe` files directly from Linux, on Debian you can do the following:

```sh
sudo apt-get install wine binfmt-support
sudo /usr/sbin/update-binfmts --package wine --install wine /usr/bin/wine --magic MZ
```
If you're lucky, the temporary programs will run just fine and everything will work out. If not, it's time to read the autotools docs to find out how to build those tools for the build platform rather than the host platform.
If running these with wine messes up your terminal (when pressing enter, you see `^M`), running `reset` should fix it in no time.
### Cross-compiling for OSX with Autotools

Same principle: use `--host=i686-apple-darwin11`, or whatever your cross toolchain is.
## CMake

### CMake philosophy
CMake is basically a build file generator - it can generate for:
- GNU make
- ninja (make alternative)
- XCode (command-line: xcodebuild)
- Microsoft Visual C++ / Studio (command-line: msbuild)
- And god knows what else
### CMake usage
The `cmake` command takes as its argument the folder of a CMake project, i.e. where the `CMakeLists.txt` file lives (casing may differ).
That means out-of-tree builds are possible as well. To specify variables like the prefix, use the `-D` option. Example run:
```sh
mkdir build
cd build
cmake ../yourpackage -DCMAKE_INSTALL_PREFIX=$HOME/prefix
make
make install
```
In fact, out-of-tree builds are the recommended way to go for CMake projects in general. Sometimes the build folders are created inside the source distribution, but if it's just a dependency and not your project, it's a bad habit in my opinion.
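For reference, the smallest `CMakeLists.txt` the run above could correspond to (hypothetical project, standard CMake commands):

```
cmake_minimum_required(VERSION 2.8)
project(yourpackage C)
add_executable(yourpackage main.c)
# CMAKE_INSTALL_PREFIX (set with -D above) decides where this lands
install(TARGETS yourpackage DESTINATION bin)
```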
### Cross-compiling for Windows with CMake

As far as I can tell, CMake has no built-in cross-compiling support like autotools does. However, it does support toolchain files. Here's how a toolchain file for mingw-w64 might look:
```cmake
SET(CMAKE_SYSTEM_NAME Windows)
SET(CMAKE_C_COMPILER ${TARGET_ARCH}-gcc)
SET(CMAKE_CXX_COMPILER ${TARGET_ARCH}-g++)
SET(CMAKE_RC_COMPILER_INIT ${TARGET_ARCH}-windres)
SET(QT_MOC_EXECUTABLE ${MINGW_INSTALL_PREFIX}/bin/moc)
SET(QT_RCC_EXECUTABLE ${MINGW_INSTALL_PREFIX}/bin/rcc)
SET(QT_UIC_EXECUTABLE ${MINGW_INSTALL_PREFIX}/bin/uic)
SET(CMAKE_SYSTEM_PROGRAM_PATH ${CMAKE_INSTALL_PREFIX}/bin)
SET(CMAKE_FIND_ROOT_PATH ${MINGW_INSTALL_PREFIX})
SET(CMAKE_INSTALL_PREFIX ${MINGW_INSTALL_PREFIX})
SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```
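To use it, save it as, say, `mingw-w64-toolchain.cmake`, and point CMake at it with `-DCMAKE_TOOLCHAIN_FILE`. Note that `TARGET_ARCH` and `MINGW_INSTALL_PREFIX` are variables this particular file expects, not built-in CMake ones, so they have to be passed on the command line too (a sketch, not a tested recipe):

```sh
cmake ../yourpackage \
    -DCMAKE_TOOLCHAIN_FILE=$PWD/mingw-w64-toolchain.cmake \
    -DTARGET_ARCH=i686-w64-mingw32 \
    -DMINGW_INSTALL_PREFIX=$HOME/i686-w64-mingw32-prefix
make
make install
```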
### Cross-compiling for OSX with CMake
Same principle, you can find toolchains on the internet.
## Runtime dependencies

### Find dynamically linked libraries on Linux
To know which dynamic libraries a Linux executable is linked against, one can simply run `ldd EXEC`:

```sh
$ ldd ~/Dev/ooc/jit-tests/libjit1
	linux-vdso.so.1 (0x00007fff8c3fe000)
	libjit.so.0 => not found
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9d5468a000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f9d54a5a000)
```
`ldd` is nice, because it respects `LD_LIBRARY_PATH` and works recursively - as long as it can find all the dependencies:

```sh
$ LD_LIBRARY_PATH=~/Dev/ooc/jit-tests/prefix/lib ldd ~/Dev/ooc/jit-tests/libjit1
	linux-vdso.so.1 (0x00007fff2fba2000)
	libjit.so.0 => ~/Dev/ooc/jit-tests/prefix/lib/libjit.so.0 (0x00007f99eaa1b000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f99ea64d000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f99ea42f000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f99ea22b000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f99e9f2a000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f99eacac000)
```
(Here, libjit was in a local prefix, and libm/libdl were dependencies of libjit, not the executable itself).
### Find dynamically linked libraries on OSX

On OSX, use `otool -L`:

```sh
$ otool -L bin/pug
bin/pug:
	~/Dev/pug/pug/deps/prefix/lib/libcollections.0.dylib (compatibility version 1.0.0, current version 1.0.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)
```
A few notes about otool:

- Unlike `ldd`, it will not do a recursive search - in the case above, maybe `libcollections.0.dylib` has other dependencies, but otool won't tell you that unless you call it on `libcollections.0.dylib` itself. You can pretty easily make a script to do that for you.
- I think I remember that it doesn't respect `DYLD_LIBRARY_PATH`, but I might be mistaken.
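Here's a sketch of what such a recursive script could look like - the lister command lives in a variable so the walking logic works anywhere (on OSX you'd leave it as `otool -L`; this is an illustration, not a polished tool):

```sh
#!/bin/sh
# Recursively walk dynamic dependencies, otool-style. Assumes the
# lister prints the binary's own name on line 1, then one
# "<path> (compatibility ...)" line per dependency.
LISTER="${LISTER:-otool -L}"

walk() {
    # skip anything already printed - this also guards against cycles
    case " $SEEN " in *" $1 "*) return 0 ;; esac
    SEEN="$SEEN $1"
    echo "$1"
    for dep in $($LISTER "$1" 2>/dev/null | awk 'NR > 1 { print $1 }'); do
        # only recurse into deps that resolve to an actual file on disk
        [ -f "$dep" ] && walk "$dep" || true
    done
}

SEEN=""
walk "$1"
```

Caveats: it splits on whitespace (paths with spaces will break), and since `otool` doesn't resolve search paths for you, relative install names may not resolve to files.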
### Find dynamically linked libraries on Windows
On Windows, one can use the Dependency Walker to figure out which DLLs are loaded - it'll even go so far as monitor the executable while it's running to know which DLLs are not linked against, but still loaded dynamically (with LoadLibrary).
On Linux, you can run `i686-w64-mingw32-objdump -p executable.exe` - the output is a bit verbose, so you might want to pipe that into `grep 'DLL Name'`:

```sh
$ i686-w64-mingw32-objdump -p bin/pug.exe | grep 'DLL Name'
	DLL Name: KERNEL32.dll
	DLL Name: msvcrt.dll
	DLL Name: USER32.dll
```
And that's it.