<h1 id="cmake-line-by-line---creating-a-library">CMake line by line - Creating a library</h1>
<p><em>Dominik Berner, 2024-01-24</em></p>
<p><strong>Creating a clean library that has proper symbol visibility and installation instructions might sound difficult.</strong> However, with CMake it is relatively straightforward to set up, even if there are a few things to consider. Creating a library is as simple as invoking the <code class="language-plaintext highlighter-rouge">add_library()</code> command and adding the sources to it. Setting up the installation instructions and symbol visibility properly takes a bit more work. There are also some small but useful details, such as defining the version compatibility of the library, that make the lives of developers a lot easier if done properly.</p>
<p>In this post, we will go through the steps to create a library with CMake, including proper symbol visibility and installation. All the code for this post is available as <a href="https://github.com/bernedom/CMake_Library_Template">a template on GitHub</a>.</p>
<h2 id="creating-a-library-with-cmake---a-quick-overview">Creating a library with CMake - A quick overview</h2>
<p>When configuring a library with CMake we need to do the following things:</p>
<ol>
<li>Creating the library and adding the sources to it</li>
<li>Setting version compatibility</li>
<li>Specifying which include files are public and private</li>
<li>Setting symbol visibility and creating an export header</li>
<li>Defining where to install the library and make it usable with <code class="language-plaintext highlighter-rouge">find_package()</code></li>
<li>Setting miscellaneous options such as the C++ standard and the debug suffix</li>
</ol>
<p>For detailed documentation on the commands used in this post, please refer to the <a href="https://cmake.org/cmake/help/latest/">CMake documentation</a>.</p>
<h2 id="setting-up-the-project">Setting up the project</h2>
<p>Choosing the right file structure for a project is always important as it makes it easier to find files and helps to keep the project organized. For libraries it is even more important, as others will want to use the library as well. Usually, not all files needed to build a library are necessary to use the library, so a clean separation helps to only install the files that are needed. For this post, we will create a library called “Greeter” or “libGreeter” and use the following file structure:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>├── CMakeLists.txt <span class="c"># the main CMakeLists.txt file</span>
├── LICENSE
├── README.md
├── cmake <span class="c"># CMake modules</span>
│ └── GreeterConfig.cmake.in
├── include <span class="c"># public headers</span>
│ └── greeter
│ └── hello.hpp
└── src <span class="c"># source files</span>
├── hello.cpp
├── internal.cpp
└── internal.hpp
</code></pre></div></div>
<p>The library will expose a class <code class="language-plaintext highlighter-rouge">Greeter::Hello</code> that contains a <code class="language-plaintext highlighter-rouge">greet()</code> function that prints “Hello ${name} from a library”; it is declared in <code class="language-plaintext highlighter-rouge">include/greeter/hello.hpp</code>. Internally it uses a private function called <code class="language-plaintext highlighter-rouge">print_impl</code> which is defined in the <code class="language-plaintext highlighter-rouge">internal.cpp</code> and <code class="language-plaintext highlighter-rouge">internal.hpp</code> files. These are used to demonstrate how to hide symbols from the library interface. The <code class="language-plaintext highlighter-rouge">GreeterConfig.cmake.in</code> file is used to configure the CMake package file that will make the library usable with <code class="language-plaintext highlighter-rouge">find_package()</code>.</p>
<p>Let’s have a look at the public header file <code class="language-plaintext highlighter-rouge">include/greeter/hello.hpp</code>:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#pragma once
</span>
<span class="cp">#include <greeter/export_greeter.hpp>
#include <string>
</span>
<span class="k">namespace</span> <span class="n">Greeter</span> <span class="p">{</span>
<span class="c1">/// Example class that is explicitly exported into a library</span>
<span class="k">class</span> <span class="nc">GREETER_EXPORT</span> <span class="n">Hello</span> <span class="p">{</span>
<span class="nl">public:</span>
<span class="n">Hello</span><span class="p">(</span><span class="k">const</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="o">&</span><span class="n">name</span><span class="p">)</span> <span class="o">:</span> <span class="n">name_</span><span class="p">{</span><span class="n">name</span><span class="p">}</span> <span class="p">{}</span>
<span class="kt">void</span> <span class="n">greet</span><span class="p">()</span> <span class="k">const</span><span class="p">;</span>
<span class="nl">private:</span>
<span class="k">const</span> <span class="n">std</span><span class="o">::</span><span class="n">string</span> <span class="n">name_</span><span class="p">;</span>
<span class="p">};</span>
<span class="p">}</span> <span class="c1">// namespace Greeter</span>
</code></pre></div></div>
<p>Two things are notable in this class: first, the inclusion of the <code class="language-plaintext highlighter-rouge"><greeter/export_greeter.hpp></code> file and second, the <code class="language-plaintext highlighter-rouge">GREETER_EXPORT</code> macro. The <code class="language-plaintext highlighter-rouge">export_greeter.hpp</code> file is generated by CMake and contains the necessary macros to export symbols from the library. The <code class="language-plaintext highlighter-rouge">GREETER_EXPORT</code> macro marks the class as exported. This makes the class visible to users of the library and declares <code class="language-plaintext highlighter-rouge">Greeter::Hello</code> part of the public API. We will look at how the export header is generated later.</p>
<h2 id="creating-the-library-with-cmake">Creating the library with CMake</h2>
<p>Let’s have a look at the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> file line by line.</p>
<details>
<summary>
Click here to expand the full CMakeLists.txt
</summary>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cmake_minimum_required</span><span class="p">(</span>VERSION 3.17<span class="p">)</span>
<span class="nb">project</span><span class="p">(</span>
Greeter
VERSION 1.0.0
DESCRIPTION
<span class="s2">"A simple C++ project to demonstrate creating executables and libraries in CMake"</span>
LANGUAGES CXX
<span class="p">)</span>
<span class="c1"># set the postfix "d" for the resulting .so or .dll files when building the</span>
<span class="c1"># library in debug mode</span>
<span class="nb">set</span><span class="p">(</span>CMAKE_DEBUG_POSTFIX
d
<span class="p">)</span>
<span class="c1"># add the library target and an alias</span>
<span class="nb">add_library</span><span class="p">(</span>Greeter<span class="p">)</span>
<span class="nb">add_library</span><span class="p">(</span>Greeter::Greeter ALIAS Greeter<span class="p">)</span>
<span class="c1"># set properties for the target. VERSION set the library version to the project</span>
<span class="c1"># version * SOVERSION set the compatibility version for the library to the</span>
<span class="c1"># major number of the version</span>
<span class="nb">set_target_properties</span><span class="p">(</span>
Greeter
PROPERTIES VERSION <span class="si">${</span><span class="nv">PROJECT_VERSION</span><span class="si">}</span>
SOVERSION <span class="si">${</span><span class="nv">PROJECT_VERSION_MAJOR</span><span class="si">}</span>
<span class="p">)</span>
<span class="c1"># add sources to the library target</span>
<span class="nb">target_sources</span><span class="p">(</span>
Greeter
PRIVATE src/hello.cpp src/internal.cpp
<span class="p">)</span>
<span class="c1"># define the C++ standard needed to compile this library and make it visible to</span>
<span class="c1"># dependers</span>
<span class="nb">target_compile_features</span><span class="p">(</span>
Greeter
PUBLIC cxx_std_17
<span class="p">)</span>
<span class="c1"># set the include directories</span>
<span class="nb">target_include_directories</span><span class="p">(</span>
Greeter
PRIVATE src
PUBLIC $<BUILD_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_CURRENT_SOURCE_DIR</span><span class="si">}</span>/include>
$<INSTALL_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>>
<span class="p">)</span>
<span class="c1"># if using limited visibility, set CXX_VISIBILILTY_PRESET to "hidden"</span>
<span class="nb">include</span><span class="p">(</span>GenerateExportHeader<span class="p">)</span>
<span class="nb">set_property</span><span class="p">(</span>
TARGET Greeter
PROPERTY CXX_VISIBILITY_PRESET <span class="s2">"hidden"</span>
<span class="p">)</span>
<span class="c1"># Hide inlined functions by default, reducing the size of the library</span>
<span class="nb">set_property</span><span class="p">(</span>
TARGET Greeter
PROPERTY VISIBILITY_INLINES_HIDDEN TRUE
<span class="p">)</span>
<span class="c1"># this command generates a header file in the CMAKE_CURRENT_BINARY_DIR which</span>
<span class="c1"># sets the visibility attributes according to the compiler settings</span>
<span class="nf">generate_export_header</span><span class="p">(</span>
Greeter
EXPORT_FILE_NAME
export/greeter/export_greeter.hpp
<span class="p">)</span>
<span class="c1"># Add CMAKE_CURRENT_BINARY_DIR to the include path so the generated header can</span>
<span class="c1"># be found</span>
<span class="nb">target_include_directories</span><span class="p">(</span>
Greeter
PUBLIC $<BUILD_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_CURRENT_BINARY_DIR</span><span class="si">}</span>/export>
$<INSTALL_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>>
<span class="p">)</span>
<span class="c1"># include the GNUInstallDirs module to get the canonical install paths defined</span>
<span class="nb">include</span><span class="p">(</span>GNUInstallDirs<span class="p">)</span>
<span class="c1"># Install the library and export the CMake targets</span>
<span class="nb">install</span><span class="p">(</span>
TARGETS Greeter
EXPORT GreeterTargets
LIBRARY DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_LIBDIR</span><span class="si">}</span>
ARCHIVE DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_LIBDIR</span><span class="si">}</span>
RUNTIME DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_BINDIR</span><span class="si">}</span>
INCLUDES DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>
<span class="p">)</span>
<span class="c1"># install the public headers</span>
<span class="nb">install</span><span class="p">(</span>DIRECTORY include/ DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span><span class="p">)</span>
<span class="c1"># install the generated export header</span>
<span class="nb">install</span><span class="p">(</span>
FILES <span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_BINARY_DIR</span><span class="si">}</span><span class="s2">/export/greeter/export_greeter.hpp"</span>
DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>/greeter
<span class="p">)</span>
<span class="c1"># configure the CMake package file so the library can be included with find_package() later</span>
<span class="nb">include</span><span class="p">(</span>CMakePackageConfigHelpers<span class="p">)</span>
<span class="nf">write_basic_package_version_file</span><span class="p">(</span>
<span class="s2">"GreeterConfigVersion.cmake"</span>
VERSION <span class="si">${</span><span class="nv">PROJECT_VERSION</span><span class="si">}</span>
COMPATIBILITY SameMajorVersion<span class="p">)</span>
<span class="nf">configure_package_config_file</span><span class="p">(</span>
<span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_LIST_DIR</span><span class="si">}</span><span class="s2">/cmake/GreeterConfig.cmake.in"</span>
<span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_BINARY_DIR</span><span class="si">}</span><span class="s2">/GreeterConfig.cmake"</span>
INSTALL_DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_DATAROOTDIR</span><span class="si">}</span>/cmake/greeter
<span class="p">)</span>
<span class="c1"># install the CMake targets</span>
<span class="nb">install</span><span class="p">(</span>
EXPORT GreeterTargets
FILE GreeterTargets.cmake
NAMESPACE Greeter::
DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_DATAROOTDIR</span><span class="si">}</span>/cmake/greeter
<span class="p">)</span>
</code></pre></div> </div>
</details>
<h2 id="setting-up-the-library">Setting up the library</h2>
<p>As usual the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> starts with <code class="language-plaintext highlighter-rouge">cmake_minimum_required</code> which specifies the minimum CMake version to be used and the <code class="language-plaintext highlighter-rouge">project()</code> call.</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">project</span><span class="p">(</span>
Greeter
VERSION 1.0.0
DESCRIPTION
<span class="s2">"A simple C++ project to demonstrate creating executables and libraries in CMake"</span>
LANGUAGES CXX
<span class="p">)</span>
</code></pre></div></div>
<p>For libraries the <code class="language-plaintext highlighter-rouge">VERSION</code> field is important, as this is used to determine the version compatibility of the library. The <code class="language-plaintext highlighter-rouge">LANGUAGES</code> field is optional, but it is good practice to specify the language used in the project. This will make sure that the correct compiler is used when building the project.</p>
<p>The next thing to do is to set a debug postfix for the library with <code class="language-plaintext highlighter-rouge">set(CMAKE_DEBUG_POSTFIX d)</code>. When building the library in debug mode, a “d” is appended to the resulting library file name. This optional step is useful to distinguish between debug and release builds of the library. Note that this is a global option for the project, so it affects all libraries and executables inside the project.</p>
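<p>The effect on the produced file names looks roughly like this (the exact names depend on the platform and the library type):</p>

```cmake
set(CMAKE_DEBUG_POSTFIX d)
# Release build produces e.g.: libGreeter.so (Linux), Greeter.dll (Windows)
# Debug build produces e.g.:   libGreeterd.so (Linux), Greeterd.dll (Windows)
```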
<p>After that the library target is created with <code class="language-plaintext highlighter-rouge">add_library(Greeter)</code>. This creates a library target called <code class="language-plaintext highlighter-rouge">Greeter</code> which can be used to add sources, set properties and link against other libraries. To make the library usable with <code class="language-plaintext highlighter-rouge">find_package()</code> in the same way as if it were included with <code class="language-plaintext highlighter-rouge">add_subdirectory</code> or with <code class="language-plaintext highlighter-rouge">FetchContent</code>, we also create an alias for the library with <code class="language-plaintext highlighter-rouge">add_library(Greeter::Greeter ALIAS Greeter)</code>.</p>
<p>That way all targets that use the library can use <code class="language-plaintext highlighter-rouge">Greeter::Greeter</code> instead of just <code class="language-plaintext highlighter-rouge">Greeter</code>. This is useful to avoid name clashes and to make it clear that the target is a library.</p>
<p>Once the target is defined we can set the properties for the library.</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">set_target_properties</span><span class="p">(</span>
Greeter
PROPERTIES VERSION <span class="si">${</span><span class="nv">PROJECT_VERSION</span><span class="si">}</span>
SOVERSION <span class="si">${</span><span class="nv">PROJECT_VERSION_MAJOR</span><span class="si">}</span>
<span class="p">)</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">VERSION</code> property sets the version of the library to the project version. The <code class="language-plaintext highlighter-rouge">SOVERSION</code> property sets the compatibility version of the library to the major version of the project, which determines API compatibility. You should use <a href="https://semver.org/">semantic versioning</a> for libraries and set the <code class="language-plaintext highlighter-rouge">SOVERSION</code> to the major version of the library. I advise using the following rules for versioning:</p>
<ul>
<li>If you change the public API by removing or changing an interface class or function, increase the major version</li>
<li>If new symbols are added to the API but nothing is changed or removed, increase the minor version</li>
<li>For implementation changes that do not affect the API, increase the patch version</li>
</ul>
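<p>On Linux, the <code class="language-plaintext highlighter-rouge">VERSION</code> and <code class="language-plaintext highlighter-rouge">SOVERSION</code> properties translate into the usual shared-library naming scheme, roughly:</p>

```cmake
# With VERSION 1.0.0 and SOVERSION 1, a Linux build typically produces:
#   libGreeter.so.1.0.0  -- the actual library file
#   libGreeter.so.1      -- SONAME symlink, resolved by the loader at runtime
#   libGreeter.so        -- development symlink, used by the linker at build time
```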
<p>Once the target is created and the properties are set, we can add the sources to the library target with <code class="language-plaintext highlighter-rouge">target_sources()</code>. This command takes the target name and a list of source files and adds them to the target. The sources are added as private sources, which means they are only visible to the target itself. This is important to hide implementation details from the library interface.</p>
<p>Next we need to set up the include directories for the library. For the public include directories, there are two things to consider here, first the headers need to be available to the library itself and second, the headers need to be available to users of the library. This makes the <code class="language-plaintext highlighter-rouge">target_include_directories()</code> command a bit more complicated.</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">target_include_directories</span><span class="p">(</span>
Greeter
PRIVATE src
PUBLIC $<BUILD_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_CURRENT_SOURCE_DIR</span><span class="si">}</span>/include>
$<INSTALL_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>>
<span class="p">)</span>
</code></pre></div></div>
<p>This command takes the target name and a list of include directories. The <code class="language-plaintext highlighter-rouge">PRIVATE</code> keyword means that the include directory is only visible to the target itself; here we add the <code class="language-plaintext highlighter-rouge">src</code> folder, which contains all the internal headers.
The <code class="language-plaintext highlighter-rouge">PUBLIC</code> keyword means that the include directory is visible to the target and to users of the library. To distinguish the include path used while building the library from the one used after installation, a <a href="https://cmake.org/cmake/help/latest/manual/cmake-generator-expressions.7.html">generator expression</a> is used. When building the library itself, the <code class="language-plaintext highlighter-rouge">$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include></code> expression evaluates to the <code class="language-plaintext highlighter-rouge">include</code> folder in the source directory. Once the library is installed, the <code class="language-plaintext highlighter-rouge">$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}></code> expression evaluates to the install include directory. This ensures that the correct include directory is used both when building the library and after it is installed.</p>
<h2 id="setting-symbol-visibility">Setting symbol visibility</h2>
<p>Separating the headers into private and public is one step towards defining the library interface; we can go a step further by defining the symbol visibility of the library. This is important to hide implementation details from the library interface and to reduce the size of the library. First, the default visibility of the library is set to hidden:</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">set_target_properties</span><span class="p">(</span>Greeter PROPERTIES CXX_VISIBILITY_PRESET <span class="s2">"hidden"</span>
VISIBILITY_INLINES_HIDDEN TRUE<span class="p">)</span>
</code></pre></div></div>
<p>This means that all symbols are hidden by default and need to be explicitly exported. On Windows this is already the default; on Linux and macOS the default is that everything is visible. Additionally, we hide inlined functions by default, which reduces the size of the library further, but also means that inlined functions need to be explicitly exported.</p>
<p>CMake has a built-in module called <code class="language-plaintext highlighter-rouge">GenerateExportHeader</code> that can be used to generate a header file that sets the symbol visibility according to the compiler settings, which is included with <code class="language-plaintext highlighter-rouge">include(GenerateExportHeader)</code>. This gives us the <code class="language-plaintext highlighter-rouge">generate_export_header()</code> command which generates the export macro header file for a target.</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nf">generate_export_header</span><span class="p">(</span>
Greeter
EXPORT_FILE_NAME
export/greeter/export_greeter.hpp
<span class="p">)</span>
</code></pre></div></div>
<p>By default the export header is created in the <code class="language-plaintext highlighter-rouge">${CMAKE_CURRENT_BINARY_DIR}</code> directory unless an absolute path is passed to <code class="language-plaintext highlighter-rouge">EXPORT_FILE_NAME</code>. In this case exporting to the default location is fine, but we set the folder structure and file name to <code class="language-plaintext highlighter-rouge">export/greeter/export_greeter.hpp</code>. Putting the export file into its own subfolder helps later with finding and installing it. The generated header file contains the necessary macros to export symbols from the library.</p>
<p>In order to use the generated file with <code class="language-plaintext highlighter-rouge">#include <greeter/export_greeter.hpp></code> we need to add <code class="language-plaintext highlighter-rouge">${CMAKE_CURRENT_BINARY_DIR}/export</code> to the include path. This is done with the <code class="language-plaintext highlighter-rouge">target_include_directories()</code> command again, and again a generator expression is used to distinguish between building the library and installing it.</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="nb">target_include_directories</span><span class="p">(</span>
Greeter
PUBLIC $<BUILD_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_CURRENT_BINARY_DIR</span><span class="si">}</span>/export>
$<INSTALL_INTERFACE:<span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>>
<span class="p">)</span>
</code></pre></div></div>
<p>This concludes the setup of the library target and it can be built and used in other projects by using <code class="language-plaintext highlighter-rouge">add_subdirectory()</code> and <code class="language-plaintext highlighter-rouge">target_link_libraries()</code>. However for a library to be useful it needs to be installed and usable with <code class="language-plaintext highlighter-rouge">find_package()</code>.</p>
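<p>A consuming project that vendors the Greeter sources directly might look like this (a sketch; the <code class="language-plaintext highlighter-rouge">greeter</code> subfolder and <code class="language-plaintext highlighter-rouge">main.cpp</code> are assumed for illustration):</p>

```cmake
cmake_minimum_required(VERSION 3.17)
project(GreeterApp LANGUAGES CXX)

# Pull in the library's CMakeLists.txt from a subfolder of this project
add_subdirectory(greeter)

add_executable(app main.cpp)
# Link against the namespaced alias, the same name find_package() consumers use
target_link_libraries(app PRIVATE Greeter::Greeter)
```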
<h2 id="defining-installation-behavior">Defining installation behavior</h2>
<p>To make the library usable with <code class="language-plaintext highlighter-rouge">find_package()</code> we need to install the library and create a CMake package file. The first step is to define where the library should be installed. This is done with the <code class="language-plaintext highlighter-rouge">install()</code> command. The CMake module <code class="language-plaintext highlighter-rouge">GNUInstallDirs</code> defines the canonical install paths for different platforms which should be used except for special cases.</p>
<p>The first thing to set is the install destination for the library itself. This is done with the <code class="language-plaintext highlighter-rouge">LIBRARY</code>, <code class="language-plaintext highlighter-rouge">ARCHIVE</code> and <code class="language-plaintext highlighter-rouge">RUNTIME</code> keywords. The <code class="language-plaintext highlighter-rouge">LIBRARY</code> keyword is used for shared libraries, the <code class="language-plaintext highlighter-rouge">ARCHIVE</code> keyword is used for static libraries and the <code class="language-plaintext highlighter-rouge">RUNTIME</code> keyword is used for executables. The <code class="language-plaintext highlighter-rouge">INCLUDES</code> keyword is used to install the include directories.</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="nb">install</span><span class="p">(</span>
TARGETS Greeter
EXPORT GreeterTargets
LIBRARY DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_LIBDIR</span><span class="si">}</span>
ARCHIVE DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_LIBDIR</span><span class="si">}</span>
RUNTIME DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_BINDIR</span><span class="si">}</span>
INCLUDES DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>
<span class="p">)</span>
</code></pre></div></div>
<p>This tells CMake to install the target <code class="language-plaintext highlighter-rouge">Greeter</code> to the paths specified for each artifact type. The <code class="language-plaintext highlighter-rouge">EXPORT GreeterTargets</code> keyword tells CMake to export the target information to an export set called <code class="language-plaintext highlighter-rouge">GreeterTargets</code>, which will later be used to create the CMake package file that makes the installation usable with <code class="language-plaintext highlighter-rouge">find_package</code>.
The <code class="language-plaintext highlighter-rouge">CMAKE_INSTALL_LIBDIR</code>, <code class="language-plaintext highlighter-rouge">CMAKE_INSTALL_BINDIR</code> and <code class="language-plaintext highlighter-rouge">CMAKE_INSTALL_INCLUDEDIR</code> variables are defined by the <code class="language-plaintext highlighter-rouge">GNUInstallDirs</code> module.</p>
<p>The public headers and the export-header need to be installed explicitly in a similar manner:</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">install</span><span class="p">(</span>DIRECTORY include/ DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span><span class="p">)</span>
<span class="nb">install</span><span class="p">(</span>
FILES <span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_BINARY_DIR</span><span class="si">}</span><span class="s2">/export/greeter/export_greeter.hpp"</span>
DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_INCLUDEDIR</span><span class="si">}</span>/greeter
<span class="p">)</span>
</code></pre></div></div>
<p>For normal usage as a runtime library in a system this would already be enough, but since this is a CMake package we want it to be as easy as possible for other developers to use. This means we need to create a CMake package file that can be used with <code class="language-plaintext highlighter-rouge">find_package()</code>.</p>
<h3 id="making-the-library-usable-with-find_package">Making the library usable with <code class="language-plaintext highlighter-rouge">find_package()</code></h3>
<p>CMake provides the <a href="https://cmake.org/cmake/help/latest/module/CMakePackageConfigHelpers.html">CMakePackageConfigHelpers</a> module which - as the name suggests - contains helper functions that can be used to create a CMake package file. A CMake package consists of a version information file, a configuration file and a list of exported targets. The version information file is used to check if the correct version of the package is installed and the package file is used to make the package usable with <code class="language-plaintext highlighter-rouge">find_package()</code>.</p>
<p>The first step is to create the version information file with <code class="language-plaintext highlighter-rouge">write_basic_package_version_file()</code>.</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nf">write_basic_package_version_file</span><span class="p">(</span>
<span class="s2">"GreeterConfigVersion.cmake"</span>
VERSION <span class="si">${</span><span class="nv">PROJECT_VERSION</span><span class="si">}</span>
COMPATIBILITY SameMajorVersion<span class="p">)</span>
</code></pre></div></div>
<p>This command takes the name of the file to be created, the version of the package and the compatibility mode. The compatibility mode is used to determine if the package is compatible with the requested version. In this case we use <code class="language-plaintext highlighter-rouge">SameMajorVersion</code> which means that the package is compatible if the major version is the same.</p>
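<p>With <code class="language-plaintext highlighter-rouge">SameMajorVersion</code>, version matching at <code class="language-plaintext highlighter-rouge">find_package()</code> time behaves roughly as follows (a sketch, assuming Greeter 1.2.0 is installed):</p>

```cmake
find_package(Greeter 1.0 REQUIRED)  # succeeds: same major version, 1.2.0 >= 1.0
find_package(Greeter 1.5 REQUIRED)  # fails: installed 1.2.0 is older than 1.5
find_package(Greeter 2.0 REQUIRED)  # fails: major version differs
```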
<p>Next we generate the package file from a template:</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nf">configure_package_config_file</span><span class="p">(</span>
<span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_LIST_DIR</span><span class="si">}</span><span class="s2">/cmake/GreeterConfig.cmake.in"</span>
<span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_BINARY_DIR</span><span class="si">}</span><span class="s2">/GreeterConfig.cmake"</span>
INSTALL_DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_DATAROOTDIR</span><span class="si">}</span>/cmake/greeter
<span class="p">)</span>
</code></pre></div></div>
<p>This command internally calls <code class="language-plaintext highlighter-rouge">configure_file()</code> to generate the package file from the template. The generated file is called <code class="language-plaintext highlighter-rouge">GreeterConfig.cmake</code> and will be installed to <code class="language-plaintext highlighter-rouge">${CMAKE_INSTALL_DATAROOTDIR}/cmake/greeter</code>. The template file is a generic file that looks like this:</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@PACKAGE_INIT@
<span class="nb">include</span><span class="p">(</span><span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_LIST_DIR</span><span class="si">}</span><span class="s2">/@PROJECT_NAME@Targets.cmake"</span><span class="p">)</span>
<span class="nf">check_required_components</span><span class="p">(</span><span class="s2">"@PROJECT_NAME@"</span><span class="p">)</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">@PACKAGE_INIT@</code> macro is replaced by the <code class="language-plaintext highlighter-rouge">CMakePackageConfigHelpers</code> module with the necessary code to initialize the package, and the <code class="language-plaintext highlighter-rouge">@PROJECT_NAME@</code> macro is replaced with the project name. The generated file then includes the file containing the exported targets and checks that the required components are available.</p>
<p>The last step is to install the CMake targets:</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">install</span><span class="p">(</span>
EXPORT GreeterTargets
FILE GreeterTargets.cmake
NAMESPACE Greeter::
DESTINATION <span class="si">${</span><span class="nv">CMAKE_INSTALL_DATAROOTDIR</span><span class="si">}</span>/cmake/greeter
<span class="p">)</span>
</code></pre></div></div>
<p>This takes the export set <code class="language-plaintext highlighter-rouge">GreeterTargets</code>, which was populated earlier when we installed the targets, and installs it to <code class="language-plaintext highlighter-rouge">${CMAKE_INSTALL_DATAROOTDIR}/cmake/greeter</code>. The namespace <code class="language-plaintext highlighter-rouge">Greeter::</code> is prepended to the targets in the export set, so consumers use <code class="language-plaintext highlighter-rouge">Greeter::Greeter</code> instead of just <code class="language-plaintext highlighter-rouge">Greeter</code>.</p>
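<p>For illustration, a consumer project could then use the installed library like this (a minimal sketch; the executable name and source file are just examples):</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code>find_package(Greeter 1.0 REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE Greeter::Greeter)
</code></pre></div></div>
<p>Linking against the namespaced target transitively pulls in the include directories and other usage requirements that were exported with the target.</p>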
<p>With this the library is ready to be built, installed and used with <code class="language-plaintext highlighter-rouge">find_package()</code>. To build and install the library, call:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cmake <span class="nt">-B</span> build <span class="nt">-S</span> <span class="nb">.</span>
cmake <span class="nt">--build</span> build <span class="nt">--target</span> <span class="nb">install</span>
</code></pre></div></div>
<p>By default this will create a static library; to create a shared library instead, add <code class="language-plaintext highlighter-rouge">-DBUILD_SHARED_LIBS=ON</code> to the <code class="language-plaintext highlighter-rouge">cmake</code> command when configuring the project.</p>
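<p>For example, a shared library build that installs into a user-local prefix could be configured like this (the prefix path is just an example):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cmake -B build -S . -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=$HOME/.local
cmake --build build --target install
</code></pre></div></div>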
<p>The setup described here is the bare minimum to create a clean library with CMake. There are a few more things that can be done to make the library more usable and portable, like adding <a href="https://cmake.org/cmake/help/latest/module/CPack.html">packaging information</a>, and of course it pays to set up proper <a href="https://cmake.org/cmake/help/latest/manual/ctest.1.html">testing</a> as well.</p>LLMs and AI make software development harder2023-11-03T00:00:00+00:002023-11-03T00:00:00+00:00https://dominikberner.ch//ai-tools-make-our-job-harder<p><strong>LLMs and AI make software development harder.</strong> Wait, what? Isn’t the whole point of AI to make writing code <em>easier</em>? Well, yes. But writing code is the easy part of software development. The hard part is understanding the problem, designing business logic and debugging tough bugs. And that’s where AI code assistants like <a href="https://copilot.github.com/">copilot</a> or <a href="https://chat.gpt.ai/">chatgpt</a> make our job harder: they strip away the easy parts of our job, leave us only with the hard parts, and make it harder for new developers to master the craft of software development.</p>
<h2 id="coding-is-the-easy-part">Coding is the easy part?</h2>
<p>Is coding really that easy? No, not exactly easy - mastering a programming language still takes years of practice. But when looking at software development as a whole, writing code is one of the easier parts, and it is no wonder that chatgpt and copilot can write decent code. First, they have been trained on millions of lines of code, and second, code is by its nature easy for a machine to understand, as programming languages are very structured and have limited vocabularies. For an LLM it is probably much easier to learn than natural language.</p>
<blockquote>
<p>Programming languages are just very powerful tools that we use to solve problems</p>
</blockquote>
<p>In the end, programming languages are just very powerful tools that we use to solve problems. And the hard part is not learning the tool, but understanding the problem and designing a solution for it. This becomes instantly obvious when you consider that most software engineering problems could be solved in a lot of different programming languages; which one to pick is a matter of context or even personal preference.</p>
<p>Another indicator that programming is the easy part is that the more senior a software developer gets, the less time they usually spend writing code. Instead, seniors spend more time understanding the problem, designing the solution, jumping in to debug tough bugs, making design decisions and, of course, mentoring junior team members. While this might not be true for every senior developer, when looking at my software development bubble this is a clear trend.</p>
<h2 id="the-hard-parts-of-software-development">The hard parts of software development</h2>
<p>Copilot and other AI assistants are a great help for developers, but they are not flawless. A part of it is natural, as they are trained on existing code without any context and there are also some bad habits from the training data that code assistants might have picked up. And while this might get optimized over time, at the moment it means that developers still have to review the code that is generated by the AI code assistants. And reviews are hard - especially if one cannot query the author of the code for their intent.</p>
<p>And even if the code is good enough, it might still introduce flaws into the control logic of a program, miss edge cases or introduce a regression bug when integrated into an existing code base. This means that developers have to debug the code that is generated by the AI code assistants in case of an error. And debugging is hard - especially for these kinds of problems, where it might be difficult to recreate the circumstances that cause the bug in the first place.</p>
<p>As the generated code heavily depends on the context we give the AI code assistants, we have to be very precise in our descriptions. This in turn means that we have to understand the problem very well, which requires domain knowledge and context awareness on the side of the developer. Even if we just focus on the technical part, being aware of the surrounding architecture and the existing code is crucial to get good results.</p>
<p>Granted, we could ask LLMs like chatgpt for help with integration into the codebase, or we could just pass it the whole codebase and let it redesign everything. But apart from requiring a lot of input to give enough context, debugging in an unfamiliar codebase is even tougher than debugging code that you wrote yourself.</p>
<p>And then there is the whole matter of figuring out what exactly our product should do, how it should behave and what it should look like. At the moment this still requires a lot of human smarts, and while AI tools might allow us to iterate faster on figuring out what we want to build, in the end it is still a human that has to make the decision.</p>
<h2 id="ai-generated-software-development-is-exhausting">AI generated software development is exhausting</h2>
<p>It seems a given that AI assistants will change our job by automating away writing code and even helping us with some design decisions. It is very convenient that we can ask chatgpt questions regarding system design and get reasonable answers. What is still left to us is making the decision on which answer to pick and which prompt to give to the LLM to get the results that we need. And this is very exhausting - decision fatigue is a thing and it is very real. Already before AI code assistants, the limiting factor in the speed of delivering software was often the decision-making process of an organization or a team - not writing the code.</p>
<blockquote>
<p>The limiting factor in delivery speed is decision making, not writing code</p>
</blockquote>
<p>On top of that, current company structures will most likely still hold software developers accountable for the code that is running in a product, not the AI code assistants that wrote it in the first place. This adds another layer of stress: not only do we need to make more decisions faster, we are also to blame if the AI code assistants make a mistake.</p>
<p>And if there is a mistake, then the debugging needs to be done, which often needs a lot of context and background knowledge to be efficient. AI tools are of less help there, because they cannot figure out context changes by themselves. They might help us with the easy parts of debugging, like running tests with different variations to narrow down the cause, but finding the prompt for an LLM to generate the fix will still be on us.</p>
<h2 id="are-ai-tools-replacing-developers">Are AI tools replacing developers?</h2>
<p>AI assistants might lower the initial hurdle to get into software development, but they will not make it easier to become a good, experienced software developer. Most of the senior developers I know gained the background knowledge and context needed to formulate complex solutions from years of slogging through (bad) code and learning from their mistakes. This might be an inefficient way of learning, but it is very effective at building up the domain knowledge that is needed for software development.
This knowledge is also very hard to teach in a formal way, as books or online tutorials are by nature somewhat generic, and adaptation to real-life situations still needs hands-on experience.</p>
<p>As I see it, broad usage of AI tools will change the skill distribution of software developers. We might end up with a lot more junior developers that are able to write code - or at least prompt the LLMs to write the code - but lack the deep understanding of software development to be efficient in decision making. On the other hand, senior developers that have acquired the necessary context and domain knowledge will become fewer and fewer, as AI tools hide away the parts of the work that would enable us to learn - unless the generated code is reviewed in depth, which then raises the question of whether we gain that much efficiency through the tools at all.</p>
<p>So are AI tools replacing developers? Currently no - they will transform the job of a developer, but they will not replace them. The question is how we as an industry will make sure that we retain the knowledge and experience that we have gained over the years. It also raises the question of how we handle the human side of software development, as the job will become more boring because we just feed machines with prompts, yet more stressful because we have to make more hard decisions faster. Or maybe AI tools are really just a hype and a fad and nothing will change at all.</p>The four core powers for empowered agile teams2023-07-14T00:00:00+00:002023-07-14T00:00:00+00:00https://dominikberner.ch//what-are-agile-empowered-teams<p><strong>Empowering teams is a key aspect of creating high-performing teams in an agile setting.</strong> Ever since <a href="http://www.extremeprogramming.org/">Extreme Programming</a> was introduced into the world of software development, this statement or a variation of it has been carried over to almost all agile frameworks. And there is a multitude of articles about how to create such teams. But what does “empowered” exactly mean? What are the minimum powers that a team needs to be able to be agile?</p>
<p>As with a lot of things in agile, the bandwidth of how far one wants to go with empowering the team depends heavily on the context and the organization a team lives in. This can range from choosing their agile framework to budget allocations up to teams doing their hiring. However, if we focus on the process perspective of delivering quality software effectively, there are surprisingly few things that a team needs to be able to do to significantly change the game of agile software delivery.</p>
<h2 id="empowerment-and-performance">Empowerment and Performance</h2>
<p>If we define <em>performance as “the ability of a team to constantly deliver value to the customer in a timely fashion”</em> then we can derive the minimum empowerment that a team needs to be able to do this.</p>
<blockquote>
<p>Performance is “the ability of a team to constantly deliver value to the customer in a timely fashion”</p>
</blockquote>
<p>From this definition, we can deduce the minimum set of core powers that a team needs to be performant:</p>
<ul>
<li>Being able to prioritize their backlog and have the final word on what is in it - this includes the ability to say no to backlog items and delete them</li>
<li>Being able to allocate capacity and work</li>
<li>Setting the pace for releases and deciding when to release what</li>
<li>Shaping their development and testing infrastructure to their needs without organizational hurdles</li>
</ul>
<p>In short, this can be summarized as “The team can decide what to do, who does it when, and how”. It is almost impossible for one empowerment to work out without the others being there, but if one has to start somewhere, gaining the ability to <a href="https://dominikberner.ch/lean-backlog-handling/">prioritize and maintain their lean backlog</a> is often a good first power. If it is clear to the team what to do next, the team can decide “how much” of the backlog it can or wants to deliver in a given amount of time, which boils down to capacity allocation within the team.
The team usually has a good overview of what non-backlog-related tasks are on their plate - such as maintaining infrastructure, taking care of individual education, or fixing things discovered in a retro. All this has to be taken into account, and who can do this better than the team itself? The power to allocate capacity for working on the backlog directly translates into the ability to set the pace for releases. Since I stand by the point that releasing often and in small increments is superior to rare big-bang releases, quality assurance and releasing need to be highly automated tasks that require little to no human interaction. This directly leads to the team needing to shape their infrastructure after their fashion, including the ability to decide what tools to use and how to use them. In reality, there are often some organizational constraints to this, but the fewer the better. Let’s look at each of these powers in more detail and how they work together.</p>
<h3 id="the-power-of-prioritizing-the-backlog">The power of prioritizing the backlog</h3>
<p>The backlog is the central artifact in agile software development and as such it is a prioritized <em>list of problems to be solved</em>. Some of them might not yet be fully understood and for most of them, there might exist several solutions.</p>
<blockquote>
<p>The backlog is a list of <em>problems to be solved</em> - not a list of tasks to be done</p>
</blockquote>
<p>Software development is a flow-based activity that needs focus to be done effectively and efficiently. As the team knows best how to get into the flow, they need the power to say what they want to work on next and to <em>assign a unique priority</em> to the backlog items. While the team should have the final say on what goes to the top of the backlog, they need to be acutely aware of their customers’ needs, and the team must respect and manage stakeholder expectations about what is important for whom. As such, teams need to be keenly aware of the different stakeholder groups around their product, understand their needs and expectations, and have the clout so that stakeholders accept it if the team says “No - we’re not doing this (yet)”.
Long term, a team should strive for some kind of fairness regarding the prioritization of the needs of each stakeholder group - including the team’s own needs. By determining what part of the backlog to do next, the team can align that with their capacity to do work.</p>
<h3 id="the-power-of-allocating-capacity-and-work">The power of allocating capacity and work</h3>
<p>Once the priority of problems to solve is established, the team should be able to determine how much of the backlog they want to tackle at the same time. This is independent of the agile framework chosen, be it kanban, where the team sets their work-in-progress limits, or scrum, where the single sprints give a limited timeframe to tackle problems. Part of being able to allocate work is <a href="https://dominikberner.ch/the-art-of-slicing/">slicing backlog items</a> to a workable size and breaking down the problems into smaller chunks that can be solved iteratively and incrementally. This is a skill that can easily be learned on the job and most teams master it relatively quickly. The number of problems solved and time invested has a direct relation to when solutions can be released to the customer, and since the teams decide what to do with which capacity, they naturally are in charge of setting the pace for their releases. On the other hand, empowered teams often also have some tasks that are inward facing, such as maintaining their infrastructure, fixing bugs, mentoring each other, or learning new things. These tasks also need to be taken into account when allocating capacity, and the team is in the best position to do so.</p>
<h3 id="the-power-of-setting-the-pace-for-releases">The power of setting the pace for releases</h3>
<p>Grass doesn’t grow faster if you pull on it - and software is not delivered faster by setting arbitrary deadlines. Sure, there might be outside constraints that determine good time windows for releasing to the public, but generally, these windows are not as business-critical as we are often made to believe. Overall it is a much better strategy for teams to set the pace for releasing. This works especially well if the team manages to release often and in small increments. Ideally, features are pushed to the customers whenever they are ready instead of artificially waiting for a specific date because this increases the speed of the feedback loop. As the team is responsible for allocating time and ensuring the quality of their work, naturally, the team should be in charge of saying when they see a product increment fit for release to the public.</p>
<blockquote>
<p>Grass doesn’t grow faster if you pull on it - and software is not delivered faster by setting arbitrary deadlines.</p>
</blockquote>
<p>To release frequently, releasing should be a painless process, and ideally there should be mechanisms in place to easily roll back released versions, such as staged releases or feature toggles. While the team usually can tell if a feature is of good enough technical quality, determining whether a feature is ready to be used often requires feedback from the stakeholders, and oftentimes the only way to get this is by releasing the feature to production. The other side of the coin of giving the power to release to the team is that the team has to ensure the <a href="https://dominikberner.ch/software-quality-roadmap/">quality of the software</a> to be confident that a release will work as expected. This is where the power of shaping the development and testing infrastructure comes into play.</p>
<h3 id="the-power-of-shaping-the-development-and-testing-infrastructure">The power of shaping the development and testing infrastructure</h3>
<p>As the teams are expected to take responsibility for the quality of their software and the release pipeline, they need the power to shape and extend their development and testing infrastructure to their needs. This includes the ability to decide what tools to use. It is not uncommon that companies restrict the freedom of choice for tools and services, but it generally pays to have as few restrictions as possible regarding what tools and technologies to use. With modern approaches like GitHub Actions or Azure DevOps pipelines, spinning up a new build and test environment is a matter of minutes, and the team should be able to do this on their own, without jumping through bureaucratic hoops to get their work done.
The other side of this empowerment is that the teams have to have the skills to build and maintain their build and test environments. Granted, this can take quite some time and effort to master, but once a team has mastered this skill, adding new quality tools or upgrading existing ones to fit a new need is a matter of hours rather than days or weeks, which is often a huge performance boost.
It is also crucial that teams do not just focus on the continuous integration (CI) aspect of their build and testing infrastructure, but that they also invest in the continuous deployment (CD) part, which creates and delivers the software to the customer. As the empowered team has the power to decide when to release what, they also have to be able to do it “on a whim”. By deciding what to release when, and having the skill to do so, an agile team can greatly reduce the feedback loop and thus increase the speed of delivery, making them truly agile.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The main goal of giving the team the power to prioritize the backlog, allocate capacity and work, set the pace for releases, and shape the development and testing infrastructure is to make the team fast at generating and reacting to customer feedback. If a team can do these four things autonomously, they gain a lot of flexibility in how they work and how they deliver value to the customer, which is the basis for high-performing teams that want to do agile software development.
There are more aspects of empowering teams to reach peak performance, like being involved in hiring, shaping the overall team composition, and managing the personal growth of their members, but if one wants to start somewhere, backlog prioritization, capacity allocation, release planning, and shaping the development and testing infrastructure are a good choice to tackle first.</p>Testing strategies for software that interacts with hardware2023-05-03T00:00:00+00:002023-05-03T00:00:00+00:00https://dominikberner.ch//software-testing-when-hardware-is-involved<p><strong>“Testing our software is difficult, because of the hardware involved”</strong> is a common sentence when developing software for a specific hardware platform. Testing software that interacts closely with hardware indeed complicates the testing setup, which in turn often means that additional cost and effort are required. As the range of “embedded software” goes from low-level firmware running on a specific chip to software running on a specifically designed operating system with custom peripherals, there is no one-size-fits-all solution. However, there are some strategies and principles that can help to make testing easier and more effective.</p>
<p>The obvious goal of testing is to ensure that the software and hardware work as expected and to catch regression bugs as early as possible. With hardware involved, catching regressions becomes quite important, as the specific environment that the software is running on might evolve and introduce new bugs. The trivial approach is to “just run your code on the hardware”, but depending on the setup, this might not always work: the hardware might be too expensive to have available in large quantities, or the setup might be too complicated to reproduce and maintain at scale. So a good testing strategy is usually a tradeoff between fast feedback and running tests in an environment that is close to the production environment. If I have to choose, I generally put slightly more emphasis on quick and timely feedback to the developers than on creating a perfect testing environment.</p>
<div style="background-color: #444444; margin-bottom: 1.5em;">
<h1><a href="https://www.amazon.com/dp/1803239727">
CMake Best Practices - The book</a></h1>
<div style="display: grid; width: 100%; grid-template-columns: 25% 1fr; grid-gap: 1%; padding-bottom: 0.5em;">
<div>
<a href="https://www.amazon.com/dp/1803239727">
<img src="/images/cmake-best-practices.jpg" alt="Cover of the CMake Best Practices book by Dominik Berner and Mustafa Kemal Gilor" style="max-width:100%;height:auto;display:block" />
</a>
</div>
<div>
CMake Best Practices: Discover proven techniques for creating and maintaining programming projects with
CMake. Learn how to use CMake to maximum efficiency with this compendium of best practices for a lot of
common tasks when building C++ software.
<br />
<br />
<div class="order-button">
<a href="https://www.amazon.com/dp/1803239727">Get it from Amazon</a>
</div>
</div>
</div>
</div>
<h2 id="building-a-testing-strategy-on-the-test-pyramid-with-hardware">Building a testing strategy on the test pyramid with hardware</h2>
<p>The underlying principle of good testing strategies with hardware is to get important feedback from testing as fast as possible while keeping the maintenance cost of the hardware setup to a minimum. This means that in the day-to-day work, developers should be able to test as much functionality as possible straight out of their editor and only use the real hardware when working on something that is very closely tied to it. Everything that relies only slightly on the hardware should preferably be tested by the CI. Of course, developers should have hardware available to run their code on and should do so once in a while, but not every change should need deployment to the target hardware. This is generally achieved by structuring code in such a way that the hardware-specific code is isolated and can be mocked away easily, and by investing in the build system so the code can be cross-compiled to run on the development machine as well as on the actual hardware. The further up you move on the testing pyramid, the more you benefit from running tests on real hardware - but generally this also means more expensive and time-consuming tests.</p>
<figure>
<img src="/images/testing-with-hardware/TestingPyramid.png" alt="The testing pyramid if hardware is involved" onclick="toggleSize(this)" />
<figcaption>The testing pyramid if hardware is involved
</figcaption>
</figure>
<p>All automatic tests should be run on every commit to the main branch of your repo or even on every commit pushed to any branch if the time frame allows that. In reality, this usually means a staged CI pipeline, which runs all unit tests and some of the fast integration tests on every pushed commit, but runs the more expensive ones only when a merge request to the main branch is opened. This way you get fast feedback on the state of your code and you can catch regressions early. And of course, do not forget to regularly run the manual tests on the full set of hardware as well. While all tests should also be run locally by the devs, I usually only run the unit tests concerning the code I’m currently touching unless I have a very specific reason to run the full test suite.</p>
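<p>One way to sketch such a staged pipeline is to label the tests in CMake and let the CI select tests by label; the label names below are just examples, and <code class="language-plaintext highlighter-rouge">--test-dir</code> requires CMake 3.20 or newer:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># on every pushed commit: unit tests and fast integration tests only
ctest --test-dir build --label-regex "unit|fast"
# on a merge request to the main branch: the full automated suite
ctest --test-dir build
</code></pre></div></div>
<p>The labels themselves are attached to the tests in the CMakeLists.txt, for example with <code class="language-plaintext highlighter-rouge">set_tests_properties(my_test PROPERTIES LABELS "unit")</code>.</p>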
<p>An important thing is that even though the test pyramid shows emulators, simulators etc., the setup should be such that most of the tests can also be run on the actual hardware itself, and this should be done regularly to catch hardware-induced regressions. The same applies to the development machines: all emulator- or simulator-based tests should be runnable on the dev’s machine for easy debugging, and the devs should have easy access to the hardware for debugging. Let’s look at the different levels of the testing pyramid in more detail. One thing to note is that although the levels are shown as discrete steps, in reality there is often a lot of overlap between them.</p>
<h3 id="the-foundation-cross-compiling-and-unit-tests">The foundation: Cross-Compiling and Unit tests</h3>
<p>The base of the test pyramid consists of the smallest and most basic unit tests. Usually they are also the most numerous and are intended to be run very frequently - so the individual tests should be fast. When doing test-driven development (which you should!) these are the workhorses regarding software quality. Because the developers need to be able to run them frequently, the ability to easily cross-compile your code so it can be tested on the developer’s machine as well as on the target hardware is a must. There usually are some unit tests that require some information about the hardware environment, but often the majority of the code can be tested very well by mocking the hardware away.</p>
<figure>
<img src="/images/testing-with-hardware/TestingPyramid_unit_test.png" alt="The base for successful testing with hardware is the ability to cross-compile your code" style="width: 60%; height: 60%;" onclick="toggleSize(this)" />
<figcaption>The base for successful testing with hardware is the ability to cross-compile your code
</figcaption>
</figure>
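<p>Cross-compiling with CMake boils down to passing a toolchain file at configure time. A minimal sketch for an ARM Linux target might look like this (the compiler names and file path are just examples):</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code># cmake/arm-linux.toolchain.cmake
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
</code></pre></div></div>
<p>Configuring with <code class="language-plaintext highlighter-rouge">cmake -B build-arm -S . -DCMAKE_TOOLCHAIN_FILE=cmake/arm-linux.toolchain.cmake</code> then produces binaries for the target, while a plain configure in a second build directory keeps a fast native build for running the unit tests on the developer machine.</p>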
<h3 id="the-lower-middle-emulators-and-component-tests">The lower middle: Emulators and component tests</h3>
<p>Component tests are usually a bit more complex than unit tests: they test a larger part of the system but are still pretty localized regarding the code. They are usually still fast enough to run on the developer’s machine and can be run very frequently on the CI as well. The main difference between unit and component tests is that component tests are more likely to require some limited system awareness. Emulators <a href="https://www.qemu.org/">like QEMU</a> are a great way to enable low-cost automation and bring some of the behavior of the hardware into play, including faking peripherals. Emulators mimic the hardware on a low level, but often without the full setup of all running services etc. One downside of emulators is that they cannot give any indication of the runtime performance on the hardware; for this, only the real hardware can be used.</p>
<figure>
<img src="/images/testing-with-hardware/TestingPyramid_component_test.png" alt="Running tests on emulators shortens the feedback loop as it allows developers to run tests on their machines" style="width: 60%; height: 60%;" onclick="toggleSize(this)" />
<figcaption>Running tests on emulators shortens the feedback loop as it allows developers to run tests on their machines
</figcaption>
</figure>
<h2 id="the-upper-middle-integration-testing-with-simulators">The upper middle: Integration testing with simulators</h2>
<p>As soon as the components cannot be tested in isolation, some more surrounding logic is usually needed. This is where integration tests come into play. Integration tests are again more complex than component tests and test a larger part of the system. They usually cover a combination of multiple components but still do not need the full system setup. The way to tackle this is to use simulators on top of the emulators. The main difference between emulation and simulation is that simulators can play back input and higher-level logic, or mimic a changing environment. This means that they can be used to test the interaction between the software and the hardware up to the system boundaries. The distinction between emulation and simulation is not always clear cut and there are often gray areas. Although not always possible, being able to run the simulator not only against the emulator but also in the non-native environment helps generate faster feedback as well. A very useful addition at the boundary between simulators and the real hardware is that here we can also test the deployment of the software on the target device, although some might consider this a system test rather than an integration test.</p>
<figure>
<img src="/images/testing-with-hardware/TestingPyramid_integration_test.png" alt="Simulators are a good way to test with controlled input and complicated workflows" style="width: 60%; height: 60%;" onclick="toggleSize(this)" />
<figcaption>Simulators are a good way to test with controlled input and complicated workflows
</figcaption>
</figure>
<h2 id="the-top-hardware-in-the-loop">The top: Hardware in the loop</h2>
<p>The closer we get to the very complex system tests, the greater the need for hardware. While the unit-, component- and integration tests should be runnable on the real hardware, it is the more complex tests that benefit most from running on it. It still pays to invest in automation and to have system tests running regularly and frequently on your code, to avoid nasty surprises when deploying the software for real. As the availability of the full hardware setup is often a limiting factor and a bottleneck, bringing the hardware in relatively late is a trade-off between the cost of the hardware and the effort of integrating it into CI on one side, and the amount of information gained from running on the hardware on the other. For running the system tests, however, that effort should be taken on to create at least one full set of the system for testing.</p>
<figure>
<img src="/images/testing-with-hardware/TestingPyramid_system_test.png" alt="When testing the whole system hardware is mandatory" style="width: 60%; height: 60%;" onclick="toggleSize(this)" />
<figcaption>When testing the whole system hardware is mandatory
</figcaption>
</figure>
<h2 id="the-tip-involve-humans">The tip: Involve Humans</h2>
<p>Automatic testing is a tremendous cost saver and a very good way to get fast feedback to the developers. But at some point, nothing beats a human tester. This is also where you want to be as close to the real, completely assembled device as possible. While manual tests might still need faking of some parts of the system, they are a very good way to get a feeling for the system as a whole. With manual tests the boundary between testing and quality assurance often becomes a bit blurry, especially if the automated tests are solid and already cover a lot of the functionality. Nevertheless, it is often human testers who catch some of the more subtle bugs or point out where things are not optimal. For that, having the whole setup in a close-to-real environment is a huge benefit. Another big advantage of having such a system around is that it can be shown to customers and stakeholders to get feedback on the usability of the system or to train them in operating it.</p>
<figure>
<img src="/images/testing-with-hardware/TestingPyramid.png" alt="Complement the automatic tests on the hardware with human interaction." style="width: 60%; height: 60%;" onclick="toggleSize(this)" />
<figcaption>Complement the automatic tests on the hardware with human interaction.
</figcaption>
</figure>
<h2 id="balance-is-key">Balance is key</h2>
<p>Investing heavily in testing is a great way to ensure that your software is of high quality, but, as always, there is a balance between effort and gain, and there is no catch-all solution for testing strategies involving hardware. Talking with your team about the testing pyramid and how you want to structure your testing strategy around it is a good starting point. Have a look at the different levels of the pyramid and see where you can get the most bang for your buck. For pure software projects, starting to build the pyramid from the base upward is often a good idea. When hardware is involved it can sometimes be beneficial to build the tip first and then start at the bottom; just make sure that the pyramid does not get top-heavy regarding the number of tests. Even if the approach described in this article might not work for your specific situation, I hope that it gives you something to start a conversation about setting up your tests in a meaningful and cost-effective way.</p>“Testing our software is difficult, because of the hardware involved”, is a common statement when developing software for a specific hardware platform. Testing software that interacts closely with hardware indeed complicates the testing setup and, in turn, often means that additional cost and effort are required. As the range of “embedded software” goes from low-level firmware running on a specific chip to software running on a specifically designed operating system with custom peripherals, there is no one-size-fits-all solution to this.
However, there are some strategies and principles that can help to make testing easier and more effective.Organizing CMake presets2023-02-19T00:00:00+00:002023-02-19T00:00:00+00:00https://dominikberner.ch//cmake-presets-best-practices<p><strong><a href="https://cmake.org/cmake/help/latest/manual/cmake-presets.7.html">CMake presets</a> are arguably one of the biggest improvements to CMake since the introduction of targets in 2014.</strong> In a nutshell, CMake presets contain information on how to configure, build, test and package a CMake project, and they are a tremendous help when managing different configurations for various compilers and platforms. Instead of fiddling with various command-line options, presets are stored in a JSON file and can be used to configure CMake with a single command. This article shows how to set up, organize and use them so they are most effective and easy to maintain.</p>
<h1 id="cmake-presets-in-a-nutshell">CMake presets in a nutshell</h1>
<p>CMake presets were introduced in CMake 3.19; if you are using an older version, I strongly recommend updating, even if it is just for the sake of being able to use presets. As of early 2023, the built-in support for CMake presets in editors and IDEs is still in its infancy, but there is a noticeable push to accommodate them in most tools.</p>
<div style="background-color: #444444; margin-bottom: 1.5em;">
<h1><a href="https://www.amazon.com/dp/1803239727">
CMake Best Practices - The book</a></h1>
<div style="display: grid; width: 100%; grid-template-columns: 25% 1fr; grid-gap: 1%; padding-bottom: 0.5em;">
<div>
<a href="https://www.amazon.com/dp/1803239727">
<img src="/images/cmake-best-practices.jpg" alt="Cover of the CMake Best Practices book by Dominik Berner and Mustafa Kemal Gilor" style="max-width:100%;height:auto;display:block" />
</a>
</div>
<div>
CMake Best Practices: Discover proven techniques for creating and maintaining programming projects with
CMake. Learn how to use CMake to maximum efficiency with this compendium of best practices for a lot of
common tasks when building C++ software.
<br />
<br />
<div class="order-button">
<a href="https://www.amazon.com/dp/1803239727">Get it from Amazon</a>
</div>
</div>
</div>
</div>
<p>As mentioned earlier, CMake presets are a way to store information on how to configure, build, test and package a CMake project. The various presets are stored in JSON files that are named <code class="language-plaintext highlighter-rouge">CMakePresets.json</code> or <code class="language-plaintext highlighter-rouge">CMakeUserPresets.json</code> and are placed in the root of a CMake project. The former is intended to be checked into version control and the latter is intended to contain system-specific information for each individual user. Various types of presets describe the different steps of building a CMake project:</p>
<ul>
<li><strong>Configure presets</strong>: describe how to configure a CMake project. They specify the generator, toolchain file, CMake cache variables and the build directory among other options.</li>
<li><strong>Build presets</strong>: describe how to build a CMake project. They may specify the build targets and the configuration for multi-configuration toolchains such as MSVC or ninja-multi.</li>
<li><strong>Test presets</strong>: describe the environment and conditions for running tests. They may specify the test executable and the test filter.</li>
<li><strong>Package presets</strong>: describe how to package a CMake project. They may specify the package type and the package destination.</li>
<li><strong>Workflow presets</strong>: describe a sequence of actions to be executed. They may specify the presets to be executed and the order in which they are executed.</li>
</ul>
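<p>Workflow presets are the newest of these (they require preset file version 6, introduced with CMake 3.25). A minimal, illustrative sketch of one might look like this; the step names are assumptions and must match configure-, build- and test-presets defined elsewhere in the file:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "version": 6,
  "workflowPresets": [
    {
      "name": "ci-full",
      "steps": [
        { "type": "configure", "name": "ci-ninja-debug" },
        { "type": "build", "name": "ci-ninja-debug-build" },
        { "type": "test", "name": "ci-ninja-debug-test" }
      ]
    }
  ]
}
</code></pre></div></div>
<p>Such a workflow is then run in one go with <code class="language-plaintext highlighter-rouge">cmake --workflow --preset ci-full</code>.</p>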
<p>The structure of a CMake preset file is as follows:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
</span><span class="nl">"cmakeMinimumRequired"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"major"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
</span><span class="nl">"minor"</span><span class="p">:</span><span class="w"> </span><span class="mi">21</span><span class="p">,</span><span class="w">
</span><span class="nl">"patch"</span><span class="p">:</span><span class="w"> </span><span class="mi">0</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"configurePresets"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">...</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">...</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">Add</span><span class="w"> </span><span class="err">more</span><span class="w"> </span><span class="err">presets</span><span class="w"> </span><span class="err">here</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"buildPresets"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">...</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">...</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"testPresets"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">...</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="err">//</span><span class="w"> </span><span class="err">...</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>The command to configure a project using a preset is <code class="language-plaintext highlighter-rouge">cmake --preset <preset-name></code> and to build with a preset use <code class="language-plaintext highlighter-rouge">cmake --build --preset <preset-name></code>.
Test presets can be invoked via CTest by using <code class="language-plaintext highlighter-rouge">ctest --preset <preset-name></code> or with CMake with <code class="language-plaintext highlighter-rouge">cmake --build --preset <preset-name> --target test</code>.
The <code class="language-plaintext highlighter-rouge">cpack</code> command line utility only gained preset support with CMake 3.25, so on older versions packaging with a preset has to go through CMake: <code class="language-plaintext highlighter-rouge">cmake --build --preset <preset-name> --target package</code>.</p>
<p>While Test-, Package- and Workflow-presets are useful, in this article I will focus on organizing the configure- and build-presets as they are the most frequently used. For the full documentation on CMake presets see the <a href="https://cmake.org/cmake/help/latest/manual/cmake-presets.7.html">official CMake documentation</a>.</p>
<h1 id="organizing-cmake-presets">Organizing CMake presets</h1>
<p>As projects grow - especially when they target multiple platforms - the number of CMake presets can grow quickly. This can make it hard to keep track of them and find the right one. It is therefore a good idea to bring some organization into the presets, so that it is easy to find the right preset for the platform and compiler you are using.</p>
<p>I generally recommend having presets for each compiler and platform and combining them by inheritance in further presets. This makes it easy to find the right preset and keeps the individual presets small and simple. I tend to use a naming scheme like this: <code class="language-plaintext highlighter-rouge"><ci>-<generator>-<toolchain>-<buildType></code>, where the prefix is either <code class="language-plaintext highlighter-rouge">ci</code> or <code class="language-plaintext highlighter-rouge">dev</code> depending on whether the preset is intended for CI or local development. Generally, all <code class="language-plaintext highlighter-rouge">ci</code> presets are located in the <code class="language-plaintext highlighter-rouge">CMakePresets.json</code> and are checked in, while <code class="language-plaintext highlighter-rouge">dev</code> presets tend to come, at least partially, from the <code class="language-plaintext highlighter-rouge">CMakeUserPresets.json</code>. Where they go also depends on whether the project is public or not. Presets for public projects should be as generic as possible and not contain any information that is specific to a user or a CI environment, while in a project inside a company the presets might be more specific for a bit of extra convenience.
The generator part is the CMake generator, the toolchain is a combination of platform, compiler and operating system like <code class="language-plaintext highlighter-rouge">clang12-armv7-linux</code>, and the build type is one of the usual CMake build types such as Debug or Release. For multi-configuration generators like MSVC or ninja-multi, the build type is omitted and configured via build presets.</p>
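<p>As a minimal sketch of this scheme (the compiler version, paths and preset names are illustrative), hidden base presets can be combined into one visible, fully specified preset via inheritance:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "version": 3,
  "configurePresets": [
    {
      "name": "ninja",
      "hidden": true,
      "generator": "Ninja",
      "binaryDir": "${sourceDir}/build"
    },
    {
      "name": "gcc12-x86_64-linux",
      "hidden": true,
      "cacheVariables": {
        "CMAKE_C_COMPILER": "gcc-12",
        "CMAKE_CXX_COMPILER": "g++-12"
      }
    },
    {
      "name": "ci-ninja-gcc12-x86_64-linux-debug",
      "inherits": ["ninja", "gcc12-x86_64-linux"],
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "Debug"
      }
    }
  ]
}
</code></pre></div></div>
<p>Only the last preset shows up in <code class="language-plaintext highlighter-rouge">cmake --list-presets</code>; the hidden building blocks stay out of the way.</p>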
<figure>
<img src="/images/cmake-presets/Preset-Organisation.drawio.png" alt="Example scheme how to organize presets for single configuration compilers" onclick="toggleSize(this)" />
<figcaption>Example scheme how to organize presets for single configuration compilers
</figcaption>
</figure>
<p>Some example configuration presets that I use frequently in my projects are <code class="language-plaintext highlighter-rouge">ci-ninja-x86_64-linux-debug</code>, <code class="language-plaintext highlighter-rouge">ci-ninja-x86_64-linux-release</code>, <code class="language-plaintext highlighter-rouge">ci-msvc19-x86_64-windows</code>. Note that the MSVC preset does not specify the build type as it is a multi-configuration generator and thus the build type is configured in the build preset.</p>
<p>It is good practice to mark any presets that should not be used to build the project standalone as <code class="language-plaintext highlighter-rouge">hidden</code>. This makes it easy to find the presets that are intended to be used. Generally, I recommend marking only presets that have the full definition of generator, toolchain and build type as visible.</p>
<h2 id="configuration-presets-in-detail">Configuration Presets in detail</h2>
<p>So what goes into which presets? A typical example for many of my CMake projects contains the following presets.</p>
<ul>
<li><strong>Standalone presets</strong>: These can be aggregated with almost any other combination of presets. They do not define a build directory or a generator. They are all marked <code class="language-plaintext highlighter-rouge">hidden</code> and are intended to be used in other presets. Most of them define either cache variables or environment variables.
<ul>
<li>ccache-env (hidden): A preset that defines some environment variables for ccache. This is used in CI and dev builds where ccache is used to speed up the build.</li>
<li>clang-tidy (hidden): A preset that defines the clang-tidy checks to be used. This is used mainly in CI builds where clang-tidy is used to check the code.</li>
<li><a href="https://include-what-you-use.org/">iwyu</a> (hidden): A preset that defines the include-what-you-use checks to be used. This is used mainly in CI builds where include-what-you-use is used to check the code.</li>
</ul>
</li>
<li><strong>Generator presets</strong>: These presets define the generator and the build directory. I usually also keep them <code class="language-plaintext highlighter-rouge">hidden</code> and use them in other presets.
<ul>
<li>Ninja: My generator of choice when building for Linux and macOS. This preset defines the build directory and the generator.</li>
<li>MSVC: For building on Windows</li>
<li>Any other generator required to build on other platforms</li>
</ul>
</li>
<li><strong>Toolchain presets</strong>: These contain specific compiler versions and flags. These presets are also marked <code class="language-plaintext highlighter-rouge">hidden</code> and are used in other presets. They might also contain library locations, such as for Qt or Boost. I often prefix them with <code class="language-plaintext highlighter-rouge">ci</code> if they contain information that is specific to the CI environment or to the <a href="https://dominikberner.ch/using-devcontainers-with-cpp/">devcontainer</a> bundled with the project. These might (re-)define the build directory.
<ul>
<li>gcc-flags: Defines the flags for gcc and clang such as <code class="language-plaintext highlighter-rouge">-Wall -Werror</code></li>
<li>msvc-flags: Defines the flags for MSVC such as <code class="language-plaintext highlighter-rouge">/W4 /WX</code></li>
<li>clang-sanitizer: Defines the flags for clang sanitizer such as <code class="language-plaintext highlighter-rouge">-fsanitize=address</code></li>
<li>msvc-sanitizer: Defines the flags for MSVC sanitizer such as <code class="language-plaintext highlighter-rouge">/fsanitize=address</code></li>
<li>android-ndk: Defines the toolchain file for the android ndk</li>
<li>Qt-5.15.2: Defines the location of the Qt libraries of a specific version</li>
<li>… and more depending on the complexity and size of the project</li>
</ul>
</li>
<li><strong>Build-type presets</strong>: Define the build type for single-configuration generators. In addition to the default Debug, Release and RelWithDebInfo, I sometimes add coverage build types here. Usually, they only define cache variables or set options. For multi-config generators, I move the relevant information into the build presets.</li>
</ul>
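<p>For multi-configuration generators, the split between configure- and build-presets looks roughly like this - one visible configure preset without a build type, plus one build preset per configuration (the names are illustrative):</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "version": 3,
  "configurePresets": [
    {
      "name": "ci-msvc19-x86_64-windows",
      "generator": "Visual Studio 16 2019",
      "binaryDir": "${sourceDir}/build"
    }
  ],
  "buildPresets": [
    {
      "name": "ci-msvc19-x86_64-windows-debug",
      "configurePreset": "ci-msvc19-x86_64-windows",
      "configuration": "Debug"
    },
    {
      "name": "ci-msvc19-x86_64-windows-release",
      "configurePreset": "ci-msvc19-x86_64-windows",
      "configuration": "Release"
    }
  ]
}
</code></pre></div></div>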
<details>
<summary>
An example <code class="language-plaintext highlighter-rouge">CMakePresets.json</code> to build a Qt project for Linux, Windows, Android and WebAssembly might look like this (click to expand).
Note that this particular example contains some file paths that are specific to the CI setup. To easily replicate build environments I recommend using <a href="https://dominikberner.ch/using-devcontainers-with-cpp/">devcontainers</a>. This particular setup only contains configure and build presets.
</summary>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
</span><span class="nl">"cmakeMinimumRequired"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"major"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
</span><span class="nl">"minor"</span><span class="p">:</span><span class="w"> </span><span class="mi">21</span><span class="p">,</span><span class="w">
</span><span class="nl">"patch"</span><span class="p">:</span><span class="w"> </span><span class="mi">0</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"configurePresets"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ccache-env"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"environment"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CCACHE_BASEDIR"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${sourceDir}"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CCACHE_SLOPPINESS"</span><span class="p">:</span><span class="w"> </span><span class="s2">"pch_defines,time_macros"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"emscripten"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_EXE_LINKER_FLAGS_INIT"</span><span class="p">:</span><span class="w"> </span><span class="s2">"-s WASM=1 -s USE_SDL"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"environment"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CXX"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/emsdk/emscripten/1.38.30/em++"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CC"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/emsdk/emscripten/1.38.30/emcc"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"toolchain"</span><span class="p">:</span><span class="w"> </span><span class="s2">"emscripten"</span><span class="p">,</span><span class="w">
</span><span class="nl">"binaryDir"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${sourceDir}/build_wasm"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"verbose-debug-output"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"EXTENDED_DEBUG_OUTPUT_ENABLED"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ON"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"qt-webassembly"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_PREFIX_PATH"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/Qt/5.14.1/android_wasm/"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CMAKE_FIND_ROOT_PATH_MODE_PACKAGE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"BOTH"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CMAKE_FIND_ROOT_PATH_MODE_LIBRARY"</span><span class="p">:</span><span class="w"> </span><span class="s2">"BOTH"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"android"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"toolchainFile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-ndk/build/cmake/android.toolchain.cmake"</span><span class="p">,</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"ANDROID_ABI"</span><span class="p">:</span><span class="w"> </span><span class="s2">"armeabi-v7a"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_PLATFORM"</span><span class="p">:</span><span class="w"> </span><span class="s2">"23"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_SDK"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-sdk"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CMAKE_FIND_ROOT_PATH_MODE_PACKAGE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"BOTH"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CMAKE_FIND_ROOT_PATH_MODE_LIBRARY"</span><span class="p">:</span><span class="w"> </span><span class="s2">"BOTH"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CMAKE_FIND_ROOT_PATH_MODE_INCLUDE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"BOTH"</span><span class="p">,</span><span class="w">
</span><span class="nl">"OPENSSL_ROOT_DIR"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android_libs"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"environment"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"JAVA_HOME"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/usr/lib/jvm/java-1.8.0-openjdk-amd64"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_SDK_ROOT"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-sdk"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_NDK_ROOT"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-sdk"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"binaryDir"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${sourceDir}/build_android"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Qt-desktop"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_PREFIX_PATH"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/Qt/5.14.1/gcc_64/"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"binaryDir"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${sourceDir}/build"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Qt-android"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_PREFIX_PATH"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/Qt/5.14.1/android/"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-ninja"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ninja"</span><span class="p">,</span><span class="w">
</span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"build using Ninja generator"</span><span class="p">,</span><span class="w">
</span><span class="nl">"inherits"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"ccache-env"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"generator"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ninja"</span><span class="p">,</span><span class="w">
</span><span class="nl">"hidden"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-ninja-debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ninja Debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"inherits"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"Qt-desktop"</span><span class="p">,</span><span class="w">
</span><span class="s2">"ci-ninja"</span><span class="p">,</span><span class="w">
</span><span class="s2">"verbose-debug-output"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_BUILD_TYPE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Debug"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-ninja-wasm-debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ninja Webassembly Debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"inherits"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"ci-ninja"</span><span class="p">,</span><span class="w">
</span><span class="s2">"qt-webassembly"</span><span class="p">,</span><span class="w">
</span><span class="s2">"emscripten"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_BUILD_TYPE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Debug"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-ninja-debug-unittest"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ninja Debug for Unit Tests"</span><span class="p">,</span><span class="w">
</span><span class="nl">"inherits"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"Qt-desktop"</span><span class="p">,</span><span class="w">
</span><span class="s2">"ci-ninja"</span><span class="p">,</span><span class="w">
</span><span class="s2">"verbose-debug-output"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_BUILD_TYPE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Debug"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-ninja-release"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ninja Release"</span><span class="p">,</span><span class="w">
</span><span class="nl">"inherits"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"Qt-desktop"</span><span class="p">,</span><span class="w">
</span><span class="s2">"ci-ninja"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_BUILD_TYPE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Release"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-ninja-android-debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Ninja Android Debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"inherits"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"Qt-android"</span><span class="p">,</span><span class="w">
</span><span class="s2">"ci-ninja"</span><span class="p">,</span><span class="w">
</span><span class="s2">"android"</span><span class="p">,</span><span class="w">
</span><span class="s2">"verbose-debug-output"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"CMAKE_BUILD_TYPE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Debug"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-windows-msvc2017"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="p">:</span><span class="w"> </span><span class="s2">"MSVC 2017"</span><span class="p">,</span><span class="w">
</span><span class="nl">"generator"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"Visual Studio 15 2017"</span><span class="p">,</span><span class="w">
</span><span class="nl">"binaryDir"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${sourceDir}/build"</span><span class="p">,</span><span class="w">
</span><span class="nl">"condition"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"equals"</span><span class="p">,</span><span class="w">
</span><span class="nl">"lhs"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${hostSystemName}"</span><span class="p">,</span><span class="w">
</span><span class="nl">"rhs"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Windows"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nl">"buildPresets"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-msvc2017-debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"MSVC 2017 Debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"configurePreset"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-windows-msvc2017"</span><span class="p">,</span><span class="w">
</span><span class="nl">"configuration"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Debug"</span><span class="p">,</span><span class="w">
</span><span class="nl">"condition"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"equals"</span><span class="p">,</span><span class="w">
</span><span class="nl">"lhs"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${hostSystemName}"</span><span class="p">,</span><span class="w">
</span><span class="nl">"rhs"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Windows"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-msvc2017-release"</span><span class="p">,</span><span class="w">
</span><span class="nl">"displayName"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"MSVC 2017 release"</span><span class="p">,</span><span class="w">
</span><span class="nl">"configurePreset"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"ci-windows-msvc2017"</span><span class="p">,</span><span class="w">
</span><span class="nl">"configuration"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Release"</span><span class="p">,</span><span class="w">
</span><span class="nl">"condition"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"equals"</span><span class="p">,</span><span class="w">
</span><span class="nl">"lhs"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${hostSystemName}"</span><span class="p">,</span><span class="w">
</span><span class="nl">"rhs"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Windows"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div> </div>
</details>
<p>Overall this setup usually gives me a very good mix of being open enough to be used in different environments and specific enough to be conveniently useful. The downside is that this can lead to a lot of presets to manage, so it might be helpful to split these up into multiple files and use includes to manage them.</p>
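<p>If the preset files grow too large, they can be split up using the <code class="language-plaintext highlighter-rouge">include</code> field, which is available from preset file version 4 (CMake 3.23) onward. A minimal sketch of a top-level <code class="language-plaintext highlighter-rouge">CMakePresets.json</code> with hypothetical file names:</p>

```json
{
  "version": 4,
  "include": [
    "presets/qt.json",
    "presets/ci.json"
  ]
}
```

<p>Presets defined in the included files can then be used and inherited from as if they were defined directly in the top-level file.</p>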
<h2 id="presets-in-real-life">Presets in real life</h2>
<p>In my opinion, CMake presets are one of the most powerful features of CMake and a great addition to the tool. Especially if you’re working with an editor or IDE that supports them natively, they reduce the complexity of using CMake drastically, as I no longer have to memorize the specific options and flags for each project. Presets are also a great way to keep the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> clean and agnostic to the build environment. With presets in place, the temptation to put specific compiler flags or other environment-specific information into the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> is greatly reduced - which is a very good thing when it comes to maintainability. The downside is, of course, that the complexity of handling multi-platform projects is not reduced but merely shifted to a different file, and that one now has to manage not just the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> but also the preset files.</p>
<p>Overall I think that the benefits of using CMake presets outweigh the downsides, and I would recommend using them in every project - whether you follow my strategy for organizing them or find your own way to do it.</p>CMake presets are arguably one of the biggest improvements in CMake since the introduction of targets in 2014. In a nutshell, CMake presets contain information on how to configure, build, test and package a CMake project, and they are a tremendous help when managing different configurations for various compilers and platforms. Instead of fiddling with various command-line options, presets are stored in a JSON file and can be used to configure CMake with a single command. This article shows how to set up, organize and use them so they are most effective and easy to maintain.CMake line by line - Building an Android APK with Qt52022-11-24T00:00:00+00:00https://dominikberner.ch//cmake-android-apk-and-qt<p><strong>If you build GUI applications with C++ and Qt, chances are that you will have to create a mobile version of them.</strong> While the discussion whether Qt and C++ or the native Android SDK is the right technology to use is certainly worth a thought, there are situations where it makes sense to stick with Qt and C++. This article illustrates line by line how to build a C++/Qt application for Android with <em>CMake</em> and how to pack it into an Android APK.</p>
<p>This post will walk through the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> file to build a Qt application for desktop and Android and then pack it into an APK.</p>
<h1 id="what-you-need">What you need</h1>
<p>To build an Android APK, you need the following:</p>
<ul>
<li><a href="https://developer.android.com/studio">Android SDK</a> and <a href="https://developer.android.com/studio/releases/platform-tools">Android SDK Tools</a> - Either install it via Android Studio or download it from the website.</li>
<li><a href="https://developer.android.com/ndk">Android NDK</a> version 20 or newer - Either install it via Android Studio or download it from the website.</li>
<li><a href="https://www.qt.io/download-qt-installer">Qt5 for Android</a> Version 5.15 - Either install it with the installer, build it yourself or use <a href="https://github.com/miurahr/aqtinstall">aqtinstall</a> to install it<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>.</li>
<li><a href="https://cmake.org/">CMake</a> at least version 3.21</li>
</ul>
<p>The example project used in this article can be found on <a href="https://github.com/bernedom/CMakeQtAPK/">GitHub</a>.</p>
<p>Since the dependencies for building an Android APK with CMake are quite heavy, I recommend using <a href="https://dominikberner.ch/using-devcontainers-with-cpp/">development containers</a> to set up your development environment. This way you can use the same environment on your local machine and your CI server.</p>
<div style="background-color: #444444; margin-bottom: 1.5em;">
<h1><a href="https://www.amazon.com/dp/1803239727">
CMake Best Practices - The book</a></h1>
<div style="display: grid; width: 100%; grid-template-columns: 25% 1fr; grid-gap: 1%; padding-bottom: 0.5em;">
<div>
<a href="https://www.amazon.com/dp/1803239727">
<img src="/images/cmake-best-practices.jpg" alt="Cover of the CMake Best Practices book by Dominik Berner and Mustafa Kemal Gilor" style="max-width:100%;height:auto;display:block" />
</a>
</div>
<div>
CMake Best Practices: Discover proven techniques for creating and maintaining programming projects with
CMake. Learn how to use CMake to maximum efficiency with this compendium of best practices for a lot of
common tasks when building C++ software.
<br />
<br />
<div class="order-button">
<a href="https://www.amazon.com/dp/1803239727">Get it from Amazon</a>
</div>
</div>
</div>
</div>
<h1 id="setting-up-the-project">Setting up the project</h1>
<p>In a nutshell, running a C++/Qt application on Android works by wrapping the C++ application in Java code. The Java code, all dependencies and resources are then packed into an <em>Android APK</em>. Under the hood, the C++ code is compiled into a shared library that is then loaded by the Java code at runtime. The C++ code is compiled and linked against the libraries from the Android Native Development Kit (<em>NDK</em>), while the Java code is built on the Android <em>SDK</em>. Conveniently, the NDK provides a toolchain file for CMake to configure the C++ compiler and set the sysroot.
Qt for Android contains a template for the <code class="language-plaintext highlighter-rouge">AndroidManifest.xml</code> in the Qt installation directory under <code class="language-plaintext highlighter-rouge">src/android/templates</code>, which can be copied to the source folder.</p>
<p>The example builds a small slideshow application that rotates through a set of images. All the code for this example can be found on <a href="https://github.com/bernedom/CMakeQtAPK/">GitHub/bernedom/CMakeQtAPK</a>.</p>
<figure>
<img src="/images/qml_on_android/emulator.gif" alt="The sample QML application running in the android emulator" onclick="toggleSize(this)" />
<figcaption>The sample QML application running in the android emulator
</figcaption>
</figure>
<p>The project is structured as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>├── android
│ └── AndroidManifest.xml
├── CMakeLists.txt
├── CMakePresets.json
└── src
├── main.cpp
├── qml
│ ├── <various qml resources>
│ ├── main.qml
│ └── qml.qrc
├── slideshow.cpp
└── slideshow.h
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">src</code> folder contains all C++ sources and Qml files and resources to build the app. The <code class="language-plaintext highlighter-rouge">android</code> folder contains the android-specific files, in our case only the <code class="language-plaintext highlighter-rouge">AndroidManifest.xml</code> file. The <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> file contains the build instructions for the project. The <code class="language-plaintext highlighter-rouge">CMakePresets.json</code> file contains the build configurations for the project.</p>
<p>So let’s look at the CMake code to build an APK.</p>
<h1 id="building-an-apk-with-cmake">Building an APK with CMake</h1>
<h2 id="finding-and-configuring-cmake-to-use-android-and-qt">Finding and configuring CMake to use Android and Qt</h2>
<p>To build for Android, CMake needs to know the location of a java compiler, the Android NDK and SDK as well as the Qt location. Additionally, the Android target platform and ABI have to be specified. I prefer to pass configuration options to CMake by using <a href="https://cmake.org/cmake/help/latest/manual/cmake-presets.7.html">CMake presets</a>, which are available from CMake 3.19. Alternatively, the options can be passed to CMake via the command line or by setting environment variables. Avoid hard coding the values in the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> as it makes it harder to build the project on different machines.</p>
<p>The CMake presets are defined in the <code class="language-plaintext highlighter-rouge">CMakePresets.json</code> file. The presets are defined in a JSON file and can be used to configure CMake:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">...</span><span class="w">
</span><span class="nl">"configurePresets"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"android"</span><span class="p">,</span><span class="w">
</span><span class="nl">"toolchainFile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-ndk/build/cmake/android.toolchain.cmake"</span><span class="p">,</span><span class="w">
</span><span class="nl">"cacheVariables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"ANDROID_ABI"</span><span class="p">:</span><span class="w"> </span><span class="s2">"armeabi-v7a"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_PLATFORM"</span><span class="p">:</span><span class="w"> </span><span class="s2">"23"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_SDK"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-sdk"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CMAKE_FIND_ROOT_PATH_MODE_PACKAGE"</span><span class="p">:</span><span class="w"> </span><span class="s2">"BOTH"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_BUILD_ABI_armeabi-v7a"</span><span class="w"> </span><span class="p">:</span><span class="w"> </span><span class="s2">"ON"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"environment"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"JAVA_HOME"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/usr/lib/jvm/java-1.8.0-openjdk-amd64"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_SDK_ROOT"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-sdk"</span><span class="p">,</span><span class="w">
</span><span class="nl">"ANDROID_NDK_ROOT"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/opt/android-ndk"</span><span class="p">,</span><span class="w">
</span><span class="nl">"CMAKE_PREFIX_PATH"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/usr/local/Qt/android"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"binaryDir"</span><span class="p">:</span><span class="w"> </span><span class="s2">"${sourceDir}/build_android"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">]</span><span class="w">
</span><span class="err">...</span><span class="w">
</span></code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">name</code> field is used to identify the preset and the <code class="language-plaintext highlighter-rouge">toolchainFile</code> field points to the toolchain file provided by the Android NDK. This toolchain file tells CMake to use the clang compiler from the NDK to cross-compile and changes the search paths to the libraries to the sysroot included with the NDK.
The <code class="language-plaintext highlighter-rouge">cacheVariables</code> field is used to pass configuration options to CMake. In this case, we set the ABI to <code class="language-plaintext highlighter-rouge">armeabi-v7a</code>, the target platform to Android 6.0 (API level 23) and the location of the Android SDK.
Additionally, we set the <code class="language-plaintext highlighter-rouge">CMAKE_FIND_ROOT_PATH_MODE_PACKAGE</code> to <code class="language-plaintext highlighter-rouge">BOTH</code> to make sure that CMake searches for packages in the sysroot and in the host system. This is needed to find the Qt libraries which are outside the sysroot. The <code class="language-plaintext highlighter-rouge">ANDROID_BUILD_ABI_armeabi-v7a</code> variable is used to enable building for the armv7 ABI in the toolchain file.
The <code class="language-plaintext highlighter-rouge">environment</code> field is used to set environment variables that are used by the toolchain file. The <code class="language-plaintext highlighter-rouge">JAVA_HOME</code> variable is used to find the Java compiler, the <code class="language-plaintext highlighter-rouge">ANDROID_SDK_ROOT</code> and <code class="language-plaintext highlighter-rouge">ANDROID_NDK_ROOT</code> variables are used to find the Android SDK and NDK, and the <code class="language-plaintext highlighter-rouge">CMAKE_PREFIX_PATH</code> variable is used to find the Qt libraries.
The <code class="language-plaintext highlighter-rouge">binaryDir</code> field is used to set the build directory for the preset. The <code class="language-plaintext highlighter-rouge">${sourceDir}</code> variable is a CMake variable that is replaced by the path to the source directory.</p>
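<p>With this preset in place, configuring boils down to a single <code class="language-plaintext highlighter-rouge">cmake --preset android</code> invocation. For comparison, here is a sketch of roughly what the equivalent plain command line would look like - the paths are the example values from the preset above and will differ on your machine:</p>

```shell
# Configure using the preset
cmake --preset android

# Roughly equivalent plain command line (example paths from the preset above)
cmake -S . -B build_android \
  -DCMAKE_TOOLCHAIN_FILE=/opt/android-ndk/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=armeabi-v7a \
  -DANDROID_PLATFORM=23 \
  -DANDROID_SDK=/opt/android-sdk \
  -DCMAKE_FIND_ROOT_PATH_MODE_PACKAGE=BOTH \
  -DANDROID_BUILD_ABI_armeabi-v7a=ON
```

<p>Note that the entries from the <code class="language-plaintext highlighter-rouge">environment</code> field (<code class="language-plaintext highlighter-rouge">JAVA_HOME</code> and friends) would additionally have to be exported in the shell when bypassing the preset.</p>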
<p>With the environment set up, let’s have a look at the <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code></p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cmake_minimum_required</span><span class="p">(</span>VERSION 3.21<span class="p">)</span>
<span class="nb">project</span><span class="p">(</span>
CMakeQtAPKExample
VERSION 1.0
DESCRIPTION
<span class="s2">"An example repository to showcase how to build a simple C++ android app with Qt/QML and CMake"</span>
LANGUAGES CXX<span class="p">)</span>
<span class="nb">find_package</span><span class="p">(</span>Qt5 REQUIRED COMPONENTS Core Quick<span class="p">)</span>
<span class="nb">set</span><span class="p">(</span>ANDROID_PACKAGE_SOURCE_DIR <span class="s2">"</span><span class="si">${</span><span class="nv">CMAKE_CURRENT_SOURCE_DIR</span><span class="si">}</span><span class="s2">/android"</span> CACHE INTERNAL <span class="s2">""</span> FORCE<span class="p">)</span>
<span class="nb">set</span><span class="p">(</span>CMAKE_AUTOMOC ON<span class="p">)</span>
<span class="nb">set</span><span class="p">(</span>CMAKE_AUTORCC ON<span class="p">)</span>
<span class="nb">if</span><span class="p">(</span>NOT ANDROID<span class="p">)</span>
<span class="nb">add_executable</span><span class="p">(</span>QMLDesktopExample<span class="p">)</span>
<span class="nb">elseif</span><span class="p">(</span>ANDROID<span class="p">)</span>
<span class="nb">add_library</span><span class="p">(</span>QMLDesktopExample SHARED<span class="p">)</span>
<span class="nb">find_package</span><span class="p">(</span>Qt5 REQUIRED AndroidExtras<span class="p">)</span>
<span class="nb">target_link_libraries</span><span class="p">(</span>QMLDesktopExample PRIVATE Qt5::AndroidExtras<span class="p">)</span>
<span class="nb">set_target_properties</span><span class="p">(</span>QMLDesktopExample PROPERTIES LIBRARY_OUTPUT_NAME <span class="si">${</span><span class="nv">PROJECT_NAME</span><span class="si">}</span><span class="p">)</span>
<span class="nb">add_dependencies</span><span class="p">(</span>apk QMLDesktopExample<span class="p">)</span>
<span class="nb">endif</span><span class="p">()</span>
<span class="nb">target_sources</span><span class="p">(</span>QMLDesktopExample PRIVATE src/main.cpp src/slideshow.cpp src/qml/qml.qrc<span class="p">)</span>
<span class="nb">target_link_libraries</span><span class="p">(</span>QMLDesktopExample PRIVATE Qt5::Core Qt5::Quick<span class="p">)</span>
</code></pre></div></div>
<p>The first line <code class="language-plaintext highlighter-rouge">cmake_minimum_required(VERSION 3.21)</code> sets the minimum required CMake version to 3.21. Since this project uses the <code class="language-plaintext highlighter-rouge">toolchainFile</code> field of CMake presets, which became available with CMake 3.21, this is the minimum required version.</p>
<p>The call to <code class="language-plaintext highlighter-rouge">project()</code> sets up the basic project information. In this case, the project is called <code class="language-plaintext highlighter-rouge">CMakeQtAPKExample</code> and we tell CMake that it is a C++ project by setting the <code class="language-plaintext highlighter-rouge">LANGUAGES</code> to <code class="language-plaintext highlighter-rouge">CXX</code>. Qt will later use the project name for the generated APK.</p>
<p>Before we start defining our application, we tell CMake to look for the necessary Qt libraries with the call to <code class="language-plaintext highlighter-rouge">find_package(Qt5 REQUIRED COMPONENTS Core Quick)</code>. This tells CMake to look for the package <code class="language-plaintext highlighter-rouge">Qt5</code>, which is required for building this project, and that inside the package the modules <code class="language-plaintext highlighter-rouge">Core</code> and <code class="language-plaintext highlighter-rouge">Quick</code> should be present. If any of the modules or the package itself is not found, CMake stops with an error. Since Qt for Android is usually not installed in the default location, we added the path to Qt to the <code class="language-plaintext highlighter-rouge">CMAKE_PREFIX_PATH</code> variable in the <code class="language-plaintext highlighter-rouge">CMakePresets.json</code>.
Next, we tell CMake where to find the Android-specific files by setting the <code class="language-plaintext highlighter-rouge">ANDROID_PACKAGE_SOURCE_DIR</code> variable to the <code class="language-plaintext highlighter-rouge">android</code> subfolder. Since this is a very simple project, the directory only contains the <code class="language-plaintext highlighter-rouge">AndroidManifest.xml</code>, but for more complex projects <code class="language-plaintext highlighter-rouge">build.gradle</code> and other Gradle scripts, as well as custom Java code, can be placed there. You’ll notice that this variable is set as an internal cache variable that is force-overwritten on each configuration step. This is needed so that the Qt AndroidExtras module is forced to use our custom <code class="language-plaintext highlighter-rouge">AndroidManifest.xml</code> file instead of the generated one - a somewhat hacky workaround for the Qt5 Android extras not respecting the variable otherwise.</p>
<p>The next two lines <code class="language-plaintext highlighter-rouge">set(CMAKE_AUTOMOC ON)</code> and <code class="language-plaintext highlighter-rouge">set(CMAKE_AUTORCC ON)</code> tell CMake to automatically generate the <code class="language-plaintext highlighter-rouge">moc</code>-files and compile any <code class="language-plaintext highlighter-rouge">qrc</code> files attached to any CMake target.</p>
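<p>For context, here is a rough sketch of what these two flags automate. Without them, the moc and rcc steps would have to be wired up manually with the macros provided by the Qt5 CMake packages, along these lines:</p>

```cmake
# Manual alternative to CMAKE_AUTOMOC / CMAKE_AUTORCC (sketch)
qt5_wrap_cpp(MOC_SOURCES src/slideshow.h)      # run moc on headers containing Q_OBJECT
qt5_add_resources(RCC_SOURCES src/qml/qml.qrc) # compile the resource file
target_sources(QMLDesktopExample PRIVATE ${MOC_SOURCES} ${RCC_SOURCES})
```

<p>With the automatic variables switched on, CMake handles this for every target defined after they are set.</p>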
<p>A typical use case is that the same code is used for desktop and mobile. In this case, we can use the <code class="language-plaintext highlighter-rouge">if</code>-statement to check if we are building for Android or not. If we are building for a desktop, a regular executable is created by defining the target with <code class="language-plaintext highlighter-rouge">add_executable</code>.</p>
<p>If we are building for Android, we create a target that creates a shared library with <code class="language-plaintext highlighter-rouge">add_library(QMLDesktopExample SHARED)</code>. The library has to be shared, so it can be dynamically loaded by the generated java code on android. This is different from regular library projects, where we would let the developer choose if a library target should be built as a static or dynamic lib.</p>
<p>Next, we search for the <code class="language-plaintext highlighter-rouge">AndroidExtras</code> module from Qt with another <code class="language-plaintext highlighter-rouge">find_package</code> call. The <code class="language-plaintext highlighter-rouge">AndroidExtras</code> module does most of the Android-specific work in CMake. It generates an <code class="language-plaintext highlighter-rouge">android_deployment_settings.json</code> file in the build directory, which is later used to pack the application. It also generates a target called <code class="language-plaintext highlighter-rouge">apk</code> which can later be used for packaging the application for deployment to Android, plus other similar targets such as <code class="language-plaintext highlighter-rouge">aab</code> for building Android App Bundles. Unfortunately, the documentation for the <a href="https://doc.qt.io/qt-5/android-building.html">AndroidExtras module</a> is very slim, so you will have to look at the source code if you want to find out more.</p>
<p>The <code class="language-plaintext highlighter-rouge">target_link_libraries</code> call links the <code class="language-plaintext highlighter-rouge">Qt5::AndroidExtras</code> module to our application target which will add any dependencies needed to run the code on android.</p>
<p>Since the <code class="language-plaintext highlighter-rouge">AndroidExtras</code> automatically names the APK after the project and expects the library to be named the same, we set the <code class="language-plaintext highlighter-rouge">LIBRARY_OUTPUT_NAME</code> property to <code class="language-plaintext highlighter-rouge">${PROJECT_NAME}</code>, the variable that contains the name of the project (<code class="language-plaintext highlighter-rouge">CMakeAPKExample</code>). Note that we only change the name of the output file and not the name of the target itself.</p>
<p>Finally, a dependency on the <code class="language-plaintext highlighter-rouge">QMLDesktopExample</code> target is added to the <code class="language-plaintext highlighter-rouge">apk</code> target with the call to <code class="language-plaintext highlighter-rouge">add_dependencies(apk QMLDesktopExample)</code>. This ensures that whenever the <code class="language-plaintext highlighter-rouge">apk</code> target is built, our library target is built as well. The <code class="language-plaintext highlighter-rouge">apk</code> target is provided by the Qt <code class="language-plaintext highlighter-rouge">AndroidExtras</code>. And that is all that is needed regarding the Android-specific CMake code.</p>
<p>Now that the target is set up, we only need to add the C++ sources and QML resources to it. This happens in the next line: <code class="language-plaintext highlighter-rouge">target_sources(QMLDesktopExample PRIVATE src/main.cpp src/slideshow.cpp src/qml/qml.qrc)</code>. The <code class="language-plaintext highlighter-rouge">PRIVATE</code> keyword tells CMake that the files are only used by the target itself and not by any other target. And lastly, the <code class="language-plaintext highlighter-rouge">target_link_libraries</code> call links the <code class="language-plaintext highlighter-rouge">Qt5::Core</code> and <code class="language-plaintext highlighter-rouge">Qt5::Quick</code> modules to our application target, which adds any dependencies needed to run the code.</p>
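<p>Putting the pieces together, the target setup described in this section can be sketched roughly like this (a condensed sketch, not the verbatim <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> from the example repository; it assumes the Qt5 <code class="language-plaintext highlighter-rouge">Core</code> and <code class="language-plaintext highlighter-rouge">Quick</code> components were already found earlier in the file):</p>

```cmake
# Condensed sketch of the target setup described above (Qt 5).
if(ANDROID)
  # On Android the app must be a shared library so the generated
  # Java launcher code can load it dynamically.
  add_library(QMLDesktopExample SHARED)
  find_package(Qt5 REQUIRED COMPONENTS AndroidExtras)
  target_link_libraries(QMLDesktopExample PRIVATE Qt5::AndroidExtras)
  # The packaging tooling expects the library to carry the project name.
  set_target_properties(QMLDesktopExample PROPERTIES
                        LIBRARY_OUTPUT_NAME ${PROJECT_NAME})
  # Make sure building the apk target also builds our library.
  add_dependencies(apk QMLDesktopExample)
else()
  add_executable(QMLDesktopExample)
endif()

target_sources(QMLDesktopExample PRIVATE src/main.cpp src/slideshow.cpp
               src/qml/qml.qrc)
target_link_libraries(QMLDesktopExample PRIVATE Qt5::Core Qt5::Quick)
```

<p>In this sketch the <code class="language-plaintext highlighter-rouge">add_dependencies</code> call sits inside the Android branch because the <code class="language-plaintext highlighter-rouge">apk</code> target only exists once the <code class="language-plaintext highlighter-rouge">AndroidExtras</code> have been found.</p>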
<h2 id="building-the-apk">Building the APK</h2>
<p>Now that we have the CMake code ready, we can build the APK. To configure and build the example project, we can use the <code class="language-plaintext highlighter-rouge">cmake</code> command line tool and the preset we defined like this:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
cmake <span class="nt">--preset</span><span class="o">=</span>android <span class="nt">-S</span> <Path_To_CMakeLists.txt>
cmake <span class="nt">--build</span> build_android <span class="nt">--target</span> apk
</code></pre></div></div>
<p>This will first configure the project and then build the APK. The APK will be located in the <code class="language-plaintext highlighter-rouge">build_android</code> folder as defined in our preset.</p>
<p>Since Qt itself and the Android NDK and SDK are quite heavy dependencies, I recommend using <a href="https://dominikberner.ch/using-devcontainers-with-cpp/">containerized build environments</a> to build the project. This ensures that the build environment is always the same and that your OS is not polluted with the dependencies. This comes in especially handy because one often needs to build for different Android versions, and installing everything directly into the OS can be a hassle.</p>
<h2 id="running-the-apk">Running the APK</h2>
<p>The APK can now be run on any supported Android device, either by copying the APK to the device or an Android emulator, or by using the <code class="language-plaintext highlighter-rouge">adb</code> tool. To install it with <code class="language-plaintext highlighter-rouge">adb</code>, USB debugging has to be enabled on the Android device; then we can use the following commands:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>adb usb
adb <span class="nb">install</span> <span class="nt">-r</span> <Path_To_APK>
</code></pre></div></div>
<h2 id="conclusion">Conclusion</h2>
<p>While it needs a bit of effort to set up, building and running simple Qt applications on Android is not that hard thanks to the Qt AndroidExtras module, which does most of the work. There is more to releasing software on Android, such as publishing on the Play Store, signing the generated APK, and publishing for the various Android versions out there. However, this post should give you a good starting point for Qt on Android, and hopefully you will be able to build your own Qt applications for Android. Have fun running stuff on Android!</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>While Qt6 is out and provides some additional features for building for Android, Qt5 is still widely used, so this example focuses on Qt5, but the technique should also apply to Qt6. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
</ol>
</div>If you build GUI applications with C++ and Qt, chances are that you have to create a mobile version of them. While the discussion whether Qt and C++ or the native Android SDK is the right technology to use is certainly worth a thought, there are situations where it makes sense to stick with Qt and C++. This article illustrates line by line how to build a C++/Qt application for Android with CMake and how to pack it into an Android APK.

Bringing software quality into roadmaps2022-08-22T00:00:00+00:002022-08-22T00:00:00+00:00https://dominikberner.ch//software-quality-roadmap

<p>“We really struggle to bring engineering topics into the roadmap! When do we finally get the time to improve our code quality?” Many software development teams gripe when it comes to getting engineering topics into a product roadmap. For many, improving the less visible aspects of software quality comes second to putting more features into a product. Nevertheless, creating quality software is a necessity if products are expected to perform on the market and be developed for a long time. The <a href="https://iso25000.com/index.php/en/iso-25000-standards/iso-25010">ISO 25010 model for software quality</a> lays a good foundation for discussing the quality topic with the relevant stakeholders, and with a few easy-to-learn tools, putting features that improve the quality into a backlog becomes much easier.</p>
<p>Most of the time it is not that the business-facing people in software development such as POs, marketing, sales and so on are not interested in having a quality product at their hand. The reason why work regarding the internal quality of a piece of software is pushed to the back of the backlog, is often that we engineers are very badly prepared to hold a constructive dialog on software quality with people with a less technical focus.</p>
<p>The most common problem is that teams struggle to define <em>software quality</em> in the first place and just refer to any of the work on improving a codebase as “refactoring” or “working on tech debt”. While the developers might have a quite clear idea of what these terms mean, they often do not serve well when talking with non-technical people.
Agile software engineering is often very value-driven, so if one cannot argue the value of a task during planning, it will consequently be deprioritized. If developers want to bring software quality topics into the planning discussions, they have to change how they talk about them and how they present the information to justify why they are necessary.</p>
<h2 id="defining-quality">Defining Quality</h2>
<p>Let’s start by describing what <em>software quality</em> means by using the <a href="https://iso25000.com/index.php/en/iso-25000-standards/iso-25010">ISO 25010 model for software product quality</a>. The ISO 25010 standard lists eight characteristics of software quality and combined they give a very comprehensible definition of what software quality entails.</p>
<figure>
<img src="/images/software-quality/iso25010.png" alt="The ISO 25010 standard, listing the 8 characteristics of software quality" onclick="toggleSize(this)" />
<figcaption>The ISO 25010 standard, listing the 8 characteristics of software quality
<div class="source-annotation">Source: https://iso25000.com/index.php/en/iso-25000-standards/iso-25010</div>
</figcaption>
</figure>
<p>In a nutshell, the eight characteristics are:</p>
<ul>
<li><strong>Functional Suitability</strong> - How well does the product meet the stated and implied needs?</li>
<li><strong>Performance efficiency</strong> - How effectively does the product use the available resources?</li>
<li><strong>Compatibility</strong> - To what degree can the system interact or exchange information with other products and systems?</li>
<li><strong>Usability</strong> - How satisfied, effective, and efficient are users when using the product?</li>
<li><strong>Reliability</strong> - How well does the system perform its function under specific conditions?</li>
<li><strong>Security</strong> - How well are the system and the data within the system protected against unauthorized access?</li>
<li><strong>Maintainability</strong> - How effective and efficient can the system be changed or developed further?</li>
<li><strong>Portability</strong> - How easily can the product be transferred to a different hardware or run-time environment?</li>
</ul>
<p>A more comprehensive description of each category is available on the <a href="https://iso25000.com/index.php/en/iso-25000-standards/iso-25010">ISO 25010 homepage</a>.</p>
<p>Defining and understanding the quality metrics is the very first step towards an actionable plan on where and how to improve quality in a software system. As this is an international standard that is naturally intended for a very broad audience, the characteristics might lack context, so there may be some work to do to define what they entail for a specific product. One of the first things to do is therefore to get everyone involved with developing the product aligned on the understanding of the various quality characteristics, for example by having someone present the ISO standard.</p>
<h2 id="assessing-quality-from-an-engineering-perspective">Assessing Quality from an Engineering Perspective</h2>
<p>Once a team has defined and communicated what is meant by “software quality”, creating a quality assessment and quality goals for the current software is the next step. A simple gap analysis can be done by having the developers draw a spider graph of where they think their software currently is and set goal values for where they would like to be. For a first iteration, this can be done on gut feeling if people know the code reasonably well. Later you might want to find and agree on a more data-driven metric, such as the number of bugs reported. Either way, one of the most important things to do is to set the scale of the spider graph and agree on what each number means.</p>
<p>An example scale could look like this:</p>
<table>
<thead>
<tr>
<th style="text-align: left">Scale</th>
<th style="text-align: left">Overall Score</th>
<th style="text-align: left">Frequency of reported issues of medium or higher severity</th>
<th style="text-align: left">Number of known defects</th>
<th style="text-align: left">Impact of issues on users</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">1</td>
<td style="text-align: left">Bad, not satisfactory at all</td>
<td style="text-align: left">Daily/Weekly</td>
<td style="text-align: left">large</td>
<td style="text-align: left">Work cannot be performed</td>
</tr>
<tr>
<td style="text-align: left">2</td>
<td style="text-align: left">Not satisfactory</td>
<td style="text-align: left">Weekly</td>
<td style="text-align: left">few</td>
<td style="text-align: left">Work is severely impeded</td>
</tr>
<tr>
<td style="text-align: left">3</td>
<td style="text-align: left">OK</td>
<td style="text-align: left">Monthly</td>
<td style="text-align: left">almost none</td>
<td style="text-align: left">Work is inconvenient</td>
</tr>
<tr>
<td style="text-align: left">4</td>
<td style="text-align: left">Good</td>
<td style="text-align: left">Quarterly</td>
<td style="text-align: left">none</td>
<td style="text-align: left">temporary inconvenience</td>
</tr>
<tr>
<td style="text-align: left">5</td>
<td style="text-align: left">Excellent</td>
<td style="text-align: left">Half-Yearly</td>
<td style="text-align: left">none</td>
<td style="text-align: left">Users might not notice</td>
</tr>
</tbody>
</table>
<p>Once the scale is established, drawing a spider graph of how the characteristics are rated, along with a goal value or relevance for the product, helps to visualize the current state and the gap for each characteristic. It might be tempting to set all goal values to the maximum, so a good approach is to weigh the goal values by relevance instead. An embedded system without any user interface might have a low relevance and thus a lower goal value in the <em>usability</em> category. Or software running in a very isolated environment might put <em>compatibility</em> or <em>portability</em> at a very low number while putting <em>reliability</em> at the maximum. If it is still hard to find reasonable goal values, it can help to set the goal at the value we want to achieve with the next product increment instead of an overall goal value.</p>
<figure>
<img src="/images/software-quality/quality_assessment.png" alt="An example for a spider graph of a quality assessment showing the actual assessment (red) and the target value for the next product increment (blue)" onclick="toggleSize(this)" />
<figcaption>An example for a spider graph of a quality assessment showing the actual assessment (red) and the target value for the next product increment (blue)
</figcaption>
</figure>
<p>The gap analysis from an engineering perspective is a good starting point for discussing which quality-improvement items should go into a prioritized backlog. However, getting the business side of product development into the loop requires a different approach, as they will probably find it hard to assess some of the more internal quality aspects of the product. For this, building a preference matrix is a very suitable tool.</p>
<h2 id="defining-the-relevance-of-quality">Defining the Relevance of Quality</h2>
<p>The <em>preference matrix</em> compares each quality characteristic against all others to find out which one is currently valued the most. As opposed to the gap analysis, which defines targets from the current state, the preference matrix is focused on defining the desired <em>outcome</em> first and then working backward to the current state. This often suits the non-technical people involved in a product better. Additionally, the quality assessments done by the devs might yield several areas where teams would like to invest work. In this case the preference matrix might also help with prioritization.</p>
<figure>
<img src="/images/software-quality/preference_matrix.png" alt="An example of a preference matrix done in a spreadsheet software" onclick="toggleSize(this)" />
<figcaption>An example of a preference matrix done in a spreadsheet software
</figcaption>
</figure>
<p>To fill out the matrix, go through it from the top left to the bottom right, asking for each combination of quality categories: “If you had to choose, would you rather invest in ‘A’ or ‘B’ in the next product increment?” Note down the one that gets picked and continue. People have to pick one, as we want to eliminate as much ambiguity as possible - “both” is not an acceptable answer here.
Once the combinations are filled out, the number of mentions of each category is counted and divided by the total number of combinations to get a percentage value (there are 28 combinations if nothing was added to the ISO 25010 characteristics). The higher the percentage, the more important a quality characteristic is. In most cases, one or two categories will be the top contenders, so this is where teams should prioritize the work. Now is also a good time to cross-check the ranking against the gap analysis done by the engineers. If the priorities according to the preference matrix are very different from where the engineers see the gaps, this needs a conversation to be resolved, but most of the time there is some overlap. A very simple and pragmatic resolution might be to agree that in the next iteration the category with the biggest gap from the gap analysis is placed at the same priority as the characteristic that scored highest on the preference matrix.</p>
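<p>The counting and weighting step is simple enough to sketch in a few lines of Python (the simulated answers below are made up purely for illustration; in practice each winner comes from the stakeholder discussion):</p>

```python
from itertools import combinations

# The eight ISO 25010 characteristics (plus anything product-specific).
categories = ["Functional Suitability", "Performance Efficiency",
              "Compatibility", "Usability", "Reliability", "Security",
              "Maintainability", "Portability"]

# One winner per pairwise comparison. Here we fake the answers: assume
# "Security" wins every pair it appears in and other pairs go to the
# first category of the pair - purely illustrative data.
wins = {c: 0 for c in categories}
for a, b in combinations(categories, 2):
    winner = "Security" if "Security" in (a, b) else a
    wins[winner] += 1

total = sum(wins.values())  # 28 comparisons for 8 categories (8 choose 2)
weights = {c: wins[c] / total for c in categories}

print(total)                # 28
print(weights["Security"])  # 7 wins out of 28 -> 0.25
```

<p>The resulting percentages are exactly the weights that feed into the QFD mapping described below.</p>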
<p>The last and final step to getting work that benefits the product quality the most into the roadmap is mapping any planned backlog item to the preference matrix.</p>
<h2 id="mapping-quality-characteristics-to-features-and-epics">Mapping Quality Characteristics to Features and Epics</h2>
<p>To find out which feature or backlog item contributes to which metric, I like doing a stripped-down version of the <a href="https://www.sciencedirect.com/topics/engineering/quality-function-deployment">Quality Function Deployment (QFD) method</a>. This method assesses the influence of each feature on each quality metric. The features that influence the quality characteristics with the highest scores in the preference matrix and the gap analysis are prioritized first.</p>
<figure>
<img src="/images/software-quality/QFD.png" alt=" Example for mapping features to the quality characteristics. The weights of each characteristic are taken from the preference matrix above." onclick="toggleSize(this)" />
<figcaption> Example for mapping features to the quality characteristics. The weights of each characteristic are taken from the preference matrix above.
</figcaption>
</figure>
<p>On the left side of the table, all quality metrics are listed with their weight from the preference matrix. On the top, all high-level features on the backlog are listed. For each feature, it is determined whether it influences the quality metric. For features that have a very clear effect, put in a 9; for features that have some effect, put in a 3; and if the feature has a very minimal effect, put in a 1. If there is no effect, leave the cell blank. This 9-3-1 scale is chosen on purpose, as we want to prioritize the things that have a very distinct effect first.</p>
<p>The bottom line shows how much each feature contributes to our preferred quality characteristics. Each of the numbers is multiplied by the weight of its category, and then each column is summed up. The features with the highest fulfillment rate are the ones that should be prioritized highest.</p>
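<p>As a sketch, the weighted scoring can be expressed in a few lines of Python (the weights, feature names, and influence scores below are made up purely for illustration):</p>

```python
# Weighted QFD scoring: influence scores (9/3/1, blank -> 0) per feature,
# multiplied by the preference-matrix weight of each characteristic.
weights = {"Usability": 0.25, "Reliability": 0.18, "Security": 0.14}

features = {
    "Feature A": {"Usability": 9, "Reliability": 3},  # blank cells omitted
    "Feature B": {"Security": 9, "Reliability": 1},
    "Feature C": {"Usability": 1, "Security": 3},
}

fulfillment = {
    name: sum(weights[cat] * score for cat, score in scores.items())
    for name, scores in features.items()
}

# Highest fulfillment first - these go to the top of the backlog.
for name, value in sorted(fulfillment.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.2f}")
```

<p>With these made-up numbers, Feature A (9 &#215; 0.25 + 3 &#215; 0.18 = 2.79) ends up first, so it would be prioritized over the others.</p>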
<h2 id="quality-evolves">Quality Evolves</h2>
<p>As with any aspect of a product, the quality will change over time. Some aspects might degrade because of added complexity, but more often there is an inflation in expectations. As products evolve, users get used to a certain quality and expect higher standards, which means that what was once perceived as sufficient or high quality might suddenly be perceived as shabby. Security is a prime example of this: a few years ago it was rare to have two-factor authentication for an online service, while nowadays it is becoming more and more of a standard, up to the point where people will not use a service if there is no 2FA present. So the quality assessment and preference matrix have to be adjusted every other planning cycle.</p>
<p>Once the quality assessment and preference matrix are there, mapping the larger efforts in the backlog to the quality characteristics should become standard practice when planning. As with a lot of things, doing it the first time will take some time, but if done repeatedly, teams usually get very efficient at processing and using the information and will get a lot of value out of this additional aspect of planning.</p>

Writing “CMake Best Practices” - A writing journey2022-07-11T00:00:00+00:002022-07-11T00:00:00+00:00https://dominikberner.ch//writing-cmake-best-practices

<p>When Packt Publishing approached me in August 2021 with the idea of writing a book about CMake, I immediately was all for it. Ten months later I am mightily proud that “<a href="https://www.amazon.com/CMake-Best-Practices-maintaining-programming-ebook/dp/B09QKYQ6SZ">CMake Best Practices</a>” finally hit the shelves. As this would be my first book, I had no idea what I was getting into when I signed the contract with <a href="https://www.packtpub.com/">Packt</a>. The months of writing the book were among the most exhausting ones of my whole career and an emotional rollercoaster, but I also count them as one of my most valuable experiences.
And finally seeing the book getting the first customer reviews really rocks. So how is it to write a book for Packt as a first-time author? In this article, I try to give a few insights into my writing journey.</p>
<div style="background-color: #444444; margin-bottom: 1.5em;">
<h1><a href="https://www.amazon.com/dp/1803239727">
CMake Best Practices - The book</a></h1>
<div style="display: grid; width: 100%; grid-template-columns: 25% 1fr; grid-gap: 1%; padding-bottom: 0.5em;">
<div>
<a href="https://www.amazon.com/dp/1803239727">
<img src="/images/cmake-best-practices.jpg" alt="Cover of the CMake Best Practices book by Dominik Berner and Mustafa Kemal Gilor" style="max-width:100%;height:auto;display:block" />
</a>
</div>
<div>
CMake Best Practices: Discover proven techniques for creating and maintaining programming projects with
CMake. Learn how to use CMake to maximum efficiency with this compendium of best practices for a lot of
common tasks when building C++ software.
<br />
<br />
<div class="order-button">
<a href="https://www.amazon.com/dp/1803239727">Get it from Amazon</a>
</div>
</div>
</div>
</div>
<h2 id="how-packt-gets-books-done">How Packt gets books done</h2>
<p>Packt Publishing capitalizes on the print-on-demand and digital market, so its mode of operation is geared towards a high output of technical literature in a very short time - and this shows in the way they approach book projects. In my case, I was approached by an editor from Packt over LinkedIn and asked directly if I wanted to write a book about CMake. After a few days of thinking, I said yes and was soon paired up with my co-author-to-be. Being paired with a stranger for such a big project felt a bit weird, especially as there was very little effort by the people from Packt to introduce us to each other or to help us set up a productive work environment. We essentially got each other’s e-mail addresses and a brief bio, and then it was up to us to figure out who writes which chapter and how to coordinate the work on the shared documents. We were lucky that my <a href="https://www.linkedin.com/in/mustafakemalgilor/">co-author Mustafa</a> and I have a similar writing style and that we agreed quickly on the overall content that we wanted to put into the book.</p>
<p>After the initial contact with the publishing product manager, the first thing we had to do was to write a pitch for the book, including a rough outline of the contents covered and a description of the target audience we wanted to reach. The senior editor from Packt would then use this pitch to push for an internal Go/No-Go decision, and once we cleared this first hurdle, we had to write an outline of all the intended chapters. This included tentative titles for the sections and an estimated page count. It pays to put some extra effort into the outline, as it becomes the central document from which the contract is drafted and against which the progress of the book is measured. At this point, we authors also had to agree on who would write which chapters - Packt used this to determine how the royalties would be split in case the book was eventually published. We intended to split the content roughly 50/50, so splitting the royalties evenly seemed only fair. This would later cause a bit of grief on our side, as the first estimated page count per author was off quite a bit, but we could quickly resolve that. For Packt this page count is very important, as the schedule is based on these numbers, and we had quite some discussions with our editors about page count during writing. The main problem was that as first-time authors we had absolutely no clue how much space the information we wanted to fit into a chapter would take. In the end, we had chapters that overran the estimate by a factor of two and others that ended up being only a third of the estimated size.</p>
<p>The outline was then evaluated by Packt for another Go/No-Go decision and to draft an approximate timeline for the project. Luckily we passed that stage as well, and by this time we were offered a proper writing contract - up to this point, there had been no formal working agreement, so if the book had not been published we would have worked for free.
The contract signing was more or less uneventful and, according to my research with other publishers, is more or less what is to be expected as a first-time author. My co-author and I each got 8% royalties and roughly a $1000 writing fee, paid out upon timely completion of parts of the book. And then it was “go” for writing.</p>
<h2 id="the-writing-slog">The writing slog</h2>
<p>The actual writing of the book was done chapter by chapter, with each chapter being assigned to an author and having a predefined deadline for the first draft and final editing. Each chapter would go through the following stages:</p>
<ol>
<li>Write the first draft and create any <a href="https://github.com/PacktPublishing/CMake-Best-Practices">code examples on github</a>.</li>
<li>Hand the first draft over to the main editor.</li>
<li>After a few days, we would get the annotated document back and have a few days for corrections.</li>
<li>Send the corrected document to the technical reviewers.</li>
<li>After a few days to weeks get the annotated document back from the technical reviewers.</li>
<li>Rewrite any parts and code examples that drew comments from the technical reviewers.</li>
<li>Hand the corrected documents back to the editor.</li>
<li>Receive a final draft to sign it off for production.</li>
</ol>
<p>Although the process itself was straightforward, what bothered me a few times was that it was not designed to accommodate extra iterations. The process was tied closely to the project schedule, and not delivering an edited chapter on a pre-agreed date would prompt questions and increase pressure from the project team at Packt. Especially for the more complex chapters, such as the one on dependency management and the one on testing, more than one round with the technical reviewers was needed. Fitting in an extra round with the editors and adapting the schedule proved to be difficult. Any such extra round meant that we would need to pick up the pace on writing later on. There was quite obviously a trade-off for Packt between pushing books out as fast as possible and getting the quality right, and sometimes I was left with the feeling that publishing on time was the more important factor. The quality of writing drops quite a bit when writing under pressure, and this showed in the number of issues brought up in review, which in turn meant more work fixing them, so we would fall even farther behind schedule. In the end, we only managed to catch up with the schedule again by spending some holidays writing the book.</p>
<h2 id="delivering-chapters-on-time">Delivering chapters on time</h2>
<p>Keeping the tight and rigid schedule was often demanding - even grueling - especially alongside a full-time job. For me, it often meant getting up an hour earlier, writing a bit, sending the kids off to school, doing a full day of work, and putting another few hours into the book once the kids were asleep. Needless to say, this took quite some understanding from my family, and some of my hobbies suffered quite a bit. If I ever write a book again I will either push harder for a more relaxed schedule or try to take some writing days off work.</p>
<p>One big advantage of defining a rough timeline up front is that there is less danger of putting the book away when the writing stalls for whatever reason. The schedule was proposed early on by Packt based on the outline we drafted when pitching the book. Initially, they wanted two pages per day, and it took a bit of haggling to get them to shift from that number. In the end, we used 1.5 pages a day (including weekends) as the basis for calculating the schedule. Still, that is a very tough pace to keep. Just writing the chapters might have worked, but on top of that we needed to come up with easy-to-understand examples, and especially later in the book there was always at least one chapter under review, which needed additional time that was not reflected in the schedule. In the end, we barely managed to put out the book with only a little delay, but sometimes it felt that Packt took advantage of the fact that we first-time authors had no clue how tough the writing process would be when we set up the schedule.</p>
<h2 id="competent-reviewers-are-worth-their-weight-in-gold">Competent reviewers are worth their weight in gold</h2>
<p>Perhaps the biggest advantage for us was that Packt brought some very competent and fast editors to the table. We would hand in the first draft of a chapter, and a few days later we would get the document back with corrections, questions, and annotations. I found it very pleasant and productive to work with the editors and technical reviewers. I would venture that having easy access to a team of editors and reviewers is probably the biggest advantage of working with a large publishing company.</p>
<p>The technical reviews were among the most helpful things for increasing the quality of the book. Our reviewers were absolutely superb and very thorough when it came to checking the technical correctness of the book. They often came up with very good suggestions on how we could change examples or reshuffle sections to make more sense. On the other hand, working with them was sometimes unnecessarily difficult because of how the communication channels were set up. Of the four different technical reviewers that worked with us, only one was finally invited to our Slack channel so we could interact with him directly. For the other three, we had no means of communication other than comments in the chapters or mail relayed through our editors - something that, as I know now, was also not very well received by our technical reviewers.</p>
<p>Once most of the final drafts were done, there were some last finishing touches: writing author bios, an acknowledgments blurb, and various texts and summaries of the book to go on the product page at Amazon and other stores. This was pretty straightforward and painless, but since the publishing date was fixed by this time, it again meant keeping a very tight schedule. On the other hand, here having a big, professional publishing house backing the book was a very big advantage, as most of the tedious small things, like aligning the content with page breaks and creating an index and a glossary, were done by people at Packt.</p>
<h2 id="tooling-and-handling-documents">Tooling and handling documents</h2>
<p>One big downside from my view is that Packt wanted us to write the book in Word and that all data exchange and versioning of the documents was done over SharePoint. Both tools are only halfway suited for such a heavy project - especially if one is used to working with git and markdown, <a href="https://asciidoc.org/">asciidoc</a> or <a href="https://docutils.sourceforge.io/rst.html">reStructuredText</a>. The lack of a proper way to put code into Word was a major pain for me.
If a review took more than one turn or a document passed through multiple hands, tracking the changes in Word became almost impossible, since it was often unclear whether we as authors or the senior editor were responsible for accepting the suggestions. Working with SharePoint was a nightmare. SharePoint sucks at handling versions: documents would vanish without a trace because somebody moved them somewhere, and sometimes multiple copies of different versions of a document were lying around. Luckily we found most of the documents again, but on at least one occasion changes were completely lost. For me, the choice of toolset for such a technical book was one of the biggest disappointments and felt cumbersome and often distracting.</p>
<h1 id="would-i-do-it-again">Would I do it again?</h1>
<p>All in all, writing the book was a very cool experience and I am quite proud of <a href="https://www.amazon.com/CMake-Best-Practices-maintaining-programming-ebook/dp/B09QKYQ6SZ">what came out of it</a>. Would I do it again? Yes, but I would probably push for a more relaxed schedule, reduce my contribution, or see whether I could take some time off work and my other obligations to write the book. I think the writing process works reasonably well and gives a high guarantee that books actually get published rather than abandoned halfway through. There are a few things that I think could be improved to get more quality into the writing, starting with closer collaboration between the technical reviewers and the authors. Being able to discuss examples and concepts with the technical reviewers before they go into the book would not just reduce the amount of rewriting needed but also ensure that the examples are understandable with as little context as possible.</p>
<p>Being paired up for such a large project with someone you do not know also bears quite a risk. I was lucky in that my co-author Mustafa and I mostly agreed on how we would write things and structure the code, but I can imagine that this might be quite difficult if there is no such fit. Added to this, it is entirely up to the authors to establish a reliable communication channel, something that took us a while to figure out. It also pays to talk early and frequently about your expectations of each other, i.e. how much you review each other’s work and whether you do sparring sessions for the tougher chapters. This is one thing that we, unfortunately, neglected quite a bit as the pressure increased because we were late in delivering some of the chapters.</p>
<p>Nevertheless, pairing up with a large publishing house such as Packt has many advantages - especially for first-time authors. I am sure that without the push and coordination from the people at Packt and the many additional resources such as editors and (technical) reviewers, “<a href="https://www.amazon.com/CMake-Best-Practices-maintaining-programming-ebook/dp/B09QKYQ6SZ">CMake Best Practices</a>” would still be unwritten. I underestimated how many people are involved in creating such a book, and Packt’s ability to bring all the necessary skills together is very helpful. So, if I ever want to write another technical book, going with Packt again is definitely an option for me.</p>When Packt Publishing approached me in August 2021 with the idea of writing a book about CMake, I was immediately all for it. Ten months later I am mightily proud that “CMake Best Practices” finally hit the shelves. As this would be my first book, I had no idea what I was getting into when I signed the contract with Packt. The months of writing the book were among the most exhausting ones in my whole career and an emotional rollercoaster, but I also count them among my most valuable experiences. And finally seeing the book get its first customer reviews really rocks. So how is it to write a book for Packt as a first-time author? In this article, I try to give a few insights into my writing journey.The Mountain Mind - Mountain climbing and software development are the same2022-03-14T00:00:00+00:002022-03-14T00:00:00+00:00https://dominikberner.ch//the-mountain-mind<p>When planning a climbing trip, the goal of reaching the summit is often known, but how tough it really is to get up there can only be judged in detail once the climbing starts. Some mountains look almost impossible to climb and yet are relatively easy to summit, while others look comparatively tame but require quite some effort to get up to. 
Starting a software project is often very similar. While there might be a general idea of what problem or business case the software should solve, it is often unclear what is needed to get there. Some things might turn out much harder than anticipated, while others will be solved surprisingly quickly.</p>
<p>Developing a software product often has many unknown factors contributing to whether the development will succeed or fail, and not all of them are of a technical nature. The team might not know each other well enough, or they might not be familiar with some of the technology involved. When climbing a mountain for the first time, it might not be known where the most difficult parts of the wall are or how the weather will develop, and maybe the members of the rope party do not know each other well enough to tell how each person reacts under stress. Nevertheless, if one wants to summit a mountain, one has to start climbing, and if one wants to ship software, one has to start coding at some point.</p>
<h2 id="committing-to-success">Committing to success</h2>
<p>No matter what the preconditions are, once the climbing or the software development starts, a certain commitment from those involved is needed, or else the team plods along listlessly and gets nowhere. People need to be engaged and able to focus on the project ahead without distraction, especially when facing a challenging situation. Ideally, for best performance, people reach a <a href="https://en.wikipedia.org/wiki/Flow_(psychology)">flow-state</a> when climbing or coding. Being in the flow is often defined as being fully immersed, feeling energized, and having a positive focus on the activity being performed. This implies being <em>committed</em> to the undertaking, and committed people are willing to invest time, nerves, and energy to reach the common goal.</p>
<p>For software development, this often means committing to a product and a team even when it is difficult to go forward. Sometimes a bit of grit, resolve, and stamina is needed to overcome a nasty situation - whether that is a turn for nasty weather or a particularly elusive regression bug. Generally, the further one progresses, either in climbing or in software development, the bigger the negative consequences of abandoning the project. In software development this often comes down to having wasted large amounts of money; in climbing it could mean having to be rescued out of a wall, or worse.</p>
<blockquote>
<p>Every ambitious undertaking will get tough at times, and there will be moments when frustration runs high. But if people are committed, they accept this willingly.</p>
</blockquote>
<p>Being committed and accepting that there will be tough moments is an important success factor in challenging projects. On the other hand, being committed means taking responsibility for one’s actions and decisions. If the lead climber chooses a more interesting route than planned, everybody else has to follow that way, even if the difficulty increases. When software engineers make technical decisions regarding a project, this affects the people tasked with maintaining or further developing the product in the future.</p>
<h2 id="where-things-go-wrong">Where things go wrong</h2>
<p>It’s not just the developers who need to be committed. The customer or sponsor of a project also has to commit if the project is to succeed. If a customer fully backs a project and is willing to invest time and expertise, the risk of failing to ship the software decreases dramatically. If a customer has a hard time expressing their wishes regarding a product, or if they are unwilling to invest time on their side, this might hint at a lack of commitment.</p>
<p>Another large factor influencing the level of commitment is the risk involved in doing something new. Some risks can be reduced and controlled to some extent, but others are inherent in the undertaking. In the mountains the weather might change suddenly, a rock can break loose, or material fatigue might cause an anchor to fail. In software, a global pandemic might disrupt the economy, or a critical security bug might turn up in a widely used library. These risks, often very unlikely to happen, are not reducible.</p>
<p>However, many risks can be mitigated very well. Over the years I identified three major factors that contribute to increased risk and accidents when climbing or developing software: <em>Ignorance, casualness and distraction</em>.</p>
<figure>
<img src="/images/mountain-mind/accidents-reasons.png" alt="The three main reasons for increased risks and accidents are ignorance, casualness and distraction. The inherent risk is often comparatively small." onclick="toggleSize(this)" />
<figcaption>The three main reasons for increased risks and accidents are ignorance, casualness and distraction. The inherent risk is often comparatively small.
</figcaption>
</figure>
<p><em>Ignorance</em> means that we simply do not know enough or that our skills are not developed far enough. Ignorance is reduced by learning and gathering experience, most easily through regular training and education. In this context, <em>casualness</em> means that we underestimate the seriousness of the situation or overestimate our abilities to solve a problem. Casualness is reduced by increasing system awareness and getting a grasp of the surrounding context - something that comes with experience in the field and with being exposed to similar situations frequently. Retrospectives and reflecting together are also good methods to turn implicit experience into active knowledge. <em>Distraction</em> means that we cannot focus on the task at hand, for whatever reason. This is where commitment and focus come into play.</p>
<h2 id="the-circle-of-risk-and-commitment">The circle of risk and commitment</h2>
<p>The combination of focus and commitment reduces these risks. If we work on a product that is only a triviality or a side project for the customer, the risk of failing or not reaching the goal is often increased. Typical symptoms are the inability to define clear goals or roadmaps, or the unavailability of people responsible or able to make decisions.</p>
<p>When climbing, having a distracted or uncommitted member in the party might put the whole rope team in danger. Unfortunately, being scared is highly distracting for most people. This creates a vicious cycle in which people are scared and do not trust each other, which makes them lose focus, which in turn lessens trust and heightens the feeling of immediate danger. If unchecked, this cycle continues until either an accident happens or the project is abandoned. Added to this, people who are not committed to a project tend to estimate risks higher.</p>
<figure>
<img src="/images/mountain-mind/comittment-risk-cycle_influences.png" alt="How risk and commitment influence each other and the limiting factors to break the cycle" onclick="toggleSize(this)" />
<figcaption>How risk and commitment influence each other and the limiting factors to break the cycle
</figcaption>
</figure>
<p>There are various influencing factors that we can control to escape that vicious cycle. Reflecting with the team about the risks in general and the risk perception of each individual member is a great start - no matter if climbing or hacking. Learning to keep the focus despite being scared is a great asset, and exposing yourself to risks in a controlled environment can be a very effective way of learning this. When climbing, this could mean going to the climbing gym and falling into the rope on purpose. When writing software, hackathons, coding dojos, and small low-stakes projects serve a similar purpose.</p>
<p>Another huge lever for breaking the cycle is to give people the time and context to focus on the project at hand. This means not overloading them with many different tasks and projects, and creating stable teams where people feel safe and where it is easy to get help. Teams that know how each individual perceives risk and how each person reacts to perceived danger and pressure often perform extremely well. These teams often find it easier to commit to a project early on, despite many unknowns, because they know how to overcome tough situations and are often adept at learning and reflecting. And this in turn helps to reduce ignorance and casualness, the other two factors for accidents - no matter if climbing a mountain or delivering awesome software products to users.</p>
<p>–</p>
<p>(This article was originally <a href="https://bbv.ch/programmieren-klettern/">published in German at bbv.ch</a>.)</p>
<p><img src="/images/logo_bbv_thumb.png" alt="bbv software services logo" /></p>Reproducible build environments for C++ using docker and vscode2021-09-08T00:00:00+00:002021-09-08T00:00:00+00:00https://dominikberner.ch//using-devcontainers-with-cpp<p><strong>“But it compiles on MY machine!”</strong> is one of the phrases that every C++ coder hates. Even with CMake, building a C++ app is often hard because of missing system dependencies, differently installed libraries, or simply different flavors of make or ninja. But thanks to the <a href="https://code.visualstudio.com/docs/remote/containers-tutorial">remote container extension</a> of visual studio code this has gotten much easier.</p>
<p>With a single extension installed, vscode supports setting up a complete dev environment and developing and debugging inside a docker container without any fiddling or hacks.</p>
<figure>
<img src="/images/devcontainer/architecture-containers.png" alt="Overview about how devcontainers work" onclick="toggleSize(this)" />
<figcaption>Overview about how devcontainers work
<div class="source-annotation">Source: https://code.visualstudio.com/docs/remote/containers</div>
</figcaption>
</figure>
<h2 id="a-10000-mile-overview">A 10000-mile overview</h2>
<p>Having the build environment inside a container is a major game-changer. First, it helps to ensure that all developers have exactly the same dependencies installed, but it goes further: all major CI systems support container-based building, so the same container that runs on a developer’s machine can be used to build your code on the server. Using dev containers helps even more, because the definition of the container is stored and checked in along with the code in a <code class="language-plaintext highlighter-rouge">Dockerfile</code> and the <code class="language-plaintext highlighter-rouge">devcontainer.json</code>. Because these files are under version control, each commit also carries the information about its build-system setup, which is great for consistent builds. So how does this magic work?</p>
<h2 id="devcontainers-in-a-nutshell">Devcontainers in a nutshell</h2>
<p>What you need besides vscode is a docker<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> container containing all your requirements and a <code class="language-plaintext highlighter-rouge">devcontainer.json</code> file describing how to use your container. In the following example, I will set up a customized container for creating and debugging a C++ application. For extra convenience, I will include some developer tools and vscode extensions in the container as well. Once set up, vscode will connect to the container, install all specified extensions, and run a server to accept its commands. After that, all operations will be done in the running container. If you need help setting up <a href="https://docs.docker.com/get-docker/">docker</a> or <a href="https://code.visualstudio.com/">visual studio code</a>, I would kindly refer you to the official documentation.</p>
<p>The remote-container extension needed for running devcontainers can be installed by pressing <code class="language-plaintext highlighter-rouge">CTRL+P</code> and then typing <code class="language-plaintext highlighter-rouge">ext install remote-containers</code> in the command bar.</p>
<figure>
<img src="/images/devcontainer/installing.png" alt="Installing the `remote container` extension in visual studio code" onclick="toggleSize(this)" />
<figcaption>Installing the `remote container` extension in visual studio code
</figcaption>
</figure>
<h2 id="project-structure">Project structure</h2>
<p>The devcontainer configuration is stored either in a <code class="language-plaintext highlighter-rouge">devcontainer.json</code> file inside a folder named <code class="language-plaintext highlighter-rouge">.devcontainer</code>, or in a file named <code class="language-plaintext highlighter-rouge">.devcontainer.json</code> in the root of your project. I prefer the folder approach, because it is a bit easier to work with <code class="language-plaintext highlighter-rouge">Dockerfile</code>s and because it lets me keep all the needed files together.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>├── CMakeLists.txt
├── .devcontainer
│   ├── devcontainer.json
│   └── Dockerfile <span class="o">(</span>Optional<span class="o">)</span>
└── src
    └── main.cpp
</code></pre></div></div>
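<p>A minimal <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> matching this layout could look as follows. This is just a sketch - the project name and the chosen C++ standard are assumptions, not part of the setup described here:</p>
<div class="language-cmake highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cmake_minimum_required(VERSION 3.15)
project(devcontainer_demo LANGUAGES CXX)

# A single executable built from the source file in src/
add_executable(devcontainer_demo src/main.cpp)
target_compile_features(devcontainer_demo PRIVATE cxx_std_17)
</code></pre></div></div>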
<h2 id="defining-the-container">Defining the container</h2>
<p>Let’s start with a simple <code class="language-plaintext highlighter-rouge">Dockerfile</code> that uses a predefined container and applies some customization. In addition to what is provided, I install a few more packages, notably <code class="language-plaintext highlighter-rouge">gdb</code> for debugging the application easily inside the container, and some tools like <code class="language-plaintext highlighter-rouge">curl</code>, <code class="language-plaintext highlighter-rouge">vim</code>, and <code class="language-plaintext highlighter-rouge">bash-completion</code> to make working in the console easier. Since the container from conan comes with a predefined user <code class="language-plaintext highlighter-rouge">conan</code>, I briefly switch to root for installing and afterwards switch back to conan. Adding this customization is not strictly necessary, but it makes working inside the container easier.</p>
<div class="language-Dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> conanio/clang10:1.39.0</span>
<span class="c"># switch to root</span>
<span class="k">USER</span><span class="s"> root</span>
<span class="c"># install a few tools for more convenient developing</span>
<span class="k">RUN </span>apt-get update<span class="p">;</span> <span class="se">\
</span> apt-get <span class="nt">-y</span> <span class="nb">install</span> <span class="nt">--fix-missing</span> <span class="se">\
</span> gdb curl bash-completion vim
<span class="k">USER</span><span class="s"> conan</span>
<span class="c">#install git shell extension</span>
<span class="k">RUN </span>curl <span class="nt">-L</span> https://raw.github.com/git/git/master/contrib/completion/git-prompt.sh <span class="o">></span> ~/.bash_git <span class="o">&&</span> <span class="nb">echo</span> <span class="s2">"source ~/.bash_git"</span> <span class="o">>></span> ~/.bashrc
<span class="k">RUN </span><span class="nb">sed</span> <span class="nt">-Ei</span> <span class="s1">'s/(PS1=.*)(\\\[\\033\[00m\\\]\\\$.*)/\1\\[\\033[01;33m\\]$(__git_ps1)\2/p'</span> ~/.bashrc
</code></pre></div></div>
<p>For even more convenience I then add the <code class="language-plaintext highlighter-rouge">git-ps1</code> to get the current branch name as part of the console prompt. I use this frequently, but there are of course many more customizations that can be done.</p>
<figure>
<img src="/images/devcontainer/PS1_console.png" alt=" The console blurb inside the container after installing the git PS1" onclick="toggleSize(this)" />
<figcaption> The console blurb inside the container after installing the git PS1
</figcaption>
</figure>
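<p>If you want to verify the image before handing it over to vscode, you can build and sanity-check it manually. The commands below are a sketch - the image tag <code class="language-plaintext highlighter-rouge">my-cpp-dev</code> is a placeholder, and they assume they are run from the project root:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Build the image from the Dockerfile in .devcontainer (tag is a placeholder)
docker build -t my-cpp-dev .devcontainer

# Check that the additionally installed tools are actually there
docker run --rm my-cpp-dev gdb --version
docker run --rm my-cpp-dev vim --version
</code></pre></div></div>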
<h2 id="telling-vscode-to-use-the-container">Telling vscode to use the container</h2>
<p>Now that the docker image to be used is defined, let’s tell vscode how to use it. For this, we create a <code class="language-plaintext highlighter-rouge">devcontainer.json</code> file and place it in the <code class="language-plaintext highlighter-rouge">.devcontainer</code> folder.</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"dockerfile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Dockerfile"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"extensions"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"ms-vscode.cpptools"</span><span class="p">,</span><span class="w">
</span><span class="s2">"ms-vscode.cmake-tools"</span><span class="p">,</span><span class="w">
</span><span class="s2">"vadimcn.vscode-lldb"</span><span class="p">,</span><span class="w">
</span><span class="s2">"cheshirekow.cmake-format"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>First, we tell vscode that it needs to build the container itself, passing it the relative path to the <code class="language-plaintext highlighter-rouge">Dockerfile</code>. Then comes the extensions block, which tells vscode which extensions to install <em>inside</em> the container once it is built. For a C++ project, I consider the following the minimum set of extensions to install.</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">ms-vscode.cpptools</code>: The C++ language support for vscode</li>
<li><code class="language-plaintext highlighter-rouge">ms-vscode.cmake-tools</code>: cmake support for vscode</li>
<li><code class="language-plaintext highlighter-rouge">vadimcn.vscode-lldb</code>: lldb debugger support for easy debugging by pressing F5</li>
<li><code class="language-plaintext highlighter-rouge">cheshirekow.cmake-format</code>: cmake-format is not strictly necessary, but nobody wants to read ugly code</li>
</ul>
<p>And that is all that is needed to be ready to get going.</p>
<h3 id="using-a-container-from-a-container-registry">Using a container from a container registry</h3>
<p>Defining the docker image locally via a <code class="language-plaintext highlighter-rouge">Dockerfile</code> allows for customization, but the container has to be rebuilt locally each time it changes. Depending on the complexity of the image this might be tedious, so an alternative is to pull the image from an image registry such as <a href="https://hub.docker.com/">dockerhub</a>. In that case, instead of adding build information to the <code class="language-plaintext highlighter-rouge">devcontainer.json</code>, we can directly specify the image to use.
In the example below I’m pulling an existing image that includes Qt and gcc9 ready to be used.</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nl">"image"</span><span class="p">:</span><span class="w"> </span><span class="s2">"bbvch/conan_qt-5.15.2_builder_gcc9"</span><span class="p">,</span><span class="w">
</span><span class="nl">"extensions"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="s2">"ms-vscode.cpptools"</span><span class="p">,</span><span class="w">
</span><span class="s2">"ms-vscode.cmake-tools"</span><span class="p">,</span><span class="w">
</span><span class="s2">"vadimcn.vscode-lldb"</span><span class="p">,</span><span class="w">
</span><span class="s2">"cheshirekow.cmake-format"</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<h2 id="starting-the-container">Starting the container</h2>
<p>Once the devcontainer is set up and the image is built (or downloaded), we’re ready to go. You can either actively switch to the container by pressing <code class="language-plaintext highlighter-rouge">CTRL+P</code> and selecting the “Rebuild and Reopen in Container” command.</p>
<figure>
<img src="/images/devcontainer/cmd-reopen.png" alt="Reopening the container using the vscode command line" onclick="toggleSize(this)" />
<figcaption>Reopening the container using the vscode command line
</figcaption>
</figure>
<p>Or just restart vscode, in which case you will be asked whether you want to switch to the devcontainer.</p>
<figure>
<img src="/images/devcontainer/reopen-in-container.png" alt="vscode will prompt you to open a workspace in a devcontainer" onclick="toggleSize(this)" />
<figcaption>vscode will prompt you to open a workspace in a devcontainer
</figcaption>
</figure>
<p>In the lower left corner, a green indicator appears whenever you’re working in a devcontainer. Clicking on it will bring up the command palette for dev containers.</p>
<figure>
<img src="/images/devcontainer/container-indicator.png" alt="the devcontainer indicator" onclick="toggleSize(this)" />
<figcaption>the devcontainer indicator
</figcaption>
</figure>
<p>Your devcontainer is up and running, and you can start to work without worrying about diverging build environments. If your devcontainer works as expected, I recommend using the same container in your CI as well, for even more consistency when building C++ apps. And that’s it for setting it up - Happy coding!</p>
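<p>As a sketch of how reusing the container in CI could look, a GitHub Actions job might run the build inside the same image - the image name and registry below are assumptions and would need to match wherever you publish your devcontainer image:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical CI job building inside the same devcontainer image
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/your-org/your-devcontainer:latest
    steps:
      - uses: actions/checkout@v4
      - run: cmake -S . -B build
      - run: cmake --build build
</code></pre></div></div>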
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>Other container runtimes such as podman will also work, but I am not familiar with them. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>