Author: jrhaws

  • Boost Your Productivity: 3 Proven Time Management Strategies for Optimal Results

    As a software developer, time management is crucial to your success. With so many tasks and deadlines, it can be challenging to stay organized and on track. That’s why having a solid time management strategy is key. Not only does it help increase productivity, but it also reduces stress, improves focus, and supports a healthy work-life balance. In this post, we’ll explore three effective time management strategies that can help you optimize your work and achieve your goals. Get ready to take your productivity to the next level!

    1. Increases productivity: By prioritizing tasks and staying organized, you can work more efficiently and get more done in less time.
    2. Reduces stress: Having a clear plan for how to manage your time can help alleviate the pressure and anxiety that comes with having too much to do and too little time to do it.
    3. Improves focus: Good time management strategies help you eliminate distractions and stay focused on what’s important, allowing you to make the most of your workday.
    4. Achieves goals: With effective time management, you can ensure that you have enough time to complete all the tasks you need to and meet your deadlines, thereby helping you to achieve your goals.
    5. Promotes work-life balance: By managing your time more effectively, you can carve out time for the things that are important to you outside of work, leading to a healthier and more fulfilling life.

    Here are three solid frameworks or strategies that I have found useful over my career.

    Weekly and Daily Planning

    I wrote about the importance of weekly planning on my Daily Dad Life blog as well, but wanted to reiterate a few points here as they relate to time management.

    1. The things that get scheduled are the things that get done.
    2. Vague plans produce vague goals.
    3. World-class weeks soon morph into the sensational quarters that lead into spectacular years that generate sublime decades.
    Robin Sharma, Chapter 61 of The Everyday Hero Manifesto

    My weekly planning system to get myself organized follows these five main steps:

    1. Connection: I always review my big 5 and my vision of my future. This helps me to connect with my vision over and over again and ensure that I am focused on what truly matters to me – a visit to my “personal lighthouse” as Robin Sharma puts it.
    2. Reflection: I then spend a few minutes writing in my journal, reflecting on my past week. I celebrate my victories, express gratitude for those wins, and make note of areas where I can grow and improve.
    3. Prioritization: Next, I write down a laundry list of specific actions, behaviors, and habits that I have found to provide immense value and positive results. I try to link these to specific goals that I am currently working toward, but even when they are not tied to a specific goal, these are actions I will take this week to ensure I live it to the fullest.
    4. Templatization: Now I get down to the details. I will map out a template for each day, blocking out time for all that I have made sacred first — my morning victory hour, family, nature, etc. Then I add in my commitments — work, callings, other events, etc. Finally, I block out time for those key actions — my laundry list from step 3. In this way, I can be sure that I have time in my schedule for my goals, because that which gets scheduled gets done, right?
    5. Execution: Last of all is execution. As we learned from the Law of Creation, doing really is the easy part. Creating and holding our vision, feeling the emotions associated with that vision, and speaking only words in support of that is the hard part. Doing will come naturally as we continually feed our subconscious minds the image of our creation. 

    This has been the number one habit that has helped me manage my time. By making these 5 steps a sacred part of my week, I’m able to keep all my plates spinning and execute each of my projects at the top of my game.

    Mini-Sprints

    As a software developer, the concept of sprints is likely familiar to you. Sprint planning involves defining a set of tasks to complete over a specific period of time, and focusing exclusively on those tasks until they’re completed.

    But what if you could apply the same methodology to your daily work routine? This is where the idea of mini-sprints comes in. By dividing your week into day-long mini-sprints, you can focus your efforts and achieve greater productivity.

    To get started with mini-sprints, first, plan out your tasks for each day as you would in a team sprint. I do this during the templatization phase of my weekly planning. Use your task list as a guide and allocate the time in each mini-sprint accordingly. During each mini-sprint, give your full attention to the tasks at hand, avoiding the temptation to work on other items on your list.

    It’s important to remember that mini-sprints are not meant to be inflexible. Make sure to leave time for unexpected distractions and support requests from your team. And, when planning your tasks, make use of tools such as Kanban boards or issue trackers to keep you on track.

    The secret to making mini-sprints work for you is simple: focus. By dedicating your efforts to the tasks at hand, you’ll be able to complete them more efficiently and effectively. Give mini-sprints a try and see how they can help you boost your productivity.

    Take Time to Recover

    This may not be a specific strategy, but for me, the key to maintaining peak performance on a daily basis is adhering to one fundamental principle: recovery time is just as critical as the time you spend completing your tasks, if not more so.

    For creative professionals, the idea that working longer hours increases productivity doesn’t always hold true. In fact, some of the world’s greatest minds found that the key to success was balancing focused work with intentional rest and recovery.

    According to productivity expert Robin Sharma, working five hours a day with intense focus can yield maximum results. Beyond that, the returns start to diminish and can even lead to burnout.

    For me, finding time for rest and renewal is essential. I find solace in nature and regularly engage in the practice of “shinrin-yoku” or forest-bathing. Being surrounded by nature, experiencing it with all my senses, and disconnecting from technology has a rejuvenating effect on my mind and body.

    Photo by Johannes Plenio on Pexels.com

    So, next time you’re feeling drained and overworked, consider the power of rest and recovery. By taking the time to recharge, you’ll be able to work more efficiently and effectively in the long run.


    Time management is a critical aspect of productivity, especially for software developers. By implementing the strategies outlined in this blog post, you can prioritize your tasks, stay focused, and achieve your goals more efficiently. Whether you make use of one or all of them, these strategies can help you take control of your time and maximize your results. Remember, the key to success is consistency, so choose the strategy (or strategies) that works best for you and stick with it. With a little effort and discipline, you’ll be amazed at how much you can accomplish in a day!

  • Maximizing Productivity: 6 Essential Time Management Tools and Techniques

    In today’s fast-paced and competitive work environment, managing time effectively is more important than ever. As a software developer, you likely have a long list of tasks to complete and deadlines to meet, making it essential to make the most of your time. The good news is that there are a variety of tools and techniques available to help you maximize your productivity and reach your goals. In this post, we’ll take a closer look at six essential time management tools and techniques that every software developer should know about. Whether you’re just starting out or you’re looking to improve your existing skills, these tips and tricks are sure to help you work smarter, not harder. So sit back, relax, and let’s get started!

    Parkinson’s Law

    This is not necessarily a tool or technique, but a principle that, when understood, can help you increase your productivity.

    Parkinson’s Law states that “work expands so as to fill the time available for its completion”. Knowing this, you can set up specific procedures in your planning to help mitigate this.

    • Set earlier deadlines for your task, so you complete it sooner.
    • Set up artificial time limits to complete your task.
    • If using a Pomodoro (more on that later), set a limited number of cycles to complete the task.

    Eisenhower Matrix

    The Eisenhower Matrix is a tool for prioritizing your list of tasks into various quadrants of a 4-cell matrix. To do this, start by rating each task as important or unimportant. Then, rate each task again as urgent or non-urgent.

    Eisenhower Matrix

    When I rate tasks, I use the following metrics:

    • Important vs. Not Important: Does the task lead toward fulfilling my long term goals or core values?
    • Urgent vs. Not Urgent: Does the task need to be done within the next day or two?

    Once you have your ratings, drop your tasks into the matrix, and that will dictate what you need to focus on. Spend time in the top two quadrants (the important tasks) first. If you are able, delegate the urgent but not important tasks. Anything that is neither important nor urgent, simply drop from your list; those tasks don’t contribute to your goals and aren’t time-sensitive.

    The 80/20 Rule

    The 80/20 rule simply states that 20 percent of your actions yield 80 percent of your results. This is also called the Pareto Principle, after the Italian economist Vilfredo Pareto.

    Similar to Parkinson’s Law, this is less a technique and more a rule of thumb. You can use this to help you prioritize your tasks. Look at your task list and determine which of them will have the most impact, ranking each one until you have a prioritized list from top to bottom. This rule states (roughly) that by accomplishing the first 20% of your tasks, you’ll achieve 80% of the results you are after.

    If you don’t have a clean task list, or are trying to break a task down into smaller pieces, try following these steps:

    1. Identify the major problems you are trying to solve, or identify the major building blocks of the feature you are developing. Within each block, try to identify high level concepts of what needs to go into it.
    2. Assign a category to the problems or building blocks. For example, if writing a library you could have the interface, internal logic, unit testing, and build system as various categories.
    3. Now, assign a score to each high level concept within each problem or building block category. For the example given previously, you could assign scores to stubbing in the build files and filling in details for each module for the build system category.
    4. Once you have scored everything, simply total the scores for each category and then rank the categories in order.
    5. Execute! By focusing on the highest-scoring categories first, the 80/20 rule says that you will arrive at 80% of your functionality by completing the top 20% of your tasks.

    Clearly, you cannot use the 80/20 rule to complete a project after one round. However, I have found it to be very useful when tackling problems that I have been resisting because I don’t have a clear vision of the end solution. Application of this rule helps me to break down the problem into digestible chunks that I can work with.

    Also, successful application of this rule will give you a nice shot of dopamine from seeing your success, which can provide the necessary motivation to move from 80% complete to 100% complete sooner!

    Time Blocking

    Time blocking is a straightforward technique that involves allocating specific chunks of time to various tasks on your to-do list. These time slots can be customized to your preference and could range from 15 minutes to an hour or more. This method is especially useful for larger tasks that take considerable time to complete, such as creating architectural or interface designs, writing requirement specifications, etc.

    The secret to successful time blocking is to stick to the designated time frame for each task. If you have assigned yourself 1 hour for a task, it’s crucial to stop working on it once that hour is up, save your progress, and move on to your next task. Although some tasks may require multiple time blocks to complete, time blocking guarantees that you are making steady progress towards completing them all.

    Photo by Pixabay on Pexels.com

    The Pomodoro Technique is a closely related method to time blocking, and I often incorporate it into my time blocking practice. A Pomodoro is a focused work session lasting 25 minutes, during which you work without distractions. Once the timer goes off, you take a short break of 5–10 minutes, and then return to another 25-minute work session. After four full Pomodoros, take a longer break of 20–30 minutes.

    It’s crucial to make the most of the breaks and not skip them, as these breaks provide the necessary time for recovery. By taking a break and doing something refreshing, like grabbing a drink from the water cooler, chatting with a colleague, or having lunch with a loved one, you’ll come back to work with a renewed sense of creativity and cognitive focus.

    Eat the Frog

    “Eating the frog” is a phrase often used in time management to refer to tackling the most challenging and important task of the day first thing in the morning. The idea is that by completing the most difficult task, the rest of the day will feel like a breeze in comparison. It’s a straightforward and effective strategy for increasing productivity and motivation throughout the day.

    The origin of the phrase “eat the frog” is attributed to Mark Twain, who famously said, “If it’s your job to eat a frog, it’s best to do it first thing in the morning. And if it’s your job to eat two frogs, it’s best to eat the biggest one first.” This quote encapsulates the idea of prioritizing and tackling the most challenging tasks early in the day, when you have the most energy and focus.

    If it’s your job to eat a frog, it’s best to do it first thing in the morning. And if it’s your job to eat two frogs, it’s best to eat the biggest one first.

    Mark Twain

    By eating the frog first thing in the morning, you’ll start your day feeling productive and motivated. This sense of accomplishment will carry over into the rest of your day, giving you the energy to tackle the rest of your to-do list with ease. Additionally, when you eat the frog first thing in the morning, you’ll have the peace of mind that comes with knowing that you’ve accomplished the most difficult task of the day.

    In contrast to this is the advice from Admiral McRaven, who talks about making your bed in his famous commencement speech at the University of Texas in 2014. According to Admiral McRaven, making your bed, even if it’s just a small task, can set a positive precedent for the rest of your day and is thus incredibly important. Despite not being the most difficult task of the day, a perfectly made bed can help lay the foundation for a productive and successful day ahead.

    Make your bed.

    Change the world.

    Tight Bubble of Total Focus

    I save the “Tight Bubble of Total Focus”, a term from Robin Sharma, for last because I find it is one of the most powerful. Many of the techniques described previously rely on eliminating distractions, and this technique is a way to do that.

    It is a concept that refers to the ability to fully immerse oneself in a task and eliminate all distractions. When you’re in a tight bubble of total focus, you’re able to give your full attention to the task at hand, allowing you to achieve maximum productivity and efficiency. This technique is especially useful when working on complex or challenging projects that require a great deal of concentration and attention to detail.

    Entering into the bubble requires discipline and the ability to tune out distractions. This might involve turning off your phone, closing your email, or working in a quiet place where you won’t be disturbed. It’s important to eliminate as many distractions as possible, so you can give your full attention to the task at hand.

    The benefits of a tight bubble of total focus are numerous. For starters, you’ll be able to complete tasks faster and with greater accuracy. You’ll also be less likely to make mistakes or miss important details, leading to a higher-quality output. In addition, by giving your full attention to a task, you’ll be able to experience a deeper level of engagement and satisfaction in your work.


    Effective time management is crucial for software developers who want to be productive and achieve their goals. By incorporating the tools and techniques discussed in this post, such as time blocking, pomodoro technique, eating the frog, tight bubble of total focus, and others, you can optimize your time, increase your productivity, and achieve a better work-life balance. Remember, it takes time to implement new strategies and habits, so be patient with yourself and keep trying until you find what works best for you. With these tools and techniques in your arsenal, you’ll be able to tackle any task with confidence and ease, and reach new heights in your career as a software developer.

  • Mastering CPM: 6 Tips for Effectively Managing Dependencies with CMake’s Package Manager


    CMake is a cross-platform build system that can be used to build, test, and package software projects. One of the most powerful features of CMake is the ability to manage dependencies, and using the CMake Package Manager (CPM) makes that a breeze. CPM is a package manager for CMake that allows you to easily download and use libraries and other dependencies in your project without having to manage them manually.

    Using CPM effectively can greatly simplify the process of managing dependencies in your project. Here are a few tips to help you get the most out of CPM:

    1. Start by adding CPM to your project. This can be done by adding the following line to the top of your CMakeLists.txt file (note that you will need to have a cmake/CPM.cmake in that path relative to your CMakeLists.txt file). You can find up-to-date versions of CPM.cmake here (documentation is here).
    include(cmake/CPM.cmake)
    2. Next, specify the dependencies you need for your project by adding CPMAddPackage commands. For example, to add the msgpack-c library, you would add the following stanza:
    CPMAddPackage(
      NAME msgpack
      GIT_TAG cpp-4.1.1
      GITHUB_REPOSITORY "msgpack/msgpack-c"
      OPTIONS "MSGPACK_BUILD_DOCS OFF" "MSGPACK_CXX20 ON" "MSGPACK_USE_BOOST OFF"
    )
    3. Once you have added all the dependencies you require, you can use them in your project by including the appropriate headers and linking against the appropriate libraries. Note that CPM will pull the dependency from the repository specified and then run CMake (if the project uses CMake to build). Because CMake is run for the project, the CMake targets are available for you to use. For example, to use the msgpack-c library in your project, you would add the following line to your CMakeLists.txt file:
    target_link_libraries(your_target msgpackc-cxx)
    4. CPM also allows you to specify options for modules, such as disabling tests or building in a specific configuration. To specify options for a module, you can use the OPTIONS argument, as shown above.
    5. When the dependency does not have a CMakeLists.txt file, CPM will still check out the repository, but will not configure it. In that case, you are required to write your own CMake to perform the build as required. For example, the embedded web server called Mongoose from Cesanta does not provide a CMakeLists.txt to build it, but we can still pull it in like this (note the use of the CPM-generated variable mongoose_SOURCE_DIR):
    CPMAddPackage(
      NAME mongoose
      GIT_TAG 7.8
      GITHUB_REPOSITORY "cesanta/mongoose"
    )
    if (mongoose_ADDED)
      add_library(mongoose SHARED ${mongoose_SOURCE_DIR}/mongoose.c)
      target_include_directories(mongoose SYSTEM PUBLIC ${mongoose_SOURCE_DIR})
      target_compile_definitions(mongoose PUBLIC MG_ENABLE_OPENSSL)
      target_link_libraries(mongoose PUBLIC ssl crypto)
      install(TARGETS mongoose)
    endif(mongoose_ADDED)
    6. Add dependencies with CPM in the same CMakeLists.txt as the target that uses the dependency. If multiple targets use the same dependency, CPM will not pull multiple copies; rather, it will use the copy already downloaded. By doing this, you ensure that if you ever refactor your CMake, or pull a module's CMakeLists.txt into another project, you get all the dependencies and don't miss anything. A short sketch of this practice follows the list.
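    To illustrate that last tip, here is a minimal sketch of what a small, self-contained module CMakeLists.txt might look like. The module name and the nlohmann/json dependency are just placeholders, and the sketch assumes that package exports the nlohmann_json::nlohmann_json target:

    # Hypothetical src/my_module/CMakeLists.txt
    # The dependency is declared right next to the target that consumes it,
    # so this file stays complete if it is ever moved or reused elsewhere.
    CPMAddPackage(
      NAME nlohmann_json
      VERSION 3.11.2
      GITHUB_REPOSITORY "nlohmann/json"
    )

    add_library(my_module my_module.cpp)
    target_link_libraries(my_module PRIVATE nlohmann_json::nlohmann_json)

    If another CMakeLists.txt in the same build adds the same package, CPM recognizes it has already been fetched and reuses the existing copy rather than downloading it again.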

    CPM is a powerful tool that can help simplify the process of managing dependencies in your project. By following the tips outlined in this blog, you can effectively use CPM to manage your dependencies, ensuring that your project is always up-to-date and easing the burden of keeping your dependencies up-to-date as well. With the right approach, CPM can help you save time and effort when managing your project dependencies, allowing you to focus on building and delivering your project.

  • Getting Started with CMake: A Beginner’s Guide to Building Your Project

    CMake is an open-source, cross-platform build system that helps developers to manage their projects and build them on different platforms. It is widely used in the software development community, especially for C and C++ projects. In this blog post, we will explore how to use CMake effectively to manage your projects and improve your workflow as a software developer.

    An Example CMakeLists.txt

    First, let’s start with the basics of CMake. CMake uses a simple, human-readable language called CMakeLists.txt to describe the build process of a project. This file contains instructions on how to find and configure dependencies, set compiler flags, and create the final executable or library. Here is an example of how I typically define my CMake from my open-source ZeroMQ-based RPC library.

    ###############################################################################
    # CMakeLists.txt for zRPC library
    #  - Creates a CMake target library named 'zRPC'
    ###############################################################################
    cmake_minimum_required(VERSION 3.14 FATAL_ERROR)
    
    # Define the project, including its name, version, and a brief description
    project(zRPC
            VERSION "0.0.1"
            DESCRIPTION "0MQ-based RPC client/server library with MessagePack support"
           )
    
    # Define CMake options to control what targets are generated and made available to build
    option(ZRPC_BUILD_TESTS "Enable build of unit test applications" ON)
    
    # Setup default compiler flags
    set(CMAKE_C_STANDARD 11)
    set(CMAKE_C_STANDARD_REQUIRED ON)
    set(CMAKE_CXX_STANDARD 20)
    set(CMAKE_CXX_STANDARD_REQUIRED ON)
    set(compile_options -pedantic-errors
                        -pedantic
                        -Wall
                        -Wextra
                        -Wconversion
                        -Wsign-conversion
                        -Wno-psabi
                        -Werror
        CACHE INTERNAL "Compiler Options"
       )
    
    ###############################################################################
    # Bring in CPM
    ###############################################################################
    include(cmake/CPM.cmake)
    
    ###############################################################################
    # Bring in CPPZMQ header-only API
    ###############################################################################
    CPMAddPackage(
      NAME cppzmq
      VERSION 4.8.1
      GITHUB_REPOSITORY "zeromq/cppzmq"
      OPTIONS "CPPZMQ_BUILD_TESTS OFF"
    )
    
    ###############################################################################
    # Bring in MSGPACK-C header-only API
    ###############################################################################
    CPMAddPackage(
      NAME msgpack
      GIT_TAG cpp-4.1.1
      GITHUB_REPOSITORY "msgpack/msgpack-c"
      OPTIONS "MSGPACK_BUILD_DOCS OFF" "MSGPACK_CXX20 ON" "MSGPACK_USE_BOOST OFF"
    )
    
    ###############################################################################
    # Bring in C++ CRC header-only API
    ###############################################################################
    CPMAddPackage(
      NAME CRCpp
      GIT_TAG release-1.1.0.0
      GITHUB_REPOSITORY "d-bahr/CRCpp"
    )
    if(CRCpp_ADDED)
      add_library(CRCpp INTERFACE)
      target_include_directories(CRCpp SYSTEM INTERFACE ${CRCpp_SOURCE_DIR}/inc)
    endif(CRCpp_ADDED)
    
    ###############################################################################
    # zRPC library
    ###############################################################################
    add_library(${PROJECT_NAME} SHARED)
    target_include_directories(${PROJECT_NAME} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
    target_link_libraries(${PROJECT_NAME} PUBLIC cppzmq msgpackc-cxx CRCpp pthread)
    target_sources(${PROJECT_NAME} PRIVATE  src/zRPCClient.cpp
                                            src/zRPCServer.cpp
                                            src/zRPCPublisher.cpp
                                            src/zRPCSubscriber.cpp
                                   PUBLIC   include/zRPC.hpp
                  )
    target_compile_options(${PROJECT_NAME} PUBLIC ${compile_options})
    
    
    ###############################################################################
    # Test applications
    ###############################################################################
    if (ZRPC_BUILD_TESTS)
      add_executable(client tests/client.cpp)
      target_link_libraries(client zRPC)
      target_compile_options(client PUBLIC ${compile_options})
    
      add_executable(server tests/server.cpp)
      target_link_libraries(server zRPC)
      target_compile_options(server PUBLIC ${compile_options})
    
      add_executable(publisher tests/publisher.cpp)
      target_link_libraries(publisher zRPC)
      target_compile_options(publisher PUBLIC ${compile_options})
    
      add_executable(subscriber tests/subscriber.cpp)
      target_link_libraries(subscriber zRPC)
      target_compile_options(subscriber PUBLIC ${compile_options})
    
      include(cmake/CodeCoverage.cmake)
      append_coverage_compiler_flags()
      add_executable(unittest tests/unit.cpp)
      target_link_libraries(unittest zRPC)
      target_compile_options(unittest PUBLIC ${compile_options})
    
      setup_target_for_coverage_gcovr_xml(NAME ${PROJECT_NAME}_coverage
                                          EXECUTABLE unittest
                                          DEPENDENCIES unittest
                                          BASE_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
                                          EXCLUDE "tests"
    				                             )
    endif(ZRPC_BUILD_TESTS)

    Once you have your CMakeLists.txt file created, you can use the CMake command-line tool to generate the build files for a specific platform, such as Makefiles or Visual Studio project files. It is considered best practice to keep your build files separated from your source files, so I am in the habit of creating a “_bld” directory for that purpose.

    mkdir _bld; cd _bld
    cmake ..
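    If your CMake is new enough (3.13 or later), you can also configure and build without changing directories; a small equivalent sketch:

    # Configure into _bld and drive the build from the project root
    cmake -S . -B _bld
    cmake --build _bld -j 4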

    CMake Targets

    Targets are the basic building blocks of a CMake project. They represent the executable or library that is built as part of the project. Each target has a unique name and is associated with a set of source files, include directories, and libraries that are used to build it.

    CMake also supports creating custom targets, which can be used to run arbitrary commands as part of the build process, such as running tests or generating documentation. You can specify properties for the target, like include directories, libraries, or compile options. You can also specify dependencies between the targets, so that when one target is built, it will automatically build any targets it depends on.

    This is a really powerful feature that CMake provides because when I define my library target, I define what it needs to build such as the source files, includes, and external libraries. Then, when I define my executable, I only need to specify the library that it depends on — the requisite includes and other libraries that need to be linked in come automatically!

    add_library(${PROJECT_NAME} SHARED)
    target_include_directories(${PROJECT_NAME} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
    target_link_libraries(${PROJECT_NAME} PUBLIC cppzmq msgpackc-cxx CRCpp pthread)
    target_sources(${PROJECT_NAME} PRIVATE  src/zRPCClient.cpp
                                            src/zRPCServer.cpp
                                            src/zRPCPublisher.cpp
                                            src/zRPCSubscriber.cpp
                                   PUBLIC   include/zRPC.hpp
                  )
    target_compile_options(${PROJECT_NAME} PUBLIC ${compile_options})
    
    # The executable only needs to depend on zRPC now,
    # not all the other dependencies and include directories
    add_executable(client tests/client.cpp)
    target_link_libraries(client zRPC)

    Dependency Management

    One of the most important aspects of CMake is its ability to help you find and use dependencies. CMake provides a number of built-in commands that can be used to find and configure dependencies, such as find_package and find_library. These commands can be used to locate and configure external libraries, such as Boost or OpenCV, and make them available to your project. This can save you a lot of time and effort compared to manually configuring dependencies for each platform, which is how it was done with plain Makefiles in the past.
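    As a minimal sketch of that built-in approach (assuming Boost is already installed on your system, and using a hypothetical my_app target), find_package locates the library and exposes imported targets you can link against:

    # Locate an installed Boost and link against its imported targets
    find_package(Boost 1.70 REQUIRED COMPONENTS system filesystem)

    add_executable(my_app main.cpp)
    target_link_libraries(my_app PRIVATE Boost::system Boost::filesystem)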

    In my example above, I use a tool called CPM, or the CMake Package Manager. This is a thin wrapper around CMake's FetchContent and find_package functionality. One huge advantage of this tool is that it can not only be used to find and use local packages, but it can also pull packages at a specific version or tag from remote git repositories. You can see how I used this to pull in the cppzmq, msgpack, and CRCpp packages that my library depends on.

    ###############################################################################
    # Bring in CPM
    ###############################################################################
    include(cmake/CPM.cmake)
    
    ###############################################################################
    # Bring in CPPZMQ header-only API
    ###############################################################################
    CPMAddPackage(
      NAME cppzmq
      VERSION 4.8.1
      GITHUB_REPOSITORY "zeromq/cppzmq"
      OPTIONS "CPPZMQ_BUILD_TESTS OFF"
    )
    
    ###############################################################################
    # Bring in MSGPACK-C header-only API
    ###############################################################################
    CPMAddPackage(
      NAME msgpack
      GIT_TAG cpp-4.1.1
      GITHUB_REPOSITORY "msgpack/msgpack-c"
      OPTIONS "MSGPACK_BUILD_DOCS OFF" "MSGPACK_CXX20 ON" "MSGPACK_USE_BOOST OFF"
    )
    
    ###############################################################################
    # Bring in C++ CRC header-only API
    ###############################################################################
    CPMAddPackage(
      NAME CRCpp
      GIT_TAG release-1.1.0.0
      GITHUB_REPOSITORY "d-bahr/CRCpp"
    )
    if(CRCpp_ADDED)
      add_library(CRCpp INTERFACE)
      target_include_directories(CRCpp SYSTEM INTERFACE ${CRCpp_SOURCE_DIR}/inc)
    endif(CRCpp_ADDED)

    Cross-Platform Build Support

    Another powerful feature of CMake is its ability to generate build files for multiple platforms. For example, you can use CMake to generate Makefiles for Linux, Visual Studio project files for Windows, or Xcode project files for macOS. This allows you to easily build and test your project on different platforms, without having to manually configure the build process for each one.

    # Basic command to generate Makefiles (Linux/MacOS)
    cmake -G "Unix Makefiles" ..
    
    # Basic command to generate Visual Studio build files
    cmake -G "Visual Studio 16" -A x64 ..
    
    # More complex command from the VS Code CMake extension performing cross-compilation for ARM
    /usr/bin/cmake --no-warn-unused-cli -DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=TRUE \
        -DCMAKE_BUILD_TYPE:STRING=Release \
        -DCMAKE_C_COMPILER:FILEPATH=/usr/bin/arm-linux-gnueabihf-gcc-10 \
        -DCMAKE_CXX_COMPILER:FILEPATH=/usr/bin/arm-linux-gnueabihf-g++-10 \
        -DARCH:STRING=armv7 -DENABLE_TESTS:STRING=ON \
        -S/workspaces/zRPC -B/workspaces/zRPC/_bld/ARM/Release \
        -G "Unix Makefiles"

    Best Practices

    To improve your workflow with CMake, there are a few best practices that you should follow:

    • Keep your CMakeLists.txt files small and organized. The build process of a project can become complex, so it’s important to keep your CMakeLists.txt files well-organized and easy to understand.
    • Use variables to define common build options, such as compiler flags or library paths. This makes it easy to change these options globally, without having to modify multiple parts of your CMakeLists.txt files.
    • Use include() and add_subdirectory() commands to split your project into smaller, more manageable parts. This makes it easier to understand the build process, and also makes it easy to reuse parts of your project in other projects. I have found that many, small CMake files are easier to manage and maintain than fewer, large CMake files.
    • Use the install() command to specify where the final executable or library should be installed. This makes it easy to distribute your project to other users.
    • Use the add_custom_command() and add_custom_target() commands to add custom build steps to your project. For example, you can use these commands to run a script that generates source code files or to run a test suite after building.
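    As a rough sketch of how the last few practices fit together (the directory layout, target names, and docs/generate.sh script below are purely hypothetical):

    # Top-level CMakeLists.txt
    cmake_minimum_required(VERSION 3.14 FATAL_ERROR)
    project(example_project VERSION 0.1.0)

    # Common options defined once in a variable and reused by every target
    set(common_flags -Wall -Wextra CACHE INTERNAL "Common compiler options")

    # Each subdirectory contains its own small CMakeLists.txt defining one target
    add_subdirectory(src/core)
    add_subdirectory(apps/cli)

    # Custom target for an extra build step, e.g. regenerating documentation
    add_custom_target(docs
      COMMAND bash ${CMAKE_CURRENT_SOURCE_DIR}/docs/generate.sh
      COMMENT "Generating documentation"
    )

    # src/core/CMakeLists.txt
    add_library(core core.cpp)
    target_compile_options(core PRIVATE ${common_flags})
    install(TARGETS core)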

    By following these best practices, you can effectively use CMake to manage your projects and improve your workflow as a software developer. CMake is a powerful tool that can save you a lot of time and effort, and by mastering its features, you can build and distribute your projects with ease.

  • 4 Must-Have C++17 Features for Embedded Developers

    C++17 is a version of the C++ programming language that was standardized in 2017 and adds new features and improvements to what is considered “modern C++”. Some of the major new features that I have really loved in C++17 include:

    • Structured bindings: Allows you to easily extract multiple variables from a single tuple or struct, and to use them in a more readable way.
    • Inline variables: Allows for the definition of variables with the “inline” keyword in classes and elsewhere.
    • New attributes: Allows for more readable code by marking certain areas with specific attributes that the compiler understands.
    • std::shared_mutex: Allows for multiple “shared” locks and a single “exclusive” lock. This is basically a standard read/write mutex!

    In embedded systems, support for C++17 will depend on the compiler and platform you are using. Some more popular compilers, such as GCC and Clang, have already added support for C++17. However, support for C++17 for your project may be limited due to the lack of resources and the need to maintain backwards compatibility with older systems.

    A good summary of the new features and improvements to the language can be found on cppreference.com. They also provide a nice table showing compiler support for various compilers.

    Structured Bindings

    I already wrote about my love for structured bindings in another post (and another as well), but this article would be incomplete without listing this feature!

    My main use of structured bindings is to extract named variables from a std::tuple, whether that be a tuple of my own creation or one returned to me by a function.
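    For example, unpacking a tuple returned by a function looks like this (a small sketch; parseVersion is a made-up helper):

    #include <string>
    #include <tuple>

    // Hypothetical helper that returns several related values at once
    std::tuple<int, int, std::string> parseVersion()
    {
        return {4, 1, "release"};
    }

    int main()
    {
        // Each tuple element gets a readable name instead of std::get<0>(), std::get<1>(), ...
        auto [verMajor, verMinor, channel] = parseVersion();
        return (verMajor >= 4 && channel == "release") ? 0 : 1;
    }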

    But I just realized there is another use for them that makes things so much better when iterating over std::maps:

    // What I used to write
    for (const auto &val : myMap)
    {
        // Print out map data, key:value
        std::cout << val.first << ':' << val.second << std::endl;
    }
    
    // What I now write
    for (const auto &[key, value] : myMap)
    {
        // Print out map data, key:value
        std::cout << key << ':' << value << std::endl;
    }

    The second for loop is much more readable and much easier for a maintainer to understand what is going on. Conscious coding at its best!

    Inline Variables

    An inline variable is a variable that can be defined in a header file and is guaranteed to refer to the same entity in every translation unit that includes that header. The linker merges the definitions into a single shared copy, so you can define the variable directly in the header without violating the one-definition rule.

    One of the main benefits of inline variables is that they allow for better control over the storage duration and linkage of variables, which can be useful for creating more flexible and maintainable code.

    I find this new feature most useful when declaring static variables inside my class. Now I can simply declare a static class variable like this:

    class MyClass
    {
      public:
        static const inline std::string MyName = "MyPiClass";
        static constexpr inline double MYPI = 3.14159265359;
        static constexpr double TWOPI = 2*MYPI; // note that inline is not required here because constexpr implies inline
    };

    This becomes especially useful when you would like to keep a library to a single header, since you can avoid hacks that complicate the code and make it less readable.

    New C++ Attributes

    C++17 introduces a few standard attributes that make annotating your code for the compiler much nicer. These attributes are as follows.

    [[fallthrough]]

    This attribute is used to allow a case body to fall through to the next case without compiler warnings.

    In the example given by C++ Core Guidelines ES.78, having a case body fall through to the next case leads to hard-to-find bugs, and it is just plain hard to read. However, there are certain instances where this is absolutely appropriate. In those cases, you simply add the [[fallthrough]]; attribute where the break statement would normally go.

    switch (eventType) {
    case Information:
        update_status_bar();
        break;
    case Warning:
        write_event_log();
        [[fallthrough]];
    case Error:
        display_error_window();
        break;
    }

    [[maybe_unused]]

    This attribute is used to mark entities that might be unused, to prevent compiler warnings.

    Normally, when writing functions that take arguments the body does not use, the approach has been to cast those arguments to void to eliminate the warning; many static analyzers actually recommend this. The Core Guidelines suggest simply not providing a name for those arguments, which is the preferred approach.
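    For reference, those two earlier approaches look something like this (the function and parameter names are made up for illustration):

    #include <iostream>

    static void handle(int eventId) { std::cout << "event " << eventId << '\n'; }

    // Older style: cast the unused argument to void to silence the warning
    void onEventCast(int eventId, void *context)
    {
      (void)context; // explicitly discard the unused parameter
      handle(eventId);
    }

    // Core Guidelines style: leave the unused parameter unnamed
    void onEventUnnamed(int eventId, void * /*context*/)
    {
      handle(eventId);
    }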

    However, in the cases where the argument is conditionally used, you can mark the argument with the [[maybe_unused]] attribute. This communicates to the maintainer that the argument is not used in all cases, but is still required.

    RPC::Status RPCServer::HandleStatusRequest(
        const RPC::StatusRequest &r [[maybe_unused]])
    {
      if (!m_ready)
      {
        return RPC::Status();
      }
      return ProcessStatus(r);
    }

    This attribute can also be used to mark a static function as possibly unused, such as when it is only called conditionally based on whether DEBUG builds are enabled.

    [[maybe_unused]] static std::string toString(const ProcessState state);

    [[nodiscard]]

    This attribute is extremely useful when writing robust and reliable library code. When you mark a function or method with this attribute, the compiler will warn (or, with -Werror, generate an error) whenever a caller ignores the return value.

    Many times, developers will discard return values by casting the function to void, like this.

    (void)printf("Testing...");

    This is against the Core Guidelines ES-48, but how do you get the compiler to generate errors for your functions in a portable, standard way? With [[nodiscard]]. When a developer fails to check the return value of your function (i.e., they don’t store the result in some variable or use it in a conditional), the compiler will tell them there is a problem.

    // This code will generate a compiler error
    [[nodiscard]] bool checkError(void)
    {
      return true;
    }
    
    int main(void)
    {
      checkError();
      return 0;
    }
    
    // Error generated
    scratch.cpp:36:13: error: ignoring return value of ‘bool checkError()’, declared with attribute ‘nodiscard’ [-Werror=unused-result]
       36 |   checkError();
    
    
    // However, this takes care of the issue because we utilize the return value
    if (checkError())
    {
      std::cout << "An error occurred!" << std::endl;
    }

    I love how this can be used to make your users think about what they are doing!

    Shared (Read/Write) Mutex

    A shared lock allows multiple threads to simultaneously read a shared resource, while an exclusive lock allows a single thread to modify the resource.

    In many instances, it is desirable to have a lock that protects readers from reading stale data. That is the whole purpose of a mutex. However, with a standard mutex, if one reader holds the lock, then additional readers have to wait for the lock to be released. When you have many readers trying to acquire the same lock, this can result in unnecessarily long wait times.

    With a shared lock (or a read/write mutex), the concept of a read lock and a write lock are introduced. A reader must wait for the write lock to be released, but will simply increment the read lock counter when taking the lock. A writer, on the other hand, must wait for all read and write locks to be released before it can acquire a write lock. Essentially, readers acquire and hold a shared lock, while writers acquire and hold an exclusive lock.

    Here is an example of how to use a shared_mutex:

    #include <iostream>
    #include <thread>
    #include <mutex>
    #include <shared_mutex>
    
    std::shared_mutex mtx;
    int sharedCount = 0;
    
    void writevalue()
    {
        for (int i = 0; i < 10000; ++i)
        {
            // Get an exclusive (write) lock on the shared_mutex
            std::unique_lock<std::shared_mutex> lock(mtx);
            ++sharedCount;
        }
    }
    
    void read()
    {
        for (int i = 0; i < 10000; ++i)
        {
            // Get a shared (read) lock on the shared_mutex
            std::shared_lock<std::shared_mutex> lock(mtx);
            std::cout << sharedCount << std::endl;
        }
    }
    
    int main()
    {
        std::thread t1(writevalue);
        std::thread t2(read);
        std::thread t3(read);
    
        t1.join();
        t2.join();
        t3.join();
    
        std::cout << sharedCount << std::endl;
        return 0;
    }
    

    Here you can see that std::shared_mutex is used to protect the shared resource sharedCount. Thread t1 increments the counter using an exclusive lock, while threads t2 and t3 read the counter using shared locks. This allows for concurrent read operations and exclusive write operations, and it greatly improves performance when you have a high number of read operations relative to write operations.

    With C++17, this type of lock is standardized and part of the language. I have found this to be extremely useful and makes my code that much more portable when I need to use this type of mutex!


    C++17 offers a wide range of new features that provide significant improvements to the C++ programming language. The addition of structured bindings, inline variables, and the new attributes make code more readable and easier to maintain, and the introduction of the “std::shared_mutex” type provides performance improvements in the situations where that type of lock makes sense. Overall, C++17 provides an even more modern and efficient programming experience for developers. I encourage you to start exploring and utilizing these new features in your own projects!

  • Optimizing Boot Time on an iMX6ULL Processor

    I recently had the opportunity, or the critical need rather, to optimize the boot time of some applications on an iMX6ULL processor. For anyone unfamiliar with this processor, it is a single core ARM processor running a Cortex-A7 core at up to 900 MHz. It provides a number of common interfaces for a microprocessor, including 16-bit DDR, NAND/NOR flash, eMMC, Quad SPI, UART, I2C, SPI, etc.

    This particular implementation is running a recent Linux kernel with multiple Docker applications to execute the business logic. The problem was that the time from power on to the system being ready and available was over 6 minutes! Obviously, this was a huge issue and not acceptable performance from a user perspective, so I was tasked with reducing that boot time. I was not given any target numbers, so I started by simply doing the best that I could with the hardware.

    TL;DR

    By optimizing the kernel configuration and boot time of my applications, I was able to cut the boot time in half. Through this process, I was also able to find some issues in the hardware design of our board that we were able to address to improve read/write speeds to our eMMC.

    In total, I was able to shave over three minutes off of our boot time. It still is not optimal, but I am waiting on the hardware rework before continuing to see how much more is required.


    Boot/Initialization Period     | Original Time         | Optimized Time
    Bootloader and Kernel Load     | 30 seconds            | 22 seconds
    systemd Service Initialization | 5 minutes 30 seconds  | 2 minutes 30 seconds
    Optimization Summary

    Approach

    My typical approach to optimizing a Linux system always starts by working to optimize the kernel. This can usually save only a few seconds of boot time, but it was what I was most familiar with, so that is where I began.

    The Linux kernel is designed to allow various drivers and features to be enabled or disabled at runtime via kernel modules. However, depending on your system and requirements, having everything available by way of modules can significantly slow down your boot, since each module must be loaded before the functionality it provides is available. My approach when working in embedded systems with custom hardware is to optimize the kernel configuration so that all the necessary drivers and all critical features are built into the kernel. Everything else that may be needed later at runtime by applications can be left as a kernel module, and features that are definitely not required are removed completely.

    Please note that there is a fine line here when optimizing your kernel. If you build everything in, you bloat the size of the kernel image, which means it takes longer to read from disk. Leaving some things as modules keeps the kernel image smaller, thus optimizing load time. But if you have too many modules, those have to be loaded before the parts of the system that depend on them, so there is a balance to be struck.
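    As a rough illustration only (the exact options depend entirely on your hardware and requirements), a boot-time fragment for a board like this might build the storage driver in, leave an optional interface as a loadable module, and drop an unused subsystem completely:

    # boot-optimization.cfg -- illustrative kernel configuration fragment
    CONFIG_MMC=y                   # eMMC/SD support built in (needed to mount the rootfs)
    CONFIG_MMC_SDHCI_ESDHC_IMX=y   # i.MX eMMC/SD host controller driver built in
    CONFIG_CAN=m                   # CAN support deferred to a loadable module
    CONFIG_SOUND=n                 # audio subsystem not used on this board, removed entirely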

    After I focused on the kernel, I turned my eye toward optimizing the run time and boot applications. Our system made use of systemd to manage system initialization, so it was clear that the right tool to use was going to be systemd-analyze. This is an extremely useful tool to use when you need to see how all the services interact one with another during initialization. You can get a list of how long each service took during the initialization process, view a graph of how those are all related, and even see the critical chain of services through initialization. The typical process to optimize your initialization is two-fold: a) identify what services are being run that you do not require and can turn off, and b) identify what services are taking an abnormal amount of time, so you can optimize those specific services.

    Finally, after the kernel was optimized, and I had removed all the unnecessary services, I was able to focus on the few services and applications that were simply taking forever to start up.

    Kernel Optimization

    In order to optimize the kernel, I had to make use of a few commands in the Yocto build system to manipulate the kernel configuration. In Yocto, most systems make use of kernel configuration fragments, which are a really easy and clean way to manage certain features and drivers in a hardware and kernel agnostic fashion.

    The way this works is that your primary kernel recipe in Yocto will manage a default kernel configuration that defines typical features and drivers for that version of the kernel. This primary layer typically will come from the microprocessor vendor, in this case from NXP. Your hardware layer will provide configuration fragments that override those features. Finally, you can have additional meta layers that provide additional fragments if necessary. You can read more about kernel configuration in the Yocto Project documentation here.

    bitbake linux-lmp-fslc-imx-rt -c menuconfig
    Using menuconfig from bitbake.

    When dealing with optimizations, however, you are typically overriding almost everything in the default configuration. Rather than editing the defconfig file itself, or providing an entirely new one, I stuck with the fragment approach.

    # Create a kernel configuration from the default configuration
    # (i.e., build the kernel recipe through the configure step)
    bitbake linux-yocto -c kernel_configme -f
    
    # Make the necessary configuration changes desired for the fragment
    bitbake linux-yocto -c menuconfig
    # Exit menuconfig, saving the configuration
    
    # Create the fragment
    bitbake linux-yocto -c diffconfig
    
    # Copy the fragment to your repository and add to your recipe(s)

    You can read more about creating kernel configuration fragments in this quick tip.

    When editing the kernel configuration via menuconfig, I made all my desired changes, then generated the configuration fragment and built the kernel. This resulted in many QA warnings about certain features being defined in the configuration chain but not ending up in the final configuration. Those warnings simply mean that the build system is having a hard time verifying what the correct configuration should be. This is usually because your defconfig file contains a CONFIG_* option that is not part of the final configuration because you implicitly removed it (i.e., you removed a feature that the specific option depends on). To address this, simply take the CONFIG_* option mentioned in the QA warning and drop it into your optimization fragment with “=n” at the end to explicitly disable it.

    Initialization Optimization via systemd-analyze

    systemd is a popular choice for managing system components in Linux systems; almost all modern Linux distributions make use of it in one way or another. It also provides a wide range of tools to help administrators manage their systems.

    In my case, since my system was using systemd to manage all the run time services and initialization, I was able to make use of the systemd-analyze tool. Here is a short snippet from the man page:

    systemd-analyze may be used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager, and to verify the correctness of unit files. It is also used to access special functions useful for advanced system manager debugging.

    For this exercise, I made use of the commands blame, plot, dot, and critical-chain.

    To start my debug process, I wanted to know at a high level what services were taking the longest to initialize. To do this, I made use of the blame and plot commands.

    systemd-analyze blame will look at all the services that systemd manages and provide a sorted list of the amount of time each took to initialize. This was exactly the kind of information I was after and gave me a starting point in my search for what to optimize. However, when looking at this data you have to be a little careful, because services are often interdependent. The initialization time of one service could be really long simply because it cannot finish initializing until another service has completed its own initialization.

    user@localhost:~$ systemd-analyze blame
    22.179s NetworkManager-wait-online.service
    21.986s docker-vxcan.service
    19.405s docker.service
    14.119s dev-disk-by\x2dlabel-otaroot.device
    12.161s systemd-resolved.service
    10.973s systemd-logind.service
    10.673s containerd.service
     9.702s systemd-networkd.service
     9.443s systemd-networkd-wait-online.service
     6.789s ModemManager.service
     5.690s fio-docker-fsck.service
     5.676s systemd-udev-trigger.service
     5.552s systemd-modules-load.service
     5.127s btattach.service
     5.062s user@1001.service
     4.670s sshdgenkeys.service
     3.793s NetworkManager.service
     3.780s systemd-journald.service
     2.945s systemd-timesyncd.service
     2.837s bluetooth.service
     2.409s systemd-udevd.service
     2.084s zram-swap.service
     1.677s systemd-userdbd.service
     1.621s avahi-daemon.service
     1.284s systemd-remount-fs.service
     1.080s dev-mqueue.mount
     1.025s sys-kernel-debug.mount
     1.015s modprobe@fuse.service
     1.010s sys-kernel-tracing.mount
     1.004s modprobe@configfs.service
     1.004s modprobe@drm.service
      997ms kmod-static-nodes.service
      871ms systemd-rfkill.service
      832ms systemd-journal-catalog-update.service
      732ms systemd-tmpfiles-setup.service
      668ms systemd-sysusers.service
      592ms systemd-user-sessions.service
      562ms systemd-tmpfiles-setup-dev.service
      533ms ip6tables.service
      507ms iptables.service
      464ms systemd-sysctl.service
      390ms systemd-journal-flush.service
      364ms systemd-random-seed.service
      359ms systemd-update-utmp-runlevel.service
      346ms systemd-update-utmp.service
      318ms user-runtime-dir@1001.service
      232ms tmp.mount
      220ms var.mount
      219ms sys-fs-fuse-connections.mount
      210ms var-rootdirs-mnt-boot.mount
      207ms systemd-update-done.service
      201ms var-volatile.mount
      165ms sys-kernel-config.mount
      147ms docker.socket
      140ms sshd.socket
      134ms ostree-remount.service
      123ms dev-zram0.swap

    Because blame doesn’t show any dependency information, systemd-analyze plot > file.svg was the next tool in my quiver to help. This command will generate the same information as blame but will place it all in a nice plot form, so you can see what services started first, how long each took, and also pick out some dependencies between services.

    Section of plot generated with systemd-analyze plot command.

    My main use of the blame and plot commands was to identify services that were obviously taking a lot of time, as well as services that were simply not required. systemctl --type=service also helped with this process by simply listing the service units systemd had loaded.
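
    As a rough sketch, these are the kinds of listings I was combing through (assuming a systemd-based image with systemctl available):

    # List the service units that systemd currently has loaded
    systemctl --type=service

    # List only the services that are enabled to start at boot
    systemctl list-unit-files --type=service --state=enabled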

    Dependencies can still be hard to spot definitively when just looking at the plot, however. Because of that, the systemd-analyze dot '<pattern>' command is really handy. When optimizing my boot time, I would use blame and plot to identify potential culprits and then run them through dot to see how they were related. For example, I found that my network configuration was taking an abnormal amount of time, so I looked at systemd-analyze dot '*network*.*' to see how systemd-networkd and the related services were interacting with one another. This helped me understand that half of the services being started in support of the network were not actually required (such as NetworkManager and the IPv6 support services). By disabling those few services, I was able to save over 30 seconds of boot time.
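
    For reference, the dot output is Graphviz text, so you can pipe it through the graphviz dot tool (assuming it is installed on your host) to get a viewable graph, and then disable the units you decide are unnecessary. The unit names below are just examples from my system:

    # Render the dependency graph for network-related units as an SVG
    systemd-analyze dot '*network*.*' | dot -Tsvg > network-deps.svg

    # Disable units identified as unnecessary (names will vary per system)
    systemctl disable NetworkManager-wait-online.service
    systemctl disable NetworkManager.service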

    Finally, I made use of the systemd-analyze critical-chain command to view just the critical chain of services. This command prints the time-critical chain of units that leads to a fully initialized system; only the units contributing the most time are shown in this chain.

    user@localhost:~$ systemd-analyze critical-chain
    The time when unit became active or started is printed after the "@" character.
    The time the unit took to start is printed after the "+" character.
    
    multi-user.target @1min 21.457s
    `-docker.service @1min 3.755s +17.683s
      `-fio-docker-fsck.service @26.784s +36.939s
        `-basic.target @26.517s
          `-sockets.target @26.493s
            `-sshd.socket @26.198s +258ms
              `-sysinit.target @25.799s
                `-systemd-timesyncd.service @23.827s +1.922s
                  `-systemd-tmpfiles-setup.service @23.224s +518ms
                    `-local-fs.target @22.887s
                      `-var-volatile.mount @22.596s +222ms
                        `-swap.target @22.393s
                          `-dev-zram0.swap @22.208s +113ms
                            `-dev-zram0.device @22.136s

    This is useful because it is a quick-and-dirty way of getting much of the same information as using blame, plot, and dot separately. However, because it doesn't show all services, it can only help you optimize the worst offenders.
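
    You can also pass a unit name to critical-chain to narrow the output to a single service. For example, to inspect just the chain leading up to docker.service, one of my worst offenders:

    # Show only the critical chain leading to docker.service
    systemd-analyze critical-chain docker.service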

    Application Optimization

    Finally, after all the kernel and service optimizations, I was still seeing a couple of application services taking the majority of the startup time. Specifically, these applications were Docker and a Docker plugin for managing CAN networks.

    These services were the last in the chain to start, so there were no other dependent services they were waiting on: once the network was up and configured, they would start. Because they did not depend on anything else, I was able to zero in on what was causing those applications to take so long to start and optimize them directly.

    First, Docker Compose was taking well over two minutes to start and load containers. Second, my Docker plugin for CAN networks was taking well over 20 seconds to start up as well.

    When I checked my version of Docker Compose, I found that it was still running a Python-based version of Compose, rather than the newer and faster Golang-based version. By upgrading my version of Compose, I was able to reduce my startup time from well over two minutes to about 1 minute 40 seconds.
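
    A quick way to check which flavor you are running (a rough sketch; exact output varies by version): the Python-based v1 tool is the standalone docker-compose binary, while the Go-based v2 ships as a plugin for the docker CLI.

    # Python-based Compose v1 (standalone binary)
    docker-compose --version

    # Go-based Compose v2 (docker CLI plugin)
    docker compose version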

    I also found that the CAN network plugin was written in Python as well. So, rather than continue using it, I rewrote that plugin in Rust, which also gave me the opportunity to fix a couple of shortcomings I found in the plugin itself. This reduced the initialization time of the plugin from over 30 seconds to under a second — a huge savings!

    488ms docker-vxcan.service

    Conclusion

    Overall, this was a great exercise for me in the steps to optimize the boot process of a Linux system. While I certainly could optimize the system some more, I believe the gains to be had will be minimal — at least until I have some new hardware with the flash memory interfaces running at full speed. Then I can revisit this and see if I can get things any faster!

    What is your experience with speeding up Linux boot and initialization on embedded systems? Any tips and tricks you would like to share? Comment them below!

  • Quick Tip: Generating Kernel Configuration Fragments with Bitbake

    Generating a kernel configuration fragment is a common task in kernel development when working with the Yocto Project. Configuration fragments are extremely useful to define groups of kernel configuration options that you can then reuse between projects simply by adding the fragment to your kernel bbappend file.

    For example, if I wanted to enable USB to serial UART device drivers via a kernel configuration fragment, I’d run through the following steps:

    1. Configure the kernel to setup the configuration baseline to work from
    2. Run menuconfig to enable/disable the desired options
    3. Generate the fragment by running the diffconfig command of bitbake
    4. Copy the fragment to my recipe overlay directory
    5. Add a reference to the fragment to my kernel bbappend file and rebuild

    1. Default Kernel Configuration

    Most Yocto kernels are going to have a defconfig file that defines the default options for the kernel. When you run the kernel_configme task with bitbake, this defconfig is copied to the kernel build directory as the .config for the build.

    # Create a kernel configuration from the default configuration
    # (i.e., build the kernel recipe through the configure step)
    bitbake linux-lmp-fslc-imx-rt -c kernel_configme -f

    2. Make Configuration Changes for Fragment

    Once you have configured your kernel with bitbake, you can edit the kernel configuration using menuconfig. Simply make all the changes required for your device and save the configuration by exiting menuconfig.

    # Make the necessary configuration changes desired for the fragment
    bitbake linux-lmp-fslc-imx-rt -c menuconfig
    Use menuconfig to make the desired changes to the kernel configuration.

    3. Generate the Kernel Configuration Fragment

    Now that you have saved the kernel configuration, the .config file in your build folder is updated with your changes. However, these changes only reside in your build directory and will not persist. To keep them around, you need to generate the configuration fragment with the diffconfig command of bitbake.

    # Create the fragment
    bitbake linux-lmp-fslc-imx-rt -c diffconfig

    The output of this command will tell you where the fragment was stored. In my case, it was stored in:

    /build/lmp/_bld/tmp-lmp/work/imx6ullwevse-lmp-linux-gnueabi/linux-lmp-fslc-imx-rt/5.10.90+gitAUTOINC+ec9e983bd2_fcae15dfd5-r0/fragment.cfg

    4. Copy Fragment to Kernel Overlay

    Now, I can copy that configuration fragment into the directory alongside my recipe and add it to my recipe overlay via the bbappend.

    cp /build/lmp/_bld/tmp-lmp/work/imx6ullwevse-lmp-linux-gnueabi/linux-lmp-fslc-imx-rt/5.10.90+gitAUTOINC+ec9e983bd2_fcae15dfd5-r0/fragment.cfg ../layers/meta-consciouslycode/recipes-kernel/linux/linux-lmp-fslc-imx-rt/prolific-pl2303.cfg

    5. Add Fragment to Kernel Recipe and Rebuild Kernel

    Finally, add the fragment to the kernel recipe’s bbappend and rebuild the kernel!

    Kernel recipe bbappend file containing the SRC_URI addition with the configuration fragment.
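
    The screenshot above shows the SRC_URI addition; as a minimal sketch, the bbappend might look something like this (assuming the fragment from step 4 sits in a directory named after the recipe, and noting that older Yocto releases use the _prepend override syntax instead of :prepend):

    # linux-lmp-fslc-imx-rt_%.bbappend (hypothetical layout)
    FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
    SRC_URI += "file://prolific-pl2303.cfg"
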
    # Build the new kernel!
    bitbake linux-lmp-fslc-imx-rt

    Conclusion

    You can use this method to capture any group of kernel configuration options you want in a fragment. That fragment can then be reused across many projects to easily enable and disable certain kernel features.

  • Using Bitbucket Pipelines to Automate Project Releases

    Generating releases for your project shouldn't be a chore, yet it often proves to be a pain. If you don't release very often or on a regular schedule, you have to go back and remember how to do it each time. This can result in inconsistencies between releases, which makes things harder on your users. Read on to learn how you can define steps in your Bitbucket Pipelines to automate your project releases.

    I recently set up automated releases for a C++ project in Bitbucket. This process automates the steps to create a release branch, bump the version per Semantic Versioning, generate a changelog according to Conventional Commits, and push the release back to git, fully tagged and ready to go. Here is how I did it.

    Define the common steps in your pipeline

    # Default image to use - version 3.x
    image: atlassian/default-image:3
    definitions:
      services:
        docker:
          memory: 7128
      steps:
          - step: &Build-Application
              name: Build Application
              image: rikorose/gcc-cmake
              size: 2x
              script:
                # Update the submodules
                - git submodule update --recursive --init
                # Install the dependencies
                - apt-get update && export DEBIAN_FRONTEND=noninteractive
                - apt-get -y install --no-install-recommends uuid-dev libssl-dev libz-dev libzmq5 libzmq3-dev
                # Print the Linux version.
                - uname -a
                # Print the gcc version.
                - gcc --version
    
                # Print the CMake version.
                - cmake --version
                # Setup the build
                - mkdir _bld && cd _bld
                # Call CMake
                - cmake -DCMAKE_BUILD_TYPE=Debug ..
                # Build project
                - make -j10
          - step: &Build-Container
              name: Test Container Build
              size: 2x
              script:
                # Update the submodules
                - git submodule update --recursive --init
                # Build the container
                - docker build --file ./Dockerfile .
              services:
                - docker

    In this snippet, I define which image to use by default for all the steps. I chose the default Atlassian image, but pinned it to version 3. If you do not specify a version here (with the :3), you wind up with a really old version of the image that is kept around for backwards compatibility.

    I also define two common build steps, called Build-Application and Build-Container, which I use later on in my pipeline.

    Define the pipeline(s)

    pipelines:
      custom:
        generate-release:
          - step:
              name: Generate release branch
              script:
                - git checkout master
                - git pull --ff-only
                - git checkout -b release/next
                - git push -u origin release/next
      pull-requests:
        '**': # all PRs
          - step: *Build-Application
          - step: *Build-Container
      branches:
        master:
          - step: *Build-Application
          - step: *Build-Container

    This snippet defines a few pipelines: one that runs each time the master branch is updated on the server, one that runs for every pull request, and one custom pipeline that must be run manually.

    The master branch and pull-requests pipelines are identical and simply reuse the steps defined earlier. The custom pipeline, however, has a single role: create a new branch called release/next and push it back to the server. As you'll see in the next section, this triggers another branch pipeline.

    Define the release generation pipeline

      branches:
        # master branch defined here previously
        release/next:
          - step:
              name: Generate Release
              script:
                # Configure npm to work properly as root user in Ubuntu
                - npm config set user 0
                - npm config set unsafe-perm true
                # Install necessary release packages and generate release, pushing back to repo
                - npm install -g release-it @release-it/conventional-changelog @j-ulrich/release-it-regex-bumper --save-dev
                - release-it --ci
          - parallel:
            - step:
                name: Publish to External Continuous Delivery System
                script:
                  - export APP="name_of_app"
                  - git clone --recursive https://url.of.your.cd.com/your-cd-repo.git
                  - cd containers
                  - git checkout testing
                  - git submodule update --recursive --init
                  - cd ${APP} && git checkout master
                  - git pull
                  - export VERSION=$(git tag | sort -V | tail -1)
                  - >
                    echo "Updating ${APP} to Release Version: ${VERSION}"
                  - git checkout ${VERSION}
                  - cd ../
                  - git add ${APP}
                  - >
                    git -c user.name='Bitbucket Pipeline' -c user.email='bitbucket-pipeline@witricity.com' commit -m "${APP}: update to version ${VERSION}"
                  - git push
            - step:
                name: Create Pull Request
                caches:
                  - node
                script:
                  - apt-get update
                  - apt-get -y install curl jq
                  - export DESTINATION_BRANCH="master"
                  - export CLOSE_ME="true"
                  - >
                    export BB_TOKEN=$(curl -s -S -f -X POST -u "${BB_AUTH_STRING}" \
                      https://bitbucket.org/site/oauth2/access_token \
                      -d grant_type=client_credentials -d scopes="repository" | jq --raw-output '.access_token')
                  - >
                    export DEFAULT_REVIEWERS=$(curl https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/default-reviewers \
                      -s -S -f -X GET \
                      -H "Authorization: Bearer ${BB_TOKEN}" | jq '.values' | jq 'map({uuid})' )
                  - >
                    curl https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/pullrequests \
                      -s -S -f -X POST \
                      -H 'Content-Type: application/json' \
                      -H "Authorization: Bearer ${BB_TOKEN}" \
                      -d '{
                            "title": "Release '"${BITBUCKET_BRANCH}"'",
                            "description": "Automated PR release :)",
                            "source": {
                              "branch": {
                                "name": "'"${BITBUCKET_BRANCH}"'"
                              }
                            },
                            "destination": {
                              "branch": {
                                "name": "'"${DESTINATION_BRANCH}"'"
                              }
                            },
                            "close_source_branch": '"${CLOSE_ME}"',
                            "reviewers": '"${DEFAULT_REVIEWERS}"'
                          }'

    This is a rather large block, but it is pretty straightforward.

    The first step, called "Generate Release", is where the release magic happens. It uses the npm tool release-it to generate the release, driven by a configuration file in the repository named .release-it.json. Based on that file, it will automatically do the following:

    • Bump the version, based on how you define it in .release-it.json
    • Generate and update a changelog
    • Git commit, tag, and push
    • And much more if you so choose…

    Since this is run in the release/next branch, the version, changelog and all other changes are made and pushed here. At that point, I wanted to do two things: first, publish this new release to my external continuous delivery system; and second, automatically generate a pull request in Bitbucket to get the release back in the master branch.
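
    Before wiring release-it into the pipeline, it can help to sanity-check the configuration locally. As a sketch (assuming Node.js is already installed), a dry run shows what release-it would do without committing, tagging, or pushing anything:

    # Install release-it and the plugins referenced in .release-it.json
    npm install -g release-it @release-it/conventional-changelog @j-ulrich/release-it-regex-bumper

    # Preview the release steps without making any changes
    release-it --dry-run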

    Note that when installing release-it on Ubuntu 22.04 (or older), you may run into issues with older versions of nodejs. To remedy this, run these commands:

    # Remove old version of nodejs
    sudo apt-get purge nodejs
    sudo apt-get autoremove # remove any lingering dependencies
    
    # Install updated nodejs (20 is latest at time of this writing)
    curl -sL https://deb.nodesource.com/setup_20.x | sudo -E bash -
    sudo apt-get install -y nodejs
    
    # Finally, install release-it
    npm install -g release-it @release-it/conventional-changelog @j-ulrich/release-it-regex-bumper --save-dev

    My .release-it.json file looks like this:

    {
      "git": {
        "commitMessage": "[skip ci] ci: release v${version}"
      },
      "plugins": {
        "@release-it/conventional-changelog": {
            "preset": {
                "name": "conventionalcommits",
                "commitUrlFormat": "{{host}}/{{owner}}/{{repository}}/commits/{{hash}}",
                "compareUrlFormat": "{{host}}/{{owner}}/{{repository}}/compare/{{currentTag}}..{{previousTag}}",
                "types": [
                  {
                    "type": "feat",
                    "section": "Features"
                  },
                  {
                    "type": "fix",
                    "section": "Bug Fixes"
                  },
                  {
                    "type": "perf",
                    "section": "Performance Improvements"
                  }
                ]
            },
            "infile": "CHANGELOG.md"
        },
        "@j-ulrich/release-it-regex-bumper": {
            "out": [
                {
                    "file": "CMakeLists.txt",
                    "search": "VERSION {{semver}}",
                    "replace": "VERSION {{versionWithoutPrerelease}}"
                },
                {
                    "file": "Dockerfile",
                    "search": "Version={{semver}}",
                    "replace": "Version={{versionWithoutPrerelease}}"
                }
            ]
        }
      }
    }

    At this point, to generate a release, just go to the pipelines page for your repository and select Run pipeline. Then choose what branch you want to use for the basis of your release (I typically release from master) and choose the ‘custom: generate-release’ pipeline and off you go!

    Conclusion

    This process greatly simplifies my life when it comes to releasing a new version of my projects. Could this be fully automated? Absolutely — I'm just not there quite yet.

    I hope you find this useful!


  • Why Writing Good Comments Makes You a Great Developer

    Commented, xkcd.com #156

    When you think of a great developer, someone who writes good comments is probably not at the top of the list. However, writing good comments is one of the most important skills a developer can have. Good comments not only help you understand your code better, but they also make it easier for others to read and work with. In this blog post, we'll look at why writing good comments makes you a great developer, along with some tips for improving your commenting style. Because if you are mindful in your commenting, it is an indication that you are mindful in your coding!

    Comments Should Be Present

    Well-written code comments are like a good road map. They provide clear direction and helpful information that can make working with code much easier. Good code comments can be incredibly useful, providing critical insights and details that might otherwise be easy to miss. Think of them as important signposts along the way that can save a developer hours of debugging.

    Here is an example of something I came across recently that was not obvious. I was writing a CMake function to add unit tests using CTest, and I was passing in a CMake string as my "TEST_COMMAND" variable. When I called add_test with that variable as the value for the COMMAND option, the test would fail to run properly, especially if the command took command-line arguments! After spending some time digging, I learned that the COMMAND option to add_test should be a CMake list rather than a string for the arguments to be passed properly.

    I commented my CMakeLists.txt as such to ensure that was clear to the reader.

    # TRICKY: Change the command string to a CMake list for use in add_test()
    string(REPLACE " " ";" TEST_LIST ${TEST_COMMAND})
    add_test(NAME ${TEST_NAME} COMMAND ${TEST_LIST})

    Without the “TRICKY” comment, a maintainer of this code may look at this and see potential for an optimization, removing the conversion, and then they would be searching for solutions to the same problem I had already solved.

    Comments Should Use Proper Spelling, Casing, Grammar, and Full Sentences

    Good code comments are spelled correctly. They are also properly cased. This attention to detail shows that the programmer cares about their work.

    Take a look at the two examples of code below. Which one would you say is written by a mindful developer? Which one would you rather work to maintain?

    // copute ac/a+c
    double prodOverSum(int a, double c)
    
    {// git the nmeratr for the rtn
      double n = (double)a * c;
    
       // get the denomination for the value
      int d = a + (int)c;
    
    /// comput and return the quotient
      return n / (double)d;
    }
    // Compute the product over sum for
    // the provided values, a and c.
    //        (A * C)
    //   X = ---------
    //        (A + C)
    double prodOverSum(int a, double c)
    {
      double prod = (double)a * c;
      int sum = a + (int)c;
    
      // Cast sum to a double to ensure
      // the compiler does not promote prod
      // to an integer and perform integer
      // division
      return prod / (double)sum;
    }

    It's clear that the programmer cares about their craft when they put so much effort into writing clear, readable comments. It would be almost impossible to maintain this level of detail by chance, which makes me believe it is intentional as opposed to accidental! That gives me confidence that the code itself is well-written, properly tested, and ready for use.

    When writing comments, it is important to use full sentences with proper grammar as well. This will help ensure that your comments are clear and easy to understand. Additionally, using proper grammar will help to give your comments a more professional appearance.

    Comments Should Be Smartly Formatted

    Comments are meant to convey a message about the surrounding code to the developer. Sometimes information is best conveyed in a particular format. So, when commenting your code, ensure that your comment is formatted in such a way that conveys your message as clearly and concisely as possible.

    Code formatters can help and hinder this. If your comments require lots of horizontal scrolling to read, then consider breaking them into multiple lines or rewording to be more concise! However, sometimes a new line in the middle of your documentation is undesirable and you will need to instruct your formatter to leave it alone by wrapping with “control comments”.

    For example, consider this method. If I were to line up all the columns in the table neatly, it would make for some very long lines of text, and most formatters would break those lines into multiple ones. Instead, make judicious use of white space to get the message across to the reader. If you have to use multiple lines, you decide where those line breaks go – don't leave it up to your formatter!

    void Quaternion2DCM(const double * const q, double * const dcm)
    {
      // Don't do this! Your formatter will either add new lines or ignore this
      // if you add protection blocks around the table, making for really long 
      // lines that are harder to read.
      // To compute the DCM given a quaternion, the following definition is used
      //       +-------------------------------------------------------------------------------------------+
      //       | (q4^2 + q1^2 - q2^2 - q3^2)    2*(q1q2 + q3q4)                2*(q1q3 - q2q4)             |
      // dcm = | 2*(q1q2 - q3q4)                (q4^2 - q1^2 + q2^2 - q3^2)    2*(q2q3 - q1q4)             |
      //       | 2*(q1q3 + q2q4)                2*(q2q3 − q1q4)                (q4^2 - q1^2 - q2^2 + q3^2) |
      //       +-------------------------------------------------------------------------------------------+
      // clang-format off
      // Adapted from https://www.vectornav.com/resources/inertial-navigation-primer/math-fundamentals/math-attitudetran
      // clang-format on
      dcm[0] = q[3]*q[3] + q[0]*q[0] - q[1]*q[1] - q[2]*q[2];
      dcm[1] = 2*(q[0]*q[1] + q[2]*q[3]);
    ...
      dcm[7] = 2*(q[1]*q[2] - q[0]*q[3]);
      dcm[8] = q[3]*q[3] - q[0]*q[0] - q[1]*q[1] + q[2]*q[2];
    }
    void Quaternion2DCM(const double * const q, double * const dcm)
    {
      // Instead, you can do this - just simple white space, still very readable
      // by the user and it fits on a single line!
      // To compute the DCM given a quaternion, the following definition is used
      //       +-------------------------------------------------------------------+
      //       | (q4^2 + q1^2 - q2^2 - q3^2)    2*(q1q2 + q3q4)    2*(q1q3 - q2q4) |
      // dcm = | 2*(q1q2 - q3q4)    (q4^2 - q1^2 + q2^2 - q3^2)    2*(q2q3 - q1q4) |
      //       | 2*(q1q3 + q2q4)    2*(q2q3 − q1q4)    (q4^2 - q1^2 - q2^2 + q3^2) |
      //       +-------------------------------------------------------------------+
      // Adapted from https://www.vectornav.com/resources/inertial-navigation-primer/math-fundamentals/math-attitudetran
      dcm[0] = q[3]*q[3] + q[0]*q[0] - q[1]*q[1] - q[2]*q[2];
      dcm[1] = 2*(q[0]*q[1] + q[2]*q[3]);
    ...
      dcm[7] = 2*(q[1]*q[2] - q[0]*q[3]);
      dcm[8] = q[3]*q[3] - q[0]*q[0] - q[1]*q[1] + q[2]*q[2];
    }

    How Much Should I Comment?

    Good code will be somewhat self-documenting, but there is always a limit. For example, the method below is so obvious I don’t really need to comment on it, do I?

    int sum(const int a, const int b)
    {
      return a + b;
    }

    However, for something more involved, comments can clarify a lot of things for the developer and can link them to more information, as in the example of the Quaternion2DCM method described above.

    So, then, how do you define what is obvious? For me, I think in terms of my average user and/or maintainer. What sort of things do I expect them to understand? What about more junior software engineers who may need to work in this code? Basic math and logic knowledge seems fair to assume. Syntax is a given; I would even expect them to be able to read more advanced syntax, such as lambdas or function pointers. However, anything beyond that typically indicates the need for a detailed comment that explains things.

    It also helps me to think in terms of what will help me understand this design decision tomorrow, or 6 months from now, or even a year from now. Maybe it is obvious to me now what this conditional with multiple clauses means and why the design is this way, but I'll likely forget tomorrow and want to refactor it.

    Comments In Your IDE

    To make working with comments easier, look for ways to get your IDE to help you! I use VS Code for nearly all my coding right now and I found the extension Better Comments to be extremely helpful.

    Image Credit: Better Comments Plugin

    With this plugin I can add additional formatting and mark comments specifically. For example, in my code I tend to leave myself reminders using TODO comments, and I often prioritize them by leaving a plain TODO or prefixing it with '*' or '!'.

    // ! TODO: This is an important TODO that needs to be taken care of immediately
    // * TODO: This is an important TODO that should be taken care of soon
    // TODO: This is a TODO that should be taken care of eventually

    With this plugin my comments are color-coded for me, making it easy to see what needs to be done first.


    In the end, your code is a reflection of you, so it only makes sense that your commenting reflects how much care and attention to detail there really is in what's being written. Poor commenting sends a message of its own: readers will be able to tell that they need more time before putting any kind of faith into your code, or even worse, they'll just move on to another, potentially better, implementation!

    What do you think? What makes a good comment in your book? Let us know in the comments below!