As developers, we are constantly striving to write clean, maintainable code. But as codebases grow and evolve, it can be challenging to ensure consistency across the entire project. Consistent code formatting can make it easier to read and understand code, and can even reduce bugs and errors.
To address this issue, many teams use code formatters like clang-format to automatically enforce a consistent style across their codebase. But how can we ensure that our code is properly formatted before it even reaches the code review stage? This is where Bitbucket Pipelines comes in.
By integrating clang-format checks into your Bitbucket Pipeline, you can automatically test that your code is properly formatted on every push or pull request. This helps catch formatting errors early in the review process, making it easier to maintain a consistent codebase and ultimately reducing technical debt.
In this blog post, we’ll walk through how to set up a Bitbucket Pipeline step to test code formatting with clang-format. We’ll also discuss best practices for code formatting and how to integrate these checks into your development workflow. So, let’s get started!
Add a consistency check step to your pipeline
To test that formatting is correct with clang-format in a Bitbucket Pipeline, you can add a step to install clang-format and then run a check against your code files. Here’s an example of how you could do this:
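A minimal sketch of such a step is shown below. The image name and default branch are assumptions; adapt them to whatever your pipeline already uses.

```yaml
definitions:
  steps:
    - step: &Check-Formatting
        name: Check code formatting with clang-format
        image: atlassian/default-image:4
        script:
          # Install clang-format
          - apt-get update && apt-get install -y clang-format
          # Log the version for debugging
          - clang-format --version
          # Check all C/C++ sources; fail if clang-format would change anything
          - find . \( -name '*.h' -o -name '*.c' -o -name '*.hpp' -o -name '*.cpp' \) -print0 | xargs -0 clang-format -style=file -output-replacements-xml | grep "<replacement " && exit 1 || exit 0

pipelines:
  pull-requests:
    '**':
      - step: *Check-Formatting
```

This sketch does the following: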
Installs clang-format by running apt-get update and apt-get install -y clang-format.
Prints the version of clang-format by running clang-format --version. This is useful for debugging and logging information in your pipeline.
Runs a find command to search the repository for all C/C++ source files (extensions *.h, *.c, *.hpp, and *.cpp). If you have code in files with other extensions, you can add them to the search. Just add -o -name '*.<your extension>' after the '*.cpp' search term. Mind the space between the last term and the closing ')'!
Runs clang-format against the found code files with the -style=file flag, which tells clang-format to use the formatting style specified in the .clang-format file in the root of your repository. The command generates an XML report of the formatting changes.
Pipes the XML report to grep "<replacement ", which searches for any lines in the report that contain the <replacement> tag. This tag indicates that clang-format made a formatting change to the code. If any replacements are found, the pipeline will exit with an error code (exit 1). Otherwise, it will exit successfully (exit 0).
The pipeline step will pass if this command chain exits with code 0 and fail otherwise. Place this step first in your pull request pipeline, and when a pull request is opened it will fail quickly if there are formatting issues. This signals to the developer that they need to fix those issues before continuing.
Note that this checks the entire repository, not just the changes!
Best Formatting Practices
Ensuring consistent formatting is definitely a good habit to work into your development practice. To make it easy for developers, many IDEs support formatting files according to a defined standard. For example, Eclipse allows you to define a style and easily format highlighted sections or the entire file. VS Code can format your code when you save a file, in addition to formatting specific sections of code you are working on.
Formatting options in VS Code.
Outside of your IDE, my favorite option is to use a git hook to format your patches as you commit them to the repository. By using the pre-commit hook and some clang-format scripts, you can ensure that any new code or changed code gets formatted properly before getting pushed to the server. Read more about how to set that up in my other post.
In my opinion, best practices for enforcing consistent formatting come down to two approaches, depending on whether you are working with an established code base or a new one.
Consistent Formatting for Established Code Base
For established code bases, it may not be desirable or feasible to format all the code. That would require a huge amount of retest and revalidation that just may not be possible. In that case, set up your pipeline to just check the formatting on patches in the pull request. As developers make changes to fix bugs or add features, those new changes will go through the testing and validation process and also get the correct formatting applied. Over time, the consistency of the formatting will increase as more and more of the code gets updated.
To help developers automate this, set up the development environment to format changes with a git pre-commit hook. This will cut down on failures in the pipeline step that enforces proper formatting of patches on pull requests.
An example pipeline that enforces formatting on just patches is shown here:
```yaml
definitions:
  steps:
    - step: &Check-Patch-Formatting
        name: Check code formatting with clang-format-diff
        image: atlassian/default-image:4
        script:
          # Install clang-format (which provides clang-format-diff)
          - apt-get update && apt-get install -y clang-format
          # Run clang-format-diff on modified files
          - if [[ $(git diff -U0 --no-color --relative origin/master...origin/${BITBUCKET_BRANCH} | clang-format-diff -p1) ]]; then exit 1; fi

pipelines:
  pull-requests:
    '**':
      - step: *Check-Patch-Formatting
```
Consistent Formatting for New Code Base
When working with a new project and a new code base, there is no reason why you can’t enforce formatting right off the bat.
First, set up your IDE to format your files on save, to ensure they stay properly formatted. In addition, ensure you have the pre-commit hook installed to validate formatting (and to fix it) before committing. Finally, set up the pipeline with two steps: one for validating the patch on each PR, and another that checks the entire code base as part of a nightly (or weekly) build process. The first pipeline in this post shows this type of pipeline step (for a pull-request).
Conclusion
In conclusion, maintaining consistent code formatting is crucial for writing readable, maintainable code. With Bitbucket Pipelines and clang-format, we can automate the process of checking our code formatting and catch errors before they make it into production. By setting up a simple Pipeline step to run clang-format checks, we can ensure that our codebase stays tidy and easy to read.
Remember that consistent code formatting is just one piece of the puzzle when it comes to writing high-quality code. It’s important to also focus on good design principles, writing clear and concise code, and properly testing our applications. By making conscious decisions and striving to improve our development practices, we can create software that is not only functional, but also maintainable and sustainable in the long run.
Additional Resources
Bitbucket Pipelines documentation: The official documentation provides a detailed guide on setting up and using Pipelines, including how to integrate with external tools like clang-format.
clang-format documentation: The official documentation provides a comprehensive guide on using clang-format to format code, including style options and command line usage.
GitHub Action for clang-format: If you use GitHub instead of Bitbucket, you can check out this GitHub Action which can help you integrate clang-format checks into your workflow.
A lot has been written on design patterns, but they were woefully absent in my education as a developer. I learned about them after a few years as a developer, and they were game-changing for me. In this post, I want to share with you a few of the most useful design patterns for C/C++ that I often use in my own designs.
When I first learned about design patterns, they were presented to me as the proper way to organize classes to solve specific types of problems. In fact, the first design pattern I was introduced to was the state pattern for implementing state machines. Up until that point, I was resorting to complex case statements with lots of conditional checks to implement my state machine logic, simply because that was all I knew.
I was taught that I could encapsulate all the transition logic into state classes. I was blown away by how much that single refactor could simplify my code, and how much easier it was to add and remove states.
Over time, I started to find that the state pattern applied to so many other problems, and then that got me into trouble. I had this shiny new tool in my toolbox and I just wanted to use it for everything!
When Should Patterns Be Applied?
Thanks to some great mentors, I was able to step back and start looking at other patterns that existed — the factory pattern, adapters, singletons, and strategies. All of these different tools could be used in my designs to solve different problems.
And that is the key — design patterns are tools, nothing more. As with any tool in your toolbox, there is a time and place to use it. Design patterns are no different. For C/C++ developers, they can lead to some strikingly simple and elegant solutions, but sometimes they can be a rabbit hole that will just add unnecessary abstraction to your code when a simple, straightforward solution would do.
As conscious developers, our goal is to design amazing applications, using the best tools available to us. We understand that if a certain tool does not work for a given problem, we move on. We don’t try to force a tool on a design, which is one of the major pitfalls for new developers once they learn about design patterns.
Types of Design Patterns
The literature on design patterns typically breaks the common patterns down into three primary categories, each of which deals with a different aspect of the development process:
Creational Patterns
Structural Patterns
Behavioral Patterns
I’m going to share with you a few of my favorite design patterns from each category. You can find tons of information about design patterns through a simple search; however, one resource I have found extremely valuable is refactoring.guru. That site has lots of good information, all of which can be applied to being a conscious developer!
Creational Design Patterns
Creational design patterns are used in software development to provide a way to create objects in a flexible and reusable way. They help to encapsulate the creation process of an object, decoupling it from the rest of the system. This makes it easier to change the way objects are created or to switch between different implementations.
Creational design patterns provide a variety of techniques for object creation, such as abstracting the creation process into a separate class, using a factory method to create objects, or using a prototype to create new instances. These patterns are useful when the creation of objects involves complex logic or when the object creation process needs to be controlled by the system.
Some examples of creational design patterns include the Singleton pattern, which ensures that only one instance of a class is created, and the Factory pattern, which provides a way to create objects without specifying the exact class of object that will be created.
Overall, creational design patterns help to improve the flexibility and reusability of software systems by providing a more structured and standardized approach to object creation.
Singleton Creational Pattern
The Singleton design pattern is a creational pattern that ensures that a class has only one instance and provides a global point of access to that instance. This pattern is useful in situations where only one instance of a class should exist in the program, such as managing system resources or ensuring thread safety. However, it should be used with caution, as it can introduce global state and make testing more difficult.
UML describing the Singleton Creational Pattern
In C++, this can be achieved by doing two things:
Make the constructor for your class private.
Provide a static method to access the singleton instance.
The static method creates the singleton instance the first time it is called and returns it on subsequent calls. This static method typically returns either a reference to the newly created object or a pointer to it. I prefer to use references because they are safer, but there have been cases where I create a std::shared_ptr in my static function and return that to the caller to access the singleton.
C++ Example of the Singleton Creational Pattern
To implement the Singleton pattern in C++, a common approach is to define a static method in the class definition. Then, in the implementation file, define the static method and have it define a static member variable of the class type. The static method will initialize the singleton instance if it has not been created yet, and return it otherwise. Here’s an example:
```cpp
class Singleton {
public:
    static Singleton& getInstance() {
        // The singleton instance is created on the first call to getInstance()
        static Singleton instance;
        return instance;
    }

private:
    Singleton() {}  // The constructor is private to prevent typical instantiation
    Singleton(const Singleton&) = delete;             // Delete the copy constructor to prevent copying
    Singleton& operator=(const Singleton&) = delete;  // Delete the assignment operator to prevent copying also
};
```
In this example, the getInstance() method returns a reference to the Singleton instance, creating it on the first call using a static variable. The constructor is private to prevent instantiation from outside the class. The copy constructor and assignment operator are deleted to prevent copying.
Your getInstance() method can take arguments as well and pass them along to the constructor, if that is desired. Be aware, though, that those arguments are only used to create the object the first time, and never again. For this reason, I consider it best practice for getInstance() to take no arguments, so callers are not misled.
Abstract Factory Creational Pattern
The Abstract Factory design pattern is a creational pattern that provides an interface for creating families of related objects without specifying their concrete classes. This pattern is useful in situations where there are multiple families of related objects, and the actual type of object to be created should be determined at runtime. For example, in a messaging application, there might be multiple types of messages that need to be created. Each type of message should be consistent in interface, but with specific behavior.
UML describing a modified Abstract Factory Creational Pattern
C++ Example of the Abstract Factory Creational Pattern
In C++, this can be achieved by defining an abstract base class for each family of related objects. Then define concrete subclasses for each type of object in each family. Here’s an example from a recent project of mine:
```cpp
struct Event {
    std::string m_uuid;                                 // UUID for the event
    std::string m_msg;                                  // message related to the event
    std::chrono::system_clock::time_point m_timestamp;  // time event occurred

    virtual ~Event() {}
    virtual void handle() = 0;
    virtual void clear() = 0;
};

struct ErrorEvent : public Event {
    uint32_t m_id;  // Error ID for the specific error that occurred
    void handle() override {
        // Perform "handling" action specific to ErrorEvent
    }
    void clear() override {
        // Perform "clear" action specific to ErrorEvent
    }
};

struct StateChange : public Event {
    uint32_t m_state;  // state identifier
    void handle() override {
        // Perform "handling" action specific to StateChange
    }
    void clear() override {
        // Perform "clear" action specific to StateChange
    }
};

class Component {
public:
    template <class T,
              std::enable_if_t<std::is_base_of_v<Event, T>, bool> = true>
    std::shared_ptr<T> GetEvent(void) { return std::make_shared<T>(); }
};
```
In this example, the Event base class defines virtual methods for handling the event and clearing the event. The ErrorEvent and StateChange classes are concrete subclasses that implement these methods for the specific events. The Component class defines a GetEvent() method for creating Events. Now, when I want to create a new event type, I just derive it from the Event base class and call GetEvent() to create a new instance of the event.
Pay special attention to the std::enable_if_t constraint on GetEvent(). It ensures that the compiler will give an error if I try to call GetEvent() with a type that is not derived from my Event class. And that is ideal: it turns what were once run-time errors into compile-time errors.
This is just one use of the Abstract Factory pattern. There are others that allow you to define specific behavior for different platforms or architectures as well!
Structural Design Patterns
Structural design patterns are used in software development to solve problems related to object composition and structure. These patterns help to simplify the design of a software system by defining how objects are connected to each other and how they interact.
Structural design patterns provide a variety of techniques for object composition, such as using inheritance, aggregation, or composition to create complex objects. They also provide ways to add new functionality to existing objects without modifying their structure.
Some examples of structural design patterns include the Adapter pattern, which allows incompatible objects to work together by providing a common interface, and the Decorator pattern, which adds new behavior to an object by wrapping it with another object.
Overall, structural design patterns help to improve the modularity, extensibility, and maintainability of software systems by providing a more flexible and adaptable way to compose objects and structures. They are particularly useful in large and complex software systems where managing the relationships between objects can become challenging.
Adapter Structural Pattern
The Adapter design pattern is a structural pattern that allows incompatible interfaces to work together. In C++, this pattern is used when a class’s interface doesn’t match the interface that a client is expecting. An adapter class is then used to adapt between the two interfaces.
UML describing the Adapter Structural Pattern
C++ Example of the Adapter Structural Pattern
To use this in C++, define a class that implements the interface that the client expects, and internally use an instance of the incompatible class that needs to be adapted. Here’s an example:
In this example, the ExpectedInterface class defines the interface that the client expects, which is the request() method. The IncompatibleInterface class has a method called specificRequest() that is not compatible with the ExpectedInterface. The Adapter class implements the ExpectedInterface and internally uses an instance of the IncompatibleInterface class to make the specificRequest() method compatible with the ExpectedInterface.
Using the Adapter pattern allows us to reuse existing code that doesn’t match the interface that the client expects, without having to modify that code. Instead, we can write an adapter class that mediates between the incompatible interface and the client’s expected interface.
Decorator Structural Pattern
The Decorator design pattern is a structural pattern that allows adding behavior or functionality to an object, without affecting other objects of the same class. This pattern is commonly used to attach additional responsibilities to an object by wrapping it with a decorator object.
UML describing the Decorator Structural Pattern
C++ Example of the Decorator Structural Pattern
To use this pattern in C++, define an abstract base class that defines the interface for both the component and the decorator classes. Then define a concrete component class that implements the base interface and a decorator class that also implements the same interface and holds a pointer to the component object it is decorating. Here’s an example:
```cpp
class Component {
public:
    virtual ~Component() {}
    virtual void operation() = 0;
};

class ConcreteComponent : public Component {
public:
    void operation() override {
        // Perform some operation
    }
};

class Decorator : public Component {
public:
    Decorator(Component* component) : component_(component) {}
    void operation() override {
        component_->operation();
    }
private:
    Component* component_;
};

class ConcreteDecoratorA : public Decorator {
public:
    ConcreteDecoratorA(Component* component) : Decorator(component) {}
    void operation() override {
        Decorator::operation();
        // Add some additional operation
    }
};

class ConcreteDecoratorB : public Decorator {
public:
    ConcreteDecoratorB(Component* component) : Decorator(component) {}
    void operation() override {
        // Do not call base class operation() to remove that functionality from this instance
        // Add some different additional operation
    }
};
```
In this example, the Component class defines the interface that the concrete component and decorator classes will implement. The ConcreteComponent class is a concrete implementation of the Component interface. The Decorator class is an abstract class that also implements the Component interface and holds a pointer to a component object it is decorating. The ConcreteDecoratorA and ConcreteDecoratorB classes are concrete decorators that add additional behavior to the ConcreteComponent object by calling the Decorator base class’s operation() method and adding their own additional behavior.
Note that if you only need a single decorator (i.e., you do not require decorators A and B), you can get away with simply adding the necessary additional operations directly to the Decorator class in the example. However, following the pattern as shown requires little extra work up front and will make it easier when you need to define additional concrete decorators down the road.
Using the Decorator pattern allows us to add or remove behavior from an object without affecting other objects of the same class. It also allows us to use composition instead of inheritance to extend the functionality of an object.
Facade Structural Pattern
The Facade design pattern is a structural pattern that provides a simplified interface to a complex subsystem of classes, making it easier to use and understand. A Facade class can then be used by clients to access the subsystem without having to know about the complexity of the lower-level classes.
UML describing the Facade Structural Pattern
C++ Example of the Facade Structural Pattern
In C++, this pattern can be used to create a high-level interface that hides the complexity of the lower-level subsystem. To implement this pattern in C++, we can define a Facade class that provides a simplified interface to the subsystem classes. Here’s an example:
In this example, the SubsystemA and SubsystemB classes represent the complex subsystem that the Facade class will simplify. The Facade class provides a simplified interface to the subsystem by hiding the complexity of the lower-level classes. The Facade class also holds instances of the subsystem classes and calls their methods to perform the operation.
This pattern allows us to simplify the interface to a complex subsystem, making it easier to use and understand. But, my favorite use of it is to isolate clients from changes to the subsystems by abstraction.
Behavioral Design Patterns
Behavioral design patterns are used in software development to address problems related to object communication and behavior. These patterns provide solutions for managing the interactions between objects and for coordinating their behavior.
Behavioral design patterns provide a variety of techniques for object communication, such as using message passing, delegation, or collaboration to manage the interactions between objects. They also provide ways to manage the behavior of objects by defining how they respond to events or changes in the system.
Some examples of behavioral design patterns include the Observer pattern, which allows objects to be notified when a change occurs in another object, and the Command pattern, which encapsulates a request as an object, allowing it to be parameterized and queued.
Overall, behavioral design patterns help to improve the flexibility, modularity, and extensibility of software systems by providing a more structured and standardized way to manage object communication and behavior. They are particularly useful in systems that involve complex interactions between objects, such as user interfaces, network protocols, or event-driven systems.
Mediator Behavioral Pattern
The Mediator design pattern is a behavioral pattern that promotes loose coupling between objects by encapsulating their communication through a mediator object. In C++, this pattern can be used to reduce dependencies between objects that communicate with each other.
UML describing the Mediator Behavioral Pattern
C++ Example of the Mediator Behavioral Pattern
To implement the Mediator pattern in C++, we can define a Mediator class that knows about all the objects that need to communicate with each other. The Mediator class then provides a centralized interface for these objects to communicate through. Here’s an example:
In this example, the Component classes represent objects that need to communicate with each other. The Mediator class provides a centralized interface for the Component classes to communicate through. The ConcreteMediator class knows about all the Component objects and provides the sendMessage() method to send messages between them.
I like to use this pattern in a system where I have multiple components, such as interfaces to external hardware modules or subsystems. Those interface classes will inherit the Component base class and communicate one with another via the Mediator. In this manner, if one Component changes its interface, then I don’t need to go change all other Component classes — just the changing Component and the necessary portions of the Mediator.
Using the Mediator pattern allows us to reduce dependencies between objects that communicate with each other, making our code more maintainable and easier to understand. It also promotes loose coupling between objects, which makes it easier to change the way objects communicate without affecting other parts of the system.
Strategy Behavioral Pattern
The Strategy design pattern is a behavioral pattern that defines a family of algorithms, encapsulates each algorithm, and makes them interchangeable at runtime. This pattern allows the algorithms to vary independently from clients that use them.
I have found this pattern extremely useful when I have an application that needs to support many protocols to various clients. Using the strategy pattern, I can easily swap which protocol is in use at any given time based on the client connection.
UML describing the Strategy Behavioral Pattern
C++ Example of the Strategy Behavioral Pattern
To implement this pattern, we define an abstract Strategy class that represents the interface for all algorithms. Then, we can define concrete implementations of the Strategy class for each algorithm. In my protocol case, the Strategy class was my base Protocol class. Then I had concrete protocol classes derived from the base Protocol.
In this example, the Strategy class represents the interface for all algorithms. The ConcreteStrategyA and ConcreteStrategyB classes represent concrete implementations of the Strategy class for two different algorithms.
The Context class represents the client that uses the algorithms. It has a setStrategy() method to set the current algorithm and an executeStrategy() method to execute the current algorithm.
Using the Strategy pattern allows us to change the behavior of a system at runtime by simply changing the current algorithm in the Context object. Conscious use of this pattern promotes code reuse, flexibility, and maintainability.
State Behavioral Pattern
I saved my favorite for last!
The State design pattern is a behavioral pattern that allows an object to alter its behavior when its internal state changes. This pattern is useful when an object’s behavior depends on its state, and that behavior must change dynamically at runtime depending on the state.
Essentially this boils down to defining a State base class that establishes the basic structure for state information and defines the common interface, such as entry(), do(), and exit() methods. (Note that do is a reserved word in C++, so in code that method needs a different name.)
UML describing the State Behavioral Pattern
C++ Example of the State Behavioral Pattern
In C++, we can implement the State pattern using inheritance and polymorphism. We create a State base class that represents the interface for all states. Then, we create concrete implementations of the State class for each possible state of the object. Finally, we define a Context class that acts as the context in which the state machine operates. Clients utilize the interface in the Context class to manipulate the state machine.
Here’s an example:
```cpp
class State {
public:
    virtual ~State() {}
    virtual void entry() = 0;
    virtual void doAction() = 0;  // named doAction() because 'do' is a reserved word in C++
    virtual void exit() = 0;
};

class ConcreteStateA : public State {
public:
    void entry() override {
        // Entry behavior for state A
    }
    void doAction() override {
        // State behavior for state A
    }
    void exit() override {
        // Exit behavior for state A
    }
};

class ConcreteStateB : public State {
public:
    void entry() override {
        // Entry behavior for state B
    }
    void doAction() override {
        // State behavior for state B
    }
    void exit() override {
        // Exit behavior for state B
    }
};

class Context {
public:
    Context(State* state) : state_(state) {}
    void transitionTo(State* state) {
        state_->exit();
        state_ = state;
        state_->entry();
    }
    void request() {
        state_->doAction();
    }
private:
    State* state_;
};
```
In this example, the State class represents the interface for all states. The ConcreteStateA and ConcreteStateB classes represent concrete implementations of the State class for two different states.
The Context class represents the object whose behavior depends on its internal state. It has a transitionTo() method to set the current state and a request() method to trigger a behavior that depends on the current state.
You can also make use of templates in the Context class to define an addState() function. In this manner, you can enforce transitions to only a specific set of State classes and utilize custom lambda functions for the entry(), do(), and exit() functions for each state.
Using the State pattern allows us to change the behavior of an object at runtime by simply changing its internal state. This makes our code more flexible and easier to maintain. It also promotes code reuse, as we can easily add new states by implementing new State classes.
Conclusion
In conclusion, design patterns are a powerful tool in software development that can help us solve common problems and improve our code’s flexibility, maintainability, and scalability. In this post, we have explored several design patterns in C++, including the Singleton, Abstract Factory, Adapter, Decorator, Facade, Mediator, Strategy, and State patterns.
While each pattern has its unique characteristics and use cases, they all share the same goal: to provide a well-structured, reusable, and extensible solution to common software development problems. By understanding and using these patterns, we can write more efficient, robust, and maintainable code that can be easily adapted to changing requirements.
As a conscious software developer, it’s essential to keep learning and improving our skills by exploring new ideas and concepts. Design patterns are an excellent place to start, as they can provide us with a deeper understanding of software architecture and design principles. By mastering design patterns, we can become more efficient and effective developers who can deliver high-quality, scalable, and maintainable software solutions.
Additional Resources
Here are a few additional resources for diving deeper into design patterns for software development.
“Design Patterns: Elements of Reusable Object-Oriented Software” by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. This book is considered the definitive guide to design patterns and is a must-read for anyone interested in the subject.
“Head First Design Patterns” by Eric Freeman, Elisabeth Robson, Bert Bates, and Kathy Sierra. This book offers a more accessible and engaging approach to learning design patterns and is ideal for beginners.
Design Patterns in Modern C++ – This Udemy course covers the basics of design patterns and shows how to apply them using modern C++ programming techniques.
C++ Design Patterns – This GitHub repository contains a collection of code examples for various design patterns in C++.
Refactoring Guru – This website provides an extensive catalog of design patterns with code examples in multiple programming languages, including C++.
Software Engineering Design Patterns – This Coursera course covers the principles and applications of design patterns in software engineering, including C++ examples.
Writing clean and effective code is essential for software developers. Not only does it make the code easier to maintain and update, but it also ensures that the code runs efficiently and without bugs. As a programming language, C/C++ is widely used in many applications, from system programming to game development. To help you write better C/C++ code, I’ve compiled a list of 10 tips from my laundry list of what makes good, clean, and effective C/C++ code. I hope these will guide you in making conscious decisions when coding, since many of these tips can be applied to other languages as well! So, whether you are an experienced C/C++ developer or just starting out, these tips will help you write cleaner, more efficient, and effective code.
Tip #1: Variable Scope Awareness
In C/C++, variables can have three different scopes: global scope, local scope, and member scope. Each has its place in software development, and each comes with its own pros and cons.
My rule of thumb is this: make everything a local variable. If I need access to it in other object methods, I promote it to a member variable. If that still doesn’t work (which is extremely rare), I make it a static global variable. With proper software design, I have found I never need to declare a true global variable, even one protected with appropriate locks.
One last comment when dealing with global variables — you really should always make them const. The C++ Core Guidelines also state that you should prefer scoped objects to objects allocated on the heap.
Tip #2: Use Standard Types When Available
Using standard type definitions in your C/C++ code has several benefits that can make your code more readable, portable, and maintainable. Here are some reasons why you should consider using standard type definitions in your code:
Readability: Standard type definitions like size_t, int32_t, uint64_t, etc. are self-documenting and convey a clear meaning to the reader of your code. For example, using size_t instead of int to represent the size of a container makes it clear that the variable can only hold non-negative integers, which can help prevent bugs.
Portability: Different platforms may have different data types with different sizes and behaviors. By using standard type definitions, you can ensure that your code is portable and will work consistently across different platforms.
Type safety: Using standard type definitions can help prevent bugs caused by type mismatches, such as assigning a signed int to an unsigned int variable, or passing the wrong type of parameter as a function argument.
Code maintenance: Standard type definitions can make your code easier to maintain by reducing the need for manual conversions and ensuring that the types of your variables are consistent throughout your codebase.
Overall, using standard type definitions can help make your code more readable, portable, and maintainable, and following these recommendations can help you make conscious decisions about which type definitions to use in your code.
Tip #3: Organize Related Data Into Objects
When working with complex systems, it is often worthwhile to organize sets of data into objects for three primary reasons: encapsulation, abstraction, and modularity. Each of these is a powerful principle that can help improve your code.
Encapsulation
Encapsulation is a fundamental principle of object-oriented programming and can help make your code more modular and maintainable.
By organizing related data into an object, you can encapsulate the data and the operations that can be performed on it. This allows you to control access to the data and ensure that it is only modified in a safe and consistent way. In addition, you can make changes to the underlying data representation without changing the interface, which means that users of your object don’t have to change as well.
Abstraction
Objects allow you to abstract away the details of the data and provide a simplified interface for interacting with it. This can make your code easier to read and understand, as well as more resistant to changes in the underlying data representation.
Modularity
Organizing related data into an object can help you break down a large, complex problem into smaller, more manageable pieces. Each object can represent a distinct component of the system, with its own data and behavior, that can be developed and tested independently of the other components.
Finally, once you have objects that you are manipulating, you can start returning those objects from your functions. Even cooler than that, you can return tuples containing your object and status information from your methods!
Tip #4: Be Consistent in the Organization of Your Objects
When you organize your data into objects and start defining member variables and methods, be consistent in the organization of your objects. For example, declare all public interface information up front, and keep all protected and private information at the end of the class.
By declaring all private member variables and methods in a single private section, the class definition becomes much easier to read and follow. When I read the GoodExample class definition and see the private keyword, I know that everything after it is private and not accessible to me as a normal user.
Tip #5: Place All Documentation in Header Files
When you document your functions and variables, document them in the header file for one primary reason: keep the interface and implementation separate.
Keeping the interface definition of your object separate from the implementation is a solid object-oriented design principle. The header file is where you define the interface for your users. That is where your users are going to look to understand what the purpose of a function is, how it should be used, what the arguments mean, and what the return value will contain. Many times the user of your object will not have access to the source code, so placing documentation there is pointless, from an interface perspective.
Tip #6: Enforce a Coding Style
Enforcing a code style can bring several benefits to your development process, including:
Consistency: By enforcing a code style, you can ensure that your codebase looks consistent across different files and modules. This can make your code easier to read and understand, and can help reduce the amount of time developers spend trying to figure out how different parts of the codebase work.
Maintainability: A consistent code style can also make your code easier to maintain, as it can help you identify patterns and common practices that are used throughout the codebase. This can make it easier to update and refactor the code, as you can more easily find and update all instances of a particular pattern.
Collaboration: Enforcing a code style can also make it easier to collaborate with other developers, especially if they are working remotely or in different time zones. By using a consistent code style, developers can more easily understand each other’s code and can quickly identify where changes need to be made.
Automation: Enforcing a code style with clang-format can also help automate the code review process, as it can automatically format code to the desired style. This can save time and effort in the code review process, and can ensure that all code is formatted consistently, even if developers have different preferences or habits.
Industry standards: Many organizations and open-source projects have established code style guidelines that are enforced using tools like clang-format. By following these standards, you can ensure that your codebase adheres to best practices and can more easily integrate with other projects.
Tip #7: Be const-Correct in All Your Definitions
A major goal of mine when working in C and C++ is to turn as many potential pitfalls as possible into compiler errors rather than runtime bugs. Striving to be const-correct in everything accomplishes a few things for the conscious coder:
It conveys intent about what the method or variable should do or be. A const method cannot modify an object’s state, and a const variable cannot change its value post-declaration. This can make your code safer and reduce the risk of bugs and unexpected behavior.
It makes your code more readable, as it can signal to other developers that the value of the object is not meant to be changed. This can make it easier for other developers to understand your code and can reduce confusion and errors.
It allows the compiler to make certain optimizations that can improve the performance of your code. For example, the compiler can cache the value of a const object, which can save time in certain situations.
It promotes a consistent coding style, making it easier for other developers to work with your code and reduce the risk of errors and confusion.
It makes your code more compatible with other libraries and frameworks. Many third-party libraries require const-correctness in order to work correctly, so adhering to this standard can make it easier to integrate your code with other systems.
Here are a couple of examples:
class MyConstCorrectClass
{
public:
    MyConstCorrectClass() = default;

    // Not marked const because it modifies the object's state;
    // the argument is immutable, though, and is thus marked const
    void SetFlag(const bool flag) { m_flag = flag; }

    // Marked const because it does not modify state
    bool GetFlag() const { return m_flag; }

private:
    bool m_flag{false};
};
void function1(void)
{
    MyConstCorrectClass A;
    A.SetFlag(true);
    std::cout << "A: " << A.GetFlag() << std::endl;

    const MyConstCorrectClass B;
    B.SetFlag(true); // !! Compiler error because B is const
    std::cout << "B: " << B.GetFlag() << std::endl;
}
Tip #8: Wrap Single-line Blocks With Braces
Single-line blocks, such as those commonly found in if/else statements, should always be wrapped in braces. Beyond the arguments that it increases readability, maintainability, and consistency, for me this is a matter of safety. Consider this code:
if (isSafe())
setLED(LED::OFF);
What happens when I need to take an additional action when the function returns true? A half-asleep developer would simply add the new action right after the setLED(LED::OFF); statement, like this:
if (isSafe())
setLED(LED::OFF);
controlLaser(LASER::ON, LASER::HIGH_POWER);
Now consider the implications of such an action. The controlLaser(LASER::ON, LASER::HIGH_POWER); statement gets run every single time, not just if the function isSafe() returns true. This has serious consequences, which is exactly why you should always wrap your single-line blocks with braces!
if (isSafe())
{
setLED(LED::OFF);
controlLaser(LASER::ON, LASER::HIGH_POWER);
}
Tip #9: Keep Your Code Linear — Return from One Spot
This is also known as the “single exit point” principle, but the core of it is that you want your code to be linear. Linear code is easier to read, maintain, and debug. Returning from a function in multiple places can lead to hard-to-follow logic that obscures what the developer is really trying to accomplish. Consider this example:
std::string Transaction::GetUUID(void) const
{
    std::string uuid = xg::Guid(); // empty ctor for xg::Guid gives a nil UUID
    if (m_library->isActionInProgress())
    {
        return m_library->getActionIdInProgress();
    }
    return uuid;
}
This seems fairly simple to follow and understand, but it doesn’t follow the single-exit-point principle — the flow of the method is non-linear. If the logic in this function ever grows more complex, it can quickly become harder to debug. A simple change — assigning the in-progress ID to uuid instead of returning early — makes the flow linear and encourages future modifications to follow suit.
You may argue that the early-return version is slightly more efficient because it saves the extra copy into the temporary variable uuid. But almost any modern compiler worth using will optimize that copy away, and you’re left with the same performance in both.
A quick bit of wisdom: simple code, even if it has more lines and more assignments, will more often than not perform better than complex code. Why? The optimizer can recognize and optimize simple constructs far more readily than it can complex algorithms that perform the same function!
Conclusion
In this post, we covered a variety of topics related to C++ programming best practices. We discussed the benefits of using standard type definitions, the importance of organizing related data into objects, the placement of function documentation comments, the use of clang-format to enforce code style, the significance of being const-correct in all your definitions, and the reasons why it is important to wrap single-line blocks with braces and to return from only a single spot in your function.
By adhering to these best practices, C++ programmers can create code that is more readable, maintainable, and easy to debug. These principles help ensure that code is consistent and that common sources of errors, such as memory leaks or incorrect program behavior, are avoided.
Overall, by following these best practices, C++ programmers can create high-quality, efficient, and robust code that can be easily understood and modified, even as the codebase grows in size and complexity.
As software developers, we rely on variables to store and manipulate data in our programs. However, it is crucial to understand the scope of a variable and how it affects its accessibility and lifetime. In C and C++, the scope of a variable determines where in the program it can be used and for how long it will exist. In this blog post, we will be exploring the different types of scopes in C/C++ and the best practices for handling them to write clean, maintainable, and effective code.
We will look at global, local, and member scopes and how they affect the lifetime of variables. We will also discuss how to properly handle pointers, which have their own unique set of considerations when it comes to scope. By understanding the different types of scopes and how to handle them, you will be equipped to make conscious decisions about how you use variables in your code, leading to more reliable, efficient, and maintainable programs.
Variable Scope Awareness
Awareness of variable lifetimes and scopes, particularly when it comes to pointers, is critical to writing clean and effective C/C++ code. The lifetime of a variable is the period of time during which it is allocated memory and exists in the program. In C/C++, variables can have three different scopes: global scope, local scope, and member scope.
Global Scope Variables
Global scope variables are declared outside of all functions and are accessible throughout the entire program. They have a longer lifetime and persist throughout the execution of the program, but using too many global scope variables can lead to cluttered code and potential naming conflicts. In my mind, however, the more serious implication of improper use of a global variable is a race condition.
A race condition occurs when two or more threads access a shared resource, such as a global variable, simultaneously and the final result depends on the timing of the access. In a safety critical environment, where errors in the system can have severe consequences, race conditions can cause significant harm.
// Example of a global variable, including a potential race condition
int32_t g_temperature_C = 0;

void thread1(void)
{
    // Read the temperature from the sensor
    g_temperature_C = ReadTemperatureFromSensor();
}

void thread2(void)
{
    if ((g_temperature_C > 0) && (g_temperature_C < 70)) // !! Simple race condition
    {
        // Do some safety critical work
    }
    else
    {
        // Manage temperature out of bounds (i.e., cool down or heat up)
    }
}
In the example above, thread2 is doing some safety critical work, but only when g_temperature_C is within a certain range, which is updated in thread1. If the temperature is out of bounds, then the system needs to take a different action. The issue here is that the wrong action can lead to serious consequences, either for the safety of the system, or in the case where humans are involved, the safety of the user.
In this case, a global variable is a poor choice of scope for g_temperature_C.
If you find you do have to use global variables, you can still limit their scope to the specific compilation unit where they are defined (i.e., the file where the variable is declared). You can do this by adding the static keyword to the variable declaration. The advantage to this is that it limits the scope of the variable to just the specific module, rather than the entire program.
// Limit scope of global variable to the specific compilation unit (i.e., this file)
static int32_t g_temperature_C = 0;
Local Scope Variables
Local scope variables, on the other hand, are declared within a function or block and are only accessible within that specific scope. They have a shorter lifetime, are allocated on the stack, and are automatically deallocated from memory once the function or block has finished execution. Using local scope variables is recommended over global variables as they limit the potential for naming conflicts, allow for cleaner code, and also eliminate race conditions.
// Example of a local variable, resolving the race condition above
void thread2(void)
{
    int32_t l_temperature_C = ReadTemperatureFromSensor();
    if ((l_temperature_C > 0) && (l_temperature_C < 70)) // !! NO race condition
    {
        // Do some safety critical work
    }
    else
    {
        // Manage temperature out of bounds (i.e., cool down or heat up)
    }
}
As you can see, the race condition from using a global variable is avoided here because the variable is local and cannot be changed outside of this function.
Member Scope Variables
Member scope variables, also known as class member variables, are declared within a class and are accessible by all member functions of that class. Their scope is tied to the lifetime of the object they are a member of.
You can think of the scope of member variables as similar to that of static global variables: instead of being limited to the compilation unit where they are declared, they are limited to the class they are part of. Race conditions on member variables are a real possibility, so take precautions to avoid them, such as proper locking or an improved architecture that avoids the race altogether.
Properly Scoping Pointers
Pointers are a powerful tool in C and C++, allowing you to efficiently work with data objects in your programs. However, naive usage of pointers can lead to significant problems, including hard to find bugs and difficult to maintain code.
In C and C++, pointers have their own lifetime, separate from the objects they point to. When a pointer to dynamically allocated memory goes out of scope without the memory being freed, the object remains in memory but is no longer reachable. This is a memory leak, and leaks cause a buildup of memory usage over time.
Smart Pointers
To prevent memory leaks and ensure that your programs are efficient and reliable, it is important to handle pointers with care. Modern C++ provides smart pointer types, which automatically manage the lifetime of objects and deallocate them when they are no longer needed. Using the smart pointer types std::shared_ptr and std::unique_ptr, you can be assured that when you create (and allocate) a pointer to an object, that object is constructed (and initialized, if following RAII principles) and the pointer is valid. Then, when that pointer goes out of scope, the object is destructed and the memory is deallocated.
#include <iostream>
#include <memory>

void PrintTemperature()
{
    // Create a unique pointer to a TemperatureSensor object
    std::unique_ptr<TemperatureSensor> pTS = std::make_unique<TemperatureSensor>();

    // Use the unique pointer within the scope of the current function
    std::cout << "Temperature: " << pTS->GetTemperature() << std::endl;

    // The unique pointer goes out of scope at the end of the function
    // and its dynamically allocated memory is automatically deallocated
}
When working with raw pointers, it’s critical to be aware of the lifetime of the objects being pointed to. For example, if the lifetime of the object ends before the pointer is deallocated, the pointer becomes a “dangling pointer”. This can cause undefined behavior, such as crashing the program or returning incorrect results. Smart pointers are typically a better choice and avoid this risk by managing the lifetime of the object themselves.
In conclusion, understanding and properly handling the scope of variables in C/C++ is a crucial aspect of writing clean, maintainable, and effective code. By becoming familiar with global, local, and member scopes, and considering the lifetime and accessibility of variables, you can make informed decisions about how to use variables in your programs.
Additionally, pointers require their own set of considerations when it comes to scope, and it is essential to handle them with care to prevent memory leaks and other issues.
By following best practices and being aware of the potential pitfalls, you can ensure that your programs are reliable, efficient, and easy to maintain.
CMake is a cross-platform build system that can be used to build, test, and package software projects. One of the most powerful features of CMake is its ability to manage dependencies, and using the CMake Package Manager (CPM) makes that a breeze. CPM is a package manager for CMake that allows you to easily download and use libraries and other dependencies in your project without having to manually manage them.
Using CPM effectively can greatly simplify the process of managing dependencies in your project. Here are a few tips to help you get the most out of CPM:
Start by adding CPM to your project. This can be done by adding the following line to the top of your CMakeLists.txt file (note that you will need to have a cmake/CPM.cmake in that path relative to your CMakeLists.txt file). You can find up-to-date versions of CPM.cmake here (documentation is here).
include(cmake/CPM.cmake)
Next, specify the dependencies you need for your project by adding CPMAddPackage commands. For example, to add the msgpack-c library, you would add the following stanza:
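The stanza below mirrors the msgpack-c block used in the full zRPC CMakeLists.txt later in this post:

```cmake
CPMAddPackage(
    NAME msgpack
    GIT_TAG cpp-4.1.1
    GITHUB_REPOSITORY "msgpack/msgpack-c"
    OPTIONS "MSGPACK_BUILD_DOCS OFF"
            "MSGPACK_CXX20 ON"
            "MSGPACK_USE_BOOST OFF"
)
```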
Once you have added all the dependencies you require, you can use them in your project by including the appropriate headers and linking against the appropriate libraries. Note that CPM will pull the dependency from the repository specified and then run CMake (if the project uses CMake to build). Because CMake is run for the project, the CMake targets are available for you to use. For example, to use the msgpack-c library in your project, you would add the following lines to your CMakeLists.txt file:
target_link_libraries(your_target msgpackc-cxx)
CPM also allows you to specify options for modules, such as disabling tests or building in a specific configuration. To specify options for a module, you can use the OPTIONS argument, as shown above.
When the dependency does not have a CMakeLists.txt file, CPM will still checkout the repository, but will not configure it. In that case, you are required to write your own CMake to perform the build as required. For example, the embedded web server called Mongoose from Cesanta does not provide a CMakeLists.txt to build it, but we can still pull it in like this (note the use of the CPM generated variable mongoose_SOURCE_DIR):
CPMAddPackage(
    NAME mongoose
    GIT_TAG 7.8
    GITHUB_REPOSITORY "cesanta/mongoose"
)

if (mongoose_ADDED)
    add_library(mongoose SHARED ${mongoose_SOURCE_DIR}/mongoose.c)
    target_include_directories(mongoose SYSTEM PUBLIC ${mongoose_SOURCE_DIR})
    target_compile_definitions(mongoose PUBLIC MG_ENABLE_OPENSSL)
    target_link_libraries(mongoose PUBLIC ssl crypto)
    install(TARGETS mongoose)
endif(mongoose_ADDED)
Add dependencies with CPM in the same CMakeLists.txt as the target that uses the dependency. If multiple targets use the same dependency, CPM will not pull multiple copies, rather it will use the copy already downloaded. By doing this, you ensure that if you ever refactor your CMake, or pull a CMakeLists.txt for a module, you get all the dependencies and don’t miss anything.
CPM is a powerful tool that can help simplify the process of managing dependencies in your project. By following the tips outlined in this blog, you can effectively use CPM to manage your dependencies, ensuring that your project is always up-to-date and easing the burden of keeping your dependencies up-to-date as well. With the right approach, CPM can help you save time and effort when managing your project dependencies, allowing you to focus on building and delivering your project.
CMake is an open-source, cross-platform build system that helps developers to manage their projects and build them on different platforms. It is widely used in the software development community, especially for C and C++ projects. In this blog post, we will explore how to use CMake effectively to manage your projects and improve your workflow as a software developer.
An Example CMakeLists.txt
First, let’s start with the basics of CMake. CMake uses a simple, human-readable language called CMakeLists.txt to describe the build process of a project. This file contains instructions on how to find and configure dependencies, set compiler flags, and create the final executable or library. Here is an example of how I typically define my CMake from my open-source ZeroMQ-based RPC library.
###############################################################################
# CMakeLists.txt for zRPC library
# - Creates a CMake target library named 'zRPC'
###############################################################################
cmake_minimum_required(VERSION 3.14 FATAL_ERROR)

# Define the project, including its name, version, and a brief description
project(zRPC
    VERSION "0.0.1"
    DESCRIPTION "0MQ-based RPC client/server library with MessagePack support"
)

# Define CMake options to control what targets are generated and made available to build
option(ZRPC_BUILD_TESTS "Enable build of unit test applications" ON)

# Setup default compiler flags
set(CMAKE_C_STANDARD 11)
set(CMAKE_C_STANDARD_REQUIRED ON)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(compile_options
    -pedantic-errors
    -pedantic
    -Wall
    -Wextra
    -Wconversion
    -Wsign-conversion
    -Wno-psabi
    -Werror
    CACHE INTERNAL "Compiler Options"
)

###############################################################################
# Bring in CPM
###############################################################################
include(cmake/CPM.cmake)

###############################################################################
# Bring in CPPZMQ header-only API
###############################################################################
CPMAddPackage(
    NAME cppzmq
    VERSION 4.8.1
    GITHUB_REPOSITORY "zeromq/cppzmq"
    OPTIONS "CPPZMQ_BUILD_TESTS OFF"
)

###############################################################################
# Bring in MSGPACK-C header-only API
###############################################################################
CPMAddPackage(
    NAME msgpack
    GIT_TAG cpp-4.1.1
    GITHUB_REPOSITORY "msgpack/msgpack-c"
    OPTIONS "MSGPACK_BUILD_DOCS OFF"
            "MSGPACK_CXX20 ON"
            "MSGPACK_USE_BOOST OFF"
)

###############################################################################
# Bring in C++ CRC header-only API
###############################################################################
CPMAddPackage(
    NAME CRCpp
    GIT_TAG release-1.1.0.0
    GITHUB_REPOSITORY "d-bahr/CRCpp"
)
if(CRCpp_ADDED)
    add_library(CRCpp INTERFACE)
    target_include_directories(CRCpp SYSTEM INTERFACE ${CRCpp_SOURCE_DIR}/inc)
endif(CRCpp_ADDED)

###############################################################################
# zRPC library
###############################################################################
add_library(${PROJECT_NAME} SHARED)
target_include_directories(${PROJECT_NAME} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
target_link_libraries(${PROJECT_NAME} PUBLIC cppzmq msgpackc-cxx CRCpp pthread)
target_sources(${PROJECT_NAME}
    PRIVATE src/zRPCClient.cpp
            src/zRPCServer.cpp
            src/zRPCPublisher.cpp
            src/zRPCSubscriber.cpp
    PUBLIC  include/zRPC.hpp
)
target_compile_options(${PROJECT_NAME} PUBLIC ${compile_options})

###############################################################################
# Test applications
###############################################################################
if (ZRPC_BUILD_TESTS)
    add_executable(client tests/client.cpp)
    target_link_libraries(client zRPC)
    target_compile_options(client PUBLIC ${compile_options})

    add_executable(server tests/server.cpp)
    target_link_libraries(server zRPC)
    target_compile_options(server PUBLIC ${compile_options})

    add_executable(publisher tests/publisher.cpp)
    target_link_libraries(publisher zRPC)
    target_compile_options(publisher PUBLIC ${compile_options})

    add_executable(subscriber tests/subscriber.cpp)
    target_link_libraries(subscriber zRPC)
    target_compile_options(subscriber PUBLIC ${compile_options})

    include(cmake/CodeCoverage.cmake)
    append_coverage_compiler_flags()
    add_executable(unittest tests/unit.cpp)
    target_link_libraries(unittest zRPC)
    target_compile_options(unittest PUBLIC ${compile_options})
    setup_target_for_coverage_gcovr_xml(
        NAME ${PROJECT_NAME}_coverage
        EXECUTABLE unittest
        DEPENDENCIES unittest
        BASE_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
        EXCLUDE "tests"
    )
endif(ZRPC_BUILD_TESTS)
Once you have your CMakeLists.txt file created, you can use the CMake command-line tool to generate the build files for a specific platform, such as Makefiles or Visual Studio project files. It is considered best practice to keep your build files separated from your source files, so I am in the habit of creating a “_bld” directory for that purpose.
mkdir _bld; cd _bld
cmake ..
CMake Targets
Targets are the basic building blocks of a CMake project. They represent the executable or library that is built as part of the project. Each target has a unique name and is associated with a set of source files, include directories, and libraries that are used to build it.
CMake also supports creating custom targets, which can be used to run arbitrary commands as part of the build process, such as running tests or generating documentation. You can specify properties for the target, like include directories, libraries, or compile options. You can also specify dependencies between the targets, so that when one target is built, it will automatically build any targets it depends on.
This is a really powerful feature that CMake provides because when I define my library target, I define what it needs to build such as the source files, includes, and external libraries. Then, when I define my executable, I only need to specify the library that it depends on — the requisite includes and other libraries that need to be linked in come automatically!
add_library(${PROJECT_NAME} SHARED)
target_include_directories(${PROJECT_NAME} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
target_link_libraries(${PROJECT_NAME} PUBLIC cppzmq msgpackc-cxx CRCpp pthread)
target_sources(${PROJECT_NAME} PRIVATE src/zRPCClient.cpp
src/zRPCServer.cpp
src/zRPCPublisher.cpp
src/zRPCSubscriber.cpp
PUBLIC include/zRPC.hpp
)
target_compile_options(${PROJECT_NAME} PUBLIC ${compile_options})
# The executable only needs to depend on zRPC now,
# not all the other dependencies and include directories
add_executable(client tests/client.cpp)
target_link_libraries(client zRPC)
Dependency Management
One of the most important aspects of CMake is its ability to help you find and use dependencies. CMake provides a number of built-in commands that can be used to find and configure dependencies, such as find_package and find_library. These commands can be used to locate and configure external libraries, such as Boost or OpenCV, and make them available to your project. This can save you a lot of time and effort compared to manually configuring dependencies for each platform, which is how it was done with plain Makefiles in the past.
In my example above, I use a tool called CPM, or the CMake Package Manager. This is an abstraction of the find_package and find_library methods available in the CMake language. One huge advantage of this tool is that it can not only be used to find and use local packages, but it can be used to pull packages at a specific version or tag from remote git repositories. You can see how I used this to pull in the cppzmq, msgpack, and CRCpp packages that my library depends on.
################################################################################
# Bring in CPM
################################################################################
include(cmake/CPM.cmake)

################################################################################
# Bring in CPPZMQ header-only API
################################################################################
CPMAddPackage(
  NAME cppzmq
  VERSION 4.8.1
  GITHUB_REPOSITORY "zeromq/cppzmq"
  OPTIONS "CPPZMQ_BUILD_TESTS OFF"
)

################################################################################
# Bring in MSGPACK-C header-only API
################################################################################
CPMAddPackage(
  NAME msgpack
  GIT_TAG cpp-4.1.1
  GITHUB_REPOSITORY "msgpack/msgpack-c"
  OPTIONS "MSGPACK_BUILD_DOCS OFF"
          "MSGPACK_CXX20 ON"
          "MSGPACK_USE_BOOST OFF"
)

################################################################################
# Bring in C++ CRC header-only API
################################################################################
CPMAddPackage(
  NAME CRCpp
  GIT_TAG release-1.1.0.0
  GITHUB_REPOSITORY "d-bahr/CRCpp"
)
if(CRCpp_ADDED)
add_library(CRCpp INTERFACE)
target_include_directories(CRCpp SYSTEM INTERFACE ${CRCpp_SOURCE_DIR}/inc)
endif(CRCpp_ADDED)
Cross-Platform Build Support
Another powerful feature of CMake is its ability to generate build files for multiple platforms. For example, you can use CMake to generate Makefiles for Linux, Visual Studio project files for Windows, or Xcode project files for macOS. This allows you to easily build and test your project on different platforms, without having to manually configure the build process for each one.
# Basic command to generate Makefiles (Linux/MacOS)
cmake -G "Unix Makefiles" ..
# Basic command to generate Visual Studio build files
cmake -G "Visual Studio 16" -A x64 ..
# More complex command from the VS Code CMake extension performing cross-compilation for ARM
/usr/bin/cmake --no-warn-unused-cli -DCMAKE_EXPORT_COMPILE_COMMANDS:BOOL=TRUE \
    -DCMAKE_BUILD_TYPE:STRING=Release \
    -DCMAKE_C_COMPILER:FILEPATH=/usr/bin/arm-linux-gnueabihf-gcc-10 \
    -DCMAKE_CXX_COMPILER:FILEPATH=/usr/bin/arm-linux-gnueabihf-g++-10 \
    -DARCH:STRING=armv7 -DENABLE_TESTS:STRING=ON \
    -S/workspaces/zRPC -B/workspaces/zRPC/_bld/ARM/Release \
    -G "Unix Makefiles"
Best Practices
To improve your workflow with CMake, there are a few best practices that you should follow:
Keep your CMakeLists.txt files small and organized. The build process of a project can become complex, so it’s important to keep your CMakeLists.txt files well-organized and easy to understand.
Use variables to define common build options, such as compiler flags or library paths. This makes it easy to change these options globally, without having to modify multiple parts of your CMakeLists.txt files.
Use include() and add_subdirectory() commands to split your project into smaller, more manageable parts. This makes it easier to understand the build process, and also makes it easy to reuse parts of your project in other projects. I have found that many, small CMake files are easier to manage and maintain than fewer, large CMake files.
Use the install() command to specify where the final executable or library should be installed. This makes it easy to distribute your project to other users.
Use the add_custom_command() and add_custom_target() commands to add custom build steps to your project. For example, you can use these commands to run a script that generates source code files or to run a test suite after building.
By following these best practices, you can effectively use CMake to manage your projects and improve your workflow as a software developer. CMake is a powerful tool that can save you a lot of time and effort, and by mastering its features, you can build and distribute your projects with ease.
If you’re a developer, coder, or software engineer and have not been hiding under a rock, then you’re probably familiar with Git. Git is a distributed version control system that helps developers track changes to their code and collaborate with others. While Git can be a bit complex (especially if used improperly), there are some easy commands you can learn to improve your workflow. In this blog post, we’ll walk you through 10 of the most essential Git commands.
TL;DR
The commands we address in this post are:
git config
git clone
git branch / git checkout
git pull
git push
git status / git add
git commit
git stash
git restore
git reset
It is assumed that you have basic knowledge of what terms like branch, commit, and checkout mean. If not, or if you really want to get into the nitty-gritty details, the official Git documentation book is a must-read!
Setup and Configuration
First things first – to get started with Git you need to get it installed and configured! Any Linux package manager today is going to have Git available:
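The original code block did not survive, so here is a sketch of the install step; the exact package-manager command depends on your distribution (Debian/Ubuntu shown, Fedora as a comment):

```shell
# Debian/Ubuntu and derivatives
sudo apt-get update && sudo apt-get install -y git

# Fedora/RHEL:
# sudo dnf install -y git

# Verify the installation
git --version
```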
If you happen to be on Windows or Mac, you can find a link to download Git here.
Once you have Git installed, it’s time to do some initial configuration using the command git config. Git will store your configuration in various configuration files, which are platform dependent. On Linux distributions, including WSL, it will set up a .gitconfig file in your user’s home directory.
There are two things that you really need to set up first:
Who you are
What editor you use
To tell git who you are so that it can tag your commits properly, use the following commands:
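The name and address below are placeholders; substitute your own. The editor setting (vim here is just an example) covers the second item from the list above:

```shell
# Tell Git who you are
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Tell Git which editor to use for commit messages
git config --global core.editor vim
```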
The --global option tells Git to store the configuration in the global configuration file, which is stored in your home directory. There are times when you might need to use a different email address for your commits in certain repositories. To set that up, you can run the following command from the Git repository in question:
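For example, to use a work address in just one repository (the repository name and address here are made up for illustration):

```shell
cd my-work-repo          # a hypothetical repository
# Note the absence of --global: this setting applies only to this repository
git config user.email "you@work-example.com"
```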
In order to work with repositories, there are a few primary commands you need to work with — clone, branch, checkout, pull, and push.
Cloning
git clone is the command you will use to pull a repository from a URL and create a copy of it on your machine. There are a couple protocols you can use to clone your repository: SSH or HTTPS. I always prefer to set up SSH keys and use SSH, but that is because in the past it wasn’t as easy to cache your HTTPS credentials for Git to use. Those details are beyond the scope of this post, but there is plenty of information about using SSH and HTTPS here.
To clone an existing repository from a URL, you would use the following command:
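The exact URL is not given in the original text, so the host and path below are placeholders; only the repository name (zRPC) comes from the surrounding prose:

```shell
# Substitute your repository's URL (HTTPS shown; SSH works the same way)
git clone https://<host>/<user>/zRPC.git
```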
This will reach out to the URL, ask for your HTTPS credentials (if anonymous access is not allowed), and then download the contents of the repository to a new folder entitled zRPC. You can then start to work on the code!
Sometimes a repository may refer to other Git repositories via Git submodules. When you clone a repository with submodules, you can save yourself a separate step to pull those by simply passing the --recursive option to git clone, like so:
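As above, the URL is a placeholder:

```shell
git clone --recursive https://<host>/<user>/zRPC.git
```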
When working with Git repositories, the most common workflow is to make all of your changes in a branch. You can see a list of branches using the git branch command and optionally see what branches are available on the remote server:
$ git branch # list only your local branches
$ git branch --all # list all branches (local and remote)
To checkout an existing branch, simply use the git checkout command:
$ git checkout amazing-new-feature
Switched to branch 'amazing-new-feature'
Your branch is up to date with 'origin/amazing-new-feature'.
You can also checkout directly to a new branch that does not exist by passing the -b option to git checkout:
$ git checkout -b fix-problem-with-writer
Switched to a new branch 'fix-problem-with-writer'
Interacting with the Remote Server
Let’s now assume that you have a new bug fix branch in your local repository, and have committed your changes to that branch (more on that later). It is time to understand how to interact with the remote server, so you can share your changes with others.
First, to be sure that you are working with the latest version of the code, you will need to pull the latest changes from the server using git pull. This is best done before you start a branch for work and periodically if other developers are working in the same branch.
$ git pull
This will reach out to the server and pull the latest changes to your current branch and merge those changes with your local changes. If you have files that have local changes and the pull would overwrite those, Git will notify you of the error and ask you to resolve it. If there are no conflicts, then you are up-to-date with the remote server.
Now that you are up-to-date, you can push your local commits to the remote server using git push:
$ git push
git push will work as long as the server has a branch that your local one is tracking. git status will tell you whether that is the case:
$ git status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
$ git status
On branch fix-problem-with-writer
nothing to commit, working tree clean
If you happen to be on a local branch with no remote tracking branch, you can use git push to create a remote tracking branch on the server:
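Using the branch name from the earlier examples:

```shell
# Create the branch on the remote and make the local branch track it
git push --set-upstream origin fix-problem-with-writer
# -u is the short form of --set-upstream
```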
Git makes it very easy to work with your source code. There are a few commands that are easy to use and make managing code changes super simple. Those commands are: status, add, commit, stash, and reset.
Staging Your Changes
To stage your changes in Git means to prepare them to be added in the next commit.
In order to view the files that have local changes, use the git status command:
$ git status
On branch fix-problem-with-writer
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: CMakeLists.txt
modified: README.md
no changes added to commit (use "git add" and/or "git commit -a")
Once you are ready to stage your changes, you can stage them using git add:
$ git add README.md
What if README.md has a lot of changes and you want to separate them into different commits? Just pass the -p option to git add to stage specific pieces (hunks) of the patch.
$ git add -p README.md
$ git status
On branch fix-problem-with-writer
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
modified: README.md
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: CMakeLists.txt
To commit these changes you have staged, you would use the git commit command:
$ git commit
Git commit will bring up an editor where you can fill out your commit message (for a good commit message format, read this; you can also read this for details on how to set up your Git command line to enforce a commit log format).
You can also amend your last commit if you forgot to include some changes or made a typo in your commit message. Simply stage your new changes, then issue:
$ git commit --amend
Storing Changes For Later
Git has a fantastic tool that allows you to take a bunch of changes you have made and save them for later! This feature is called git stash. Imagine you are making changes in your local branch, fixing bug after bug, when your manager calls you and informs you of a critical bug that they need you to fix immediately. You haven’t staged all your local changes, nor do you want to spend the time to work through them to write proper commit logs.
Enter git stash. git stash simply “stashes” all your local, unstaged changes off to the side, leaving you with a pristine branch. Now you can switch to a new branch for this critical bug fix, make the necessary changes, push to the server, and jump right back into what you were working on before. That sort of flow would look like this:
<working in fix-problem-with-writer>
$ git status
On branch fix-problem-with-writer
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: CMakeLists.txt
no changes added to commit (use "git add" and/or "git commit -a")
$ git stash
Saved working directory and index state WIP on fix-problem-with-writer
$ git status
On branch fix-problem-with-writer
nothing to commit, working tree clean
$ git checkout fix-problem-with-reader
Switched to branch 'fix-problem-with-reader'
<make necessary changes>
$ git add <changes>
$ git commit
$ git push
$ git checkout fix-problem-with-writer
Switched to branch 'fix-problem-with-writer'
$ git status
On branch fix-problem-with-writer
nothing to commit, working tree clean
$ git stash pop
On branch fix-problem-with-writer
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: CMakeLists.txt
no changes added to commit (use "git add" and/or "git commit -a")
Dropped refs/stash@{0} (5e3a53d36338f1906e871b52d3c97236f139b75e)
There are a couple of things to understand about git stash:
The stash is a stack – you can stash as much as you want on it and when you pop, you’ll get the last thing stashed
The stash will try to apply all the changes in the stash, and in the event of a conflict, will notify you of the conflict and leave the stash on the stack
I run into the second bullet quite often, but it isn’t hard to fix. When it happens, it is usually a simple conflict that is easily addressed manually. Resolve the conflicts in the file, unstage the changes that git stash pop staged, and then drop the last stash.
$ git stash pop
Auto-merging CMakeLists.txt
CONFLICT (content): Merge conflict in CMakeLists.txt
The stash entry is kept in case you need it again.
$ git status
On branch fix-problem-with-writer
Unmerged paths:
(use "git restore --staged <file>..." to unstage)
(use "git add <file>..." to mark resolution)
both modified: CMakeLists.txt
no changes added to commit (use "git add" and/or "git commit -a")
$ vim CMakeLists.txt # manually edit and resolve the conflicts
$ git status
On branch fix-problem-with-writer
Unmerged paths:
(use "git restore --staged <file>..." to unstage)
(use "git add <file>..." to mark resolution)
both modified: CMakeLists.txt
no changes added to commit (use "git add" and/or "git commit -a")
$ git restore --staged CMakeLists.txt
$ git stash drop
Dropped refs/stash@{0} (6c7d34915b38e5d75072eacee856fb427f916aa8)
Undoing Changes or Commits
There are often times when I need to undo the previous commit or I accidentally added the wrong file to my stage. When this happens it is useful to know that you have ways to back up and try again.
To remove files from your staging area, you would use the git restore command, like so:
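The code block here did not survive, so this is a sketch, reusing README.md from the staging example above:

```shell
# Unstage README.md; the working-tree changes are left alone
git restore --staged README.md
```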
This will remove the file from your staging area, but your changes will remain intact. You can also use restore to revert a file back to the version in the latest commit. To do this, simply omit the --staged option:
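Again with README.md as the example file; note that this discards your working-tree changes to that file:

```shell
# Revert README.md to the version in the latest commit
git restore README.md
```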
You can do similar things with the git reset command. One word of caution with the git reset command — you can truly and royally mess this up and lose lots of hard work — so be very mindful of your usage of this command!
git reset allows you to undo commits from your local history — as many as you would like! To do this, you would use the command like so:
$ git reset HEAD~n
# For example, to remove 3 commits
$ git reset HEAD~3
Unstaged changes after reset:
M CMakeLists.txt
M tests/unit.cpp
The HEAD~n indicates how many commits you want to back up, replacing n with the number you want. With this version of the command, all the changes present in those commits are placed in your working copy as unstaged changes.
You can also undo commits and discard the changes:
$ git reset --hard HEAD~n
# For example, to discard 1 commit
$ git reset --hard HEAD~1
HEAD is now at 345cd79 fix(writer): upgrade writer to v1.73
So there you have it – our top 10 Git commands to help improve your workflow. As we have mentioned before, when you take the time to understand your language and tools, you can make better decisions and avoid common pitfalls! Improving your Git workflow is a conscious decision that can save you a lot of time and headaches! Do you have a favorite command that we didn’t mention? Let us know in the comments below!