This is an opinionated post about five libraries we use in the production code of quasardb.
We of course use many more great libraries (for example Boost.ASIO, which is not listed here). These five may not be the most important, but I felt they deserved a special highlight, as they are not so well known or understood.
cppformat
Maybe you don't like what I will have to say about C++ metaprogramming or your project might have constraints that prevent you from using the latest C++ features.
But if there is one thing I am sure you dislike it's C++ iostreams. Or if you like iostreams, iostreams don't like you.
Need to format your numbers precisely and align your content? Don't worry: with C++ iostreams it will be painful to write, awful to maintain, and it will never work the way you want.
Of course, when I write never, it is a figure of speech, because I actually mean never ever, in any possible universe.
As a bonus, if at some point you need to use facets to display dates according to your specific needs you'll leak memory because C++ was so much better in 1998.
That's why a lot of C++ programmers still use printf.
Unfortunately, printf has a lot of problems of its own, especially on the safety side. It's very easy to get your format string wrong, and although compilers have become good at catching those errors, they still let a couple slip through, because nothing beats waking up to the music of a 0-day.
You might ask "what about Boost.Format?".
Boost.Format is safe, convenient (if you put aside the weird % syntax), full-featured, and also slow as Hell. I hope you didn't need that string to be displayed quickly, because the next ice age will come first.
If you need performance, use cppformat.
The only drawback of cppformat is that it throws on errors and cannot truncate if your buffer is too small. That part aside, it's very, very good: safe, fast, multi-platform, capable of zero-allocation formatting (i.e. you can format into a provided buffer), and compilation times are short too!
No more "%ull" format strings, just use "{}". And with some variadic template magic, variadic arguments are safe and fast:
#include <fmt/format.h>

template <typename... Args>
void my_log_function(const char * f, const Args & ... args)
{
    // something-something
    fmt::print(f, args...);
    // something-something
}
Intel Threading Building Blocks
Intel TBB is actually three libraries in one:
- A low-level toolkit containing micro-locks, scalable allocators and atomics
- A set of containers optimized for concurrent use
- A collection of parallel algorithms running in the provided scheduler
We currently use only the concurrent containers as we've found the scheduler of TBB isn't suitable for our needs (long story short: it created some latency issues).
The containers are well designed and thought through: not only are they simple to use, they are also safe by construction. Need an example?
struct bim
{
    explicit bim(int x) : bam(x) {}
    int bam;
};

// the code below can safely be executed by concurrent threads
tbb::concurrent_vector<bim> boom;
boom.emplace_back(3);
I personally like the collection of lightweight locks the most; I am really happy I didn't have to write them myself. Just be careful how you use them! Spin locks are a double-edged sword and can devastate performance when used improperly.
Boost.Fusion
Boost.Fusion is one of my favorite libraries. There are many ways to describe Boost.Fusion, one of them being: the compile-time introspection toolkit.
Here are a couple of examples of what Boost.Fusion can do for you:
- Iterate on the members of your class with no runtime cost
- Create a list of heterogeneous types (imagine a tuple from which you can easily remove the head)
- Create sets indexed by types
- And much more!
In my opinion, the most powerful feature of Boost.Fusion is the adapter. Adapters transform an arbitrary structure into a Boost.Fusion container.
See how we combine Boost.Fusion, cppformat and C++ 14 lambdas to print the members of a class:
// definitions
struct bam { int a; std::string b; char c; };
BOOST_FUSION_ADAPT_STRUCT(bam, (int, a)(std::string, b)(char, c));
// code
bam b;
// stuff happens
// print members because it is a matter of national security
boost::fusion::for_each(b, [](const auto & m){ fmt::print("{}\n", m); });
Boost.Spirit
I can't think of a single project where I didn't have to parse input at some point.
Boost.Spirit enables you to embed parsers into your C++ code directly. It also contains a generator and a lexer.
This is a very powerful feature, as a real parser is always better than regexes (which are often wrong) or custom parsers riddled with bugs and difficult to extend.
Last but not least, parsers generated by Spirit are extremely fast, often faster than sscanf and the like.
On our GitHub you can find this basic int parser/generator.
Since we switched to cppformat we use these functions much less, but our memcache-compatible layer, for example, is a Boost.Spirit parser.
Honorable mention: Hana
I'll give Hana an honorable mention because we currently do not use it in our code base. We build our software on Windows with Visual Studio, and Hana relies heavily on C++ 14 features that will probably not be available before Visual Studio 2045.
We'll therefore have to wait for a production-ready Windows version of clang to use this wonderful library. Meanwhile we will continue to have our custom classes and use Boost.MPL.
What is Hana? In a couple of words: Boost.Hana is your modern TMP toolkit. Boost.MPL was created at a time when variadic templates and generic lambdas didn't exist, which results in heavy use of macros and ultra-long compilation times.
Hana is lighter, easier to use, and faster to compile. Need more information? The CppCon 2014 presentation is available here.
My biggest surprise might come from the fact that it's written Hana and not Хана: this is probably the first TMP library I've used that wasn't written by a Russian.
Is that all?
There are so many C++ libraries out there that this list should hardly be considered exhaustive, but I hope you learned something new today and that these examples made you want to try something new.
Which library would you recommend?