
Ultimate Overloads

Generic algorithms in C++ operate by substituting specific types into templates that use features of the underlying types. Optimized implementations of algorithms can be selected when the type parameters satisfy certain constraints. A standard technique for this is overload selection via tag dispatching, but maybe you want a more abstract solution.

As usual, code examples are available in a github gist.

As shown previously, std::enable_if can be used to select an overload if types are assignable. The condition can be arbitrarily complex; taking advantage of decltype and std::declval you could even check whether a particular member function can be called and will return a particular type:

namespace details {
template <typename T,
          typename = typename std::enable_if<std::is_same<void, decltype(std::declval<T>().clear())>::value>::type>
void clear_ (T& value, int)
{
  value.clear();
}

template <typename T>
void clear_ (T& value, long)
{
  value = std::move(T{});
}
} // ns details

template <typename T>
void clear (T& value)
{
  return details::clear_(value, 0);
}

The problem is that you need something to distinguish and prioritize the different overloads, so when multiple candidates are possible the best one is selected. In this case we used int and long. The blog post Remastered enable_if covers the options well, but the final word is in a followup post Beating overload resolution into submission which presents a solution with unbounded extensibility. It’s the one at the bottom of that post, if you don’t want to read the whole thing (though you should).

All the heavy lifting was done in those posts. I want to add a little value because some things about the solution leave me discomforted:

  1. There’s a magic number 10 used to ground the template recursion, based on an assumption that no more than 10 overloads are necessary;
  2. There’s a distinct type that’s used for the “none-of-the-above” situation, which in my view is non-orthogonal;
  3. The example doesn’t work with clang: it compiles without warning, but prints “fizzbuzz” for every N.

Let’s deal with the last one first. Each overload is of the form:

template<unsigned N, EnableIf<is_multiple_of<N, 15>>...>
void print_fizzbuzz(choice<0>){ std::cout << "fizzbuzz\n"; }

but it’s clear the underlying std::enable_if hidden inside the EnableIf isn’t doing its job.

I reduced this to LLVM bug 18677, which was promptly marked as a duplicate of LLVM bug 11723, showing that the bug has been present for about two years, so it’s clearly not a priority. In fact, it’s mentioned in the fine print of Remastered. The upshot is: using a variadic parameter pack to make signatures distinct without requiring an instance is a neat trick, but if you want to be portable to clang you can’t use it.

Fortunately, the solution in Beating shows us how to get an unbounded number of unique types that form an overload hierarchy without caring whether the template parameter is unique, so all we need do is change the canonical form back to the more traditional default template parameter:

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 15>::value>::type>
void print_fizzbuzz(choice<0>){ std::cout << "fizzbuzz\n"; }

This has the added benefit (IMO) of removing the custom template alias and directly using only constructs whose meaning is well-defined and which live in the standard namespace.

The other two concerns are dealt with simply by inverting the priority indicated by the unsigned template parameter: let 0 be the lowest priority, which becomes what you use for the default case. To start the recursion, just keep track of the maximum overload count required for the particular function you need to support. Here’s the whole solution:

/* From: https://ideone.com/IB6tIR
 * Documented at: http://flamingdangerzone.com/cxx11/2013/03/11/overload-ranking.html
 * By: http://stackoverflow.com/users/500104/xeo
 * Modified: pabigot 20140201
 */

#include <type_traits>
#include <iostream>

template<int N, int M>
struct is_multiple_of : std::integral_constant<bool, N % M == 0>{};

template<unsigned I> struct overload_weight : overload_weight<I-1>{};
/* Lowest priority, use for default selection */
template<> struct overload_weight<0>{};

/* Helper constant, kept up to date as overload list is changed */
#define FIZZBUZZ_OVERLOAD_MAX_WEIGHT 10

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 15>::value>::type>
void print_fizzbuzz(overload_weight<10>){ std::cout << "fizzbuzz\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 21>::value>::type>
void print_fizzbuzz(overload_weight<9>){ std::cout << "fizzbeep\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 33>::value>::type>
void print_fizzbuzz(overload_weight<8>){ std::cout << "fizznarf\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 35>::value>::type>
void print_fizzbuzz(overload_weight<7>){ std::cout << "buzzbeep\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 55>::value>::type>
void print_fizzbuzz(overload_weight<6>){ std::cout << "buzznarf\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 77>::value>::type>
void print_fizzbuzz(overload_weight<5>){ std::cout << "beepnarf\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 3>::value>::type>
void print_fizzbuzz(overload_weight<4>){ std::cout << "fizz\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 5>::value>::type>
void print_fizzbuzz(overload_weight<3>){ std::cout << "buzz\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 7>::value>::type>
void print_fizzbuzz(overload_weight<2>){ std::cout << "beep\n"; }

template<unsigned N, typename = typename std::enable_if<is_multiple_of<N, 11>::value>::type>
void print_fizzbuzz(overload_weight<1>){ std::cout << "narf\n"; }

/* No conditional on default case */
template<unsigned N>
void print_fizzbuzz(overload_weight<0>){ std::cout << N << "\n"; }

template<unsigned N = 1>
void do_fizzbuzz(){
    print_fizzbuzz<N>(overload_weight<FIZZBUZZ_OVERLOAD_MAX_WEIGHT>{});
    do_fizzbuzz<N+1>();
}

template<>
void do_fizzbuzz<100>(){
    print_fizzbuzz<100>(overload_weight<FIZZBUZZ_OVERLOAD_MAX_WEIGHT>{});
}

int main(){
  do_fizzbuzz();
}

A trivial change, but this allows us to put the overload_weight template into a library and use it for multiple functions without having to change it if somebody adds more alternatives than were anticipated.
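
To see how this plays out away from fizzbuzz, here's a sketch (my own restatement, not taken from the referenced posts) of the clear example from the top of this post reworked to use overload_weight; with only two alternatives the entry point starts at weight 1:

#include <type_traits>
#include <utility>
#include <vector>

/* overload_weight as defined above */
template<unsigned I> struct overload_weight : overload_weight<I-1>{};
template<> struct overload_weight<0>{};

namespace details {

/* Preferred alternative: the type has a clear() member returning void */
template <typename T,
          typename = typename std::enable_if<std::is_same<void, decltype(std::declval<T>().clear())>::value>::type>
void clear_ (T& value, overload_weight<1>)
{
  value.clear();
}

/* Default alternative: assign a default-constructed value */
template <typename T>
void clear_ (T& value, overload_weight<0>)
{
  value = T{};
}

} // ns details

template <typename T>
void clear (T& value)
{
  details::clear_(value, overload_weight<1>{});
}

int main ()
{
  std::vector<int> v{1, 2, 3};
  int i = 42;
  clear(v);   // exact match on overload_weight<1>: uses v.clear()
  clear(i);   // SFINAE removes the weight-1 alternative: assignment fallback
}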

Diagnostics for Template Meta-Programming in C++

At C++Now 2012 Marshall Clow presented Generic Programming in C++: A Real World Example which addressed the addition of a hex/unhex pair of functions to Boost.Utility. A future post may address why I think the design for this specific feature took a wrong turn right at the start, but as a pedagogical example of intermediate C++ generic programming it’s worth viewing.

The design includes an algorithm which expects a template parameter to provide certain capabilities. The original solution used std::enable_if to disable the definition when those requirements were not met. Around 00:45:00, Stephan T. Lavavej pointed out that disabling unacceptable overloads with std::enable_if produces obscure errors uninterpretable by mortal users, because the compiler won’t find a match, and that a cleaner solution is an outer function with a static_assert that invokes an inner function implementing the algorithm. After a very inconvenient interruption, a comment from somebody I didn’t recognize at 00:47:40 pointed out that not all compilers terminate template expansion on the static_assert failure, so with this approach you get the static assert diagnostic followed by the no-matching-function diagnostics. The commenter went on to propose a workaround where the inner function takes a bool argument, constructed in the outer function from the std::enable_if calculation, which bypasses the body if the expansion is not valid. Unfortunately the audio is unintelligible and I can’t figure out what technique was being recommended (did he say “mpl::bool_”, “template bool”; is the flag a template parameter or a function parameter; …).

All that’s the topic of this post. You can get the full source for the examples at this github gist.

So let’s start with a simple example. Here’s a generic algorithm that assigns one value to another:

template <typename T1, typename T2>
void useit (T1& t1, T2 t2)
{
  t1 = t2;
}

Here’s code that invokes it, but with types that don’t satisfy the expectations of the algorithm:

int main ()
{
  std::wstring s1{L"wide"};
  std::string t1{"narrow"};
  useit(t1, s1);
}

And here’s the noise that GCC 4.9.0 produces in response:

no-check.cc: In instantiation of ‘void useit(T1&, T2) [with T1 = std::basic_string<char>; T2 = std::basic_string<wchar_t>]’:
no-check.cc:13:15:   required from here
no-check.cc:6:6: error: no match for ‘operator=’ (operand types are ‘std::basic_string<char>’ and ‘std::basic_string<wchar_t>’)
   t1 = t2;
      ^
no-check.cc:6:6: note: candidates are:
In file included from /usr/local/gcc-20140124/include/c++/4.9.0/string:52:0,
                 from no-check.cc:1:
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:554:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(const std::basic_string<_CharT, _Traits, _Alloc>&) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(const basic_string& __str) 
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:554:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘const std::basic_string<char>&’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:562:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(const _CharT*) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(const _CharT* __s) 
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:562:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘const char*’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:573:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(_CharT) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(_CharT __c) 
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:573:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘char’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:589:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(std::basic_string<_CharT, _Traits, _Alloc>&&) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(basic_string&& __str)
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:589:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘std::basic_string<char>&&’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:601:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(std::initializer_list<_Tp>) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(initializer_list<_CharT> __l)
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:601:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘std::initializer_list<char>’

That’s not something I want my users to have to cope with. Sure, it says what the problem is, but there’s a lot of detail that’s just distracting, and it’d be a lot worse with more complex types in a more complex algorithm.

So: Assume we take the original approach from the talk and disable the generic algorithm when the types are not assignable:

/* Provide the algorithm only if the expectations are met */
template <typename T1, typename T2,
          typename = typename std::enable_if<std::is_assignable<T1, T2>::value>::type>
void useit (T1& t1, T2 t2)
{
  t1 = t2;
}

What that produces is not really better:

ei-check.cc: In function ‘int main()’:
ei-check.cc:16:15: error: no matching function for call to ‘useit(std::string&, std::wstring&)’
   useit(t1, s1);
               ^
ei-check.cc:16:15: note: candidate is:
ei-check.cc:7:6: note: template<class T1, class T2, class> void useit(T1&, T2)
 void useit (T1& t1, T2 t2)
      ^
ei-check.cc:7:6: note:   template argument deduction/substitution failed:
ei-check.cc:6:11: error: no type named ‘type’ in ‘struct std::enable_if<false, void>’
           typename = typename std::enable_if<std::is_assignable<T1, T2>::value>::type>
           ^

The diagnostic is shorter, and somewhat helpful because the conditional is so simple, but still obscure and indirect.

What STL appeared to propose was to add a static assert that verifies the expectations on the parameters and emits a diagnostic when they aren’t satisfied, and then to delegate to the original version:

template <typename T1, typename T2>
void useit_ (T1& t1, T2 t2)
{
  t1 = t2;
}

/* Generate a diagnostic if the expectations aren't met, but defer the
 * mis-use to another function */
template <typename T1, typename T2>
void useit (T1& t1, T2 t2)
{
  static_assert(template_types_ok::value, "cannot assign T2 to T1");
  useit_(t1, t2);
}

This is the same technique addressed in this blog post. And, just as the anonymous commenter in the video warned, the static assert failure didn’t prevent gcc from going on to produce the non-helpful cascading SFINAE errors:

sa-check.cc: In function ‘void useit(T1&, T2)’:
sa-check.cc:15:17: error: ‘template_types_ok’ has not been declared
   static_assert(template_types_ok::value, "cannot assign T2 to T1");
                 ^
sa-check.cc: In instantiation of ‘void useit_(T1&, T2) [with T1 = std::basic_string<char>; T2 = std::basic_string<wchar_t>]’:
sa-check.cc:16:16:   required from ‘void useit(T1&, T2) [with T1 = std::basic_string<char>; T2 = std::basic_string<wchar_t>]’
sa-check.cc:23:15:   required from here
sa-check.cc:7:6: error: no match for ‘operator=’ (operand types are ‘std::basic_string<char>’ and ‘std::basic_string<wchar_t>’)
   t1 = t2;
      ^
sa-check.cc:7:6: note: candidates are:
In file included from /usr/local/gcc-20140124/include/c++/4.9.0/string:52:0,
                 from sa-check.cc:1:
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:554:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(const std::basic_string<_CharT, _Traits, _Alloc>&) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(const basic_string& __str) 
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:554:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘const std::basic_string<char>&’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:562:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(const _CharT*) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(const _CharT* __s) 
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:562:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘const char*’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:573:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(_CharT) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(_CharT __c) 
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:573:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘char’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:589:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(std::basic_string<_CharT, _Traits, _Alloc>&&) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(basic_string&& __str)
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:589:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘std::basic_string<char>&&’
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:601:7: note: std::basic_string<_CharT, _Traits, _Alloc>& std::basic_string<_CharT, _Traits, _Alloc>::operator=(std::initializer_list<_Tp>) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]
       operator=(initializer_list<_CharT> __l)
       ^
/usr/local/gcc-20140124/include/c++/4.9.0/bits/basic_string.h:601:7: note:   no known conversion for argument 1 from ‘std::basic_string<wchar_t>’ to ‘std::initializer_list<char>’

Not better.

I don’t know what the unrecognized commenter intended as the solution, but my reconstruction is the following: put the static assert in the user-called function, then delegate to a hidden overloaded implementation that provides the working algorithm only when the constraints are met, and provides a stub with no errors when they aren’t:

/* Provide the algorithm only if the expectations are met */
template <typename T1, typename T2,
          typename = typename std::enable_if<std::is_assignable<T1, T2>::value>::type>
void useit_ (T1& t1, T2 t2, std::true_type template_types_ok)
{
  t1 = t2;
}

/* Provide a no-op that doesn't produce errors when the expectations
 * are not met. */
template <typename T1, typename T2>
void useit_ (T1&, T2, std::false_type template_types_ok)
{ }

/* Bleat in distress when the template types don't satisfy
 * expectations, but unconditionally delegate to an implementation
 * that won't produce compiler errors in either case. */
template <typename T1, typename T2>
void useit (T1& t1, T2 t2)
{
  using template_types_ok = std::is_assignable<T1, T2>;
  static_assert(template_types_ok::value, "cannot assign T2 to T1");
  useit_(t1, t2, typename template_types_ok::type());
}

Walking through the last function: the using declaration aliases template_types_ok to a type equivalent to either std::true_type or std::false_type, depending on whether the algorithm requirements are satisfied by the type parameters. The static_assert then checks satisfiability at compile time and provides a user-level description of any failed expectation. Finally, the call to useit_ passes an instance of the satisfiability type to select an implementation that won’t have compile-time errors. That one of the implementations wouldn’t work at runtime is irrelevant, because it’s selected only when the static assert has already prevented compilation from succeeding.

Here’s what this tells the user:

sa-helper-check.cc: In instantiation of ‘void useit(T1&, T2) [with T1 = std::basic_string<char>; T2 = std::basic_string<wchar_t>]’:
sa-helper-check.cc:33:15:   required from here
sa-helper-check.cc:25:3: error: static assertion failed: cannot assign T2 to T1
   static_assert(template_types_ok::value, "cannot assign T2 to T1");
   ^

Now that’s what I want my users to see if they misuse my algorithms: a clear description of what they did wrong so they can fix things.

C++ Unit Test Frameworks

One of the first decisions in a new project is which unit testing framework to use. Traditionally I’ve used CppUnit, so I pulled down the current release and started working.

This left me unhappy as the first test produced this compile-time error:

/usr/local/gcc-20140104/include/cppunit/TestAssert.h:109:6: note:   template argument deduction/substitution failed:
cpput_eval.cc:13:5: note:   deduced conflicting types for parameter ‘const T’ (‘int’ and ‘std::basic_string::size_type {aka long unsigned int}’)
     CPPUNIT_ASSERT_EQUAL(4, str.size());

For a couple of days I worked around this by casting the integer literal to a type that satisfied the calls, but eventually I got fed up.

So I looked for alternatives. I found fault with the first two choices, but joy with the third. Herein are some examples with discussion of what they reveal about the choices. The files are available as a github gist.

The Test Criteria

Three specific assertions were found to cause trouble with various solutions, so the examples used below show all of them:

  • Comparing a std::string size() with an integer literal;
  • Pointer-equality testing for char * values;
  • Comparing a floating point result to a specific absolute accuracy

In addition, these criteria are relevant:

  • Verbosity: how much boilerplate do you have to add that isn’t really part of your test?
  • Installation overhead: is it easy to build the library for specific compiler flags or is the assumption that you build it once and share it? This matters when playing with advanced language feature flags such as -std=c++1y, which can affect linking test cases together.
  • Assertion levels: when an assertion fails, can you control whether the test keeps going or aborts (e.g., when subsequent assertions would be invalid if the first fails)?
  • Assertion comparisons: can you express specific relations (not equal, greater than) or is it mostly a true/false capability?

CppUnit

Originally on SourceForge, this project has developed new life at freedesktop.org.

CppUnit comes with a standard configure/make/make install build process which installs the headers and the support library into the proper directories within a toolchain prefix. You need to provide a main routine to invoke the test driver.
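
A minimal driver looks something like this (a sketch; not necessarily the exact one used for the output shown later):

#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/extensions/TestFactoryRegistry.h>

int main ()
{
  CppUnit::TextUi::TestRunner runner;
  /* Run every suite registered via CPPUNIT_TEST_SUITE_REGISTRATION */
  runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
  return runner.run() ? 0 : 1;
}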

CppUnit provides only one level of assertion: the test case aborts when an assertion fails. It also has limited ability to express specific requirements (for example, there is CPPUNIT_ASSERT_EQUAL(x,y) but no CPPUNIT_ASSERT_NOT_EQUAL(x,y)).

Here’s what the tests look like with CppUnit:

#include <cppunit/extensions/HelperMacros.h>
#include <string>
#include <cmath>

class testStringStuff : public CppUnit::TestFixture
{
protected:
  void testBasic ()
  {
    const char * const cstr{"no\0no\0"};
    const std::string str("text");
    CPPUNIT_ASSERT_EQUAL(std::size_t{4}, str.size());
    CPPUNIT_ASSERT(cstr != (cstr+3));
  }

private:
  CPPUNIT_TEST_SUITE(testStringStuff);
  CPPUNIT_TEST(testBasic);
  CPPUNIT_TEST_SUITE_END();
};

CPPUNIT_TEST_SUITE_REGISTRATION(testStringStuff);

class testFloatStuff : public CppUnit::TestFixture
{
protected:
  void testBasic ()
  {
    CPPUNIT_ASSERT_DOUBLES_EQUAL(11.045, std::sqrt(122.0), 0.001);
  }

private:
  CPPUNIT_TEST_SUITE(testFloatStuff);
  CPPUNIT_TEST(testBasic);
  CPPUNIT_TEST_SUITE_END();
};

CPPUNIT_TEST_SUITE_REGISTRATION(testFloatStuff);

There’s a lot of overhead, what with the need to define and register the suites, though it didn’t really bother me until I saw what other frameworks require. And I did have to do that irritating explicit cast to get the size comparison to compile.

The output is terse and all tests pass:

testFloatStuff::testBasic : OK
testStringStuff::testBasic : OK
OK (2)

Boost.Test

Boost is a federated collection of highly-coupled but independently maintained C++ libraries covering a wide range of capabilities. It includes Boost.Test, the unit test framework used by boost developers themselves.

Boost.Test can be used as a header-only solution, but I happened to install it in library form. This gave me a default main routine for invocation, though I did have to have a separate object file with preprocessor defines which incorporated it into the executable.
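
That separate object file is little more than a couple of preprocessor defines (a sketch; the file name is invented, and BOOST_TEST_DYN_LINK assumes linking against the shared unit_test_framework library):

/* butf_main.cc: pulls in the library-provided main */
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MAIN
#include <boost/test/unit_test.hpp>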

Boost.Test also supports three levels of assertion. WARN is a diagnostic only; CHECK marks the test as failing but continues; and REQUIRE marks the test as failing and stops the test. There are also a wide variety of conditions (EQUAL, NE, GT, …), each of which is supported for each level.
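
To illustrate the levels (an invented test case, though the macro names are the real ones):

#include <boost/test/unit_test.hpp>
#include <vector>

BOOST_AUTO_TEST_CASE(LevelsDemo)
{
  std::vector<int> v{1, 2, 3};
  BOOST_WARN_EQUAL(3u, v.size());    // diagnostic only; cannot fail the test
  BOOST_CHECK_EQUAL(3u, v.size());   // a failure is recorded, but execution continues
  BOOST_REQUIRE_GE(v.size(), 1u);    // a failure stops this test case
  BOOST_CHECK_EQUAL(1, v.front());
}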

Here’s what the tests look like with Boost.Test:

#include <boost/test/unit_test.hpp>
#include <string>
#include <cmath>

BOOST_AUTO_TEST_CASE(StringStuffBasic)
{
  const std::string str("text");
  float fa[2];
  const char * const cstr{"no\0no\0"};
  BOOST_CHECK_EQUAL(4, str.size());
  BOOST_CHECK_NE(fa, fa+1);
  BOOST_CHECK_NE(cstr, cstr+3);
}

BOOST_AUTO_TEST_CASE(FloatStuffBasic)
{
  BOOST_CHECK_CLOSE(11.045, std::sqrt(122), 0.001);
}

This is much more terse than CppUnit, and seems promising. Here’s what happens when it runs:

Running 2 test cases...
butf_eval.cc(10): error in "StringStuffBasic": check cstr != cstr+3 failed [no == no]
butf_eval.cc(15): error in "FloatStuffBasic": difference{0.0032685%} between 11.045{11.045} and std::sqrt(122){11.045361017187261} exceeds 0.001%

*** 2 failures detected in test suite "Master Test Suite"

Um. Whoops?

Boost.Test silently treats the char* pointers as though they were strings, and does a string comparison instead of a pointer comparison. Which is not what I asked for, and not what BOOST_CHECK_NE(x,y) will do with other pointer types.

Boost.Test also does not provide a mechanism for absolute difference in floating point comparison. Instead, it provides two relative solutions: BOOST_CHECK_CLOSE(v1,v2,pct) checks that v1 and v2 are no more than pct percent different (e.g. 10 would be 10% different), while BOOST_CHECK_CLOSE_FRACTION(v1,v2,frac) does the same thing but using fractions of a unit (e.g. 0.1 would be 10% different). Now, you can argue that there’s value in a relative error calculation. But to have two of them, and not have an absolute error check—that doesn’t work for me.
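
For reference, here is how the two relative forms read (a sketch; the tolerance values are just illustrative):

#include <boost/test/unit_test.hpp>
#include <cmath>

BOOST_AUTO_TEST_CASE(CloseDemo)
{
  /* The same 0.01% relative tolerance expressed both ways */
  BOOST_CHECK_CLOSE(11.0454, std::sqrt(122.0), 0.01);             // tolerance in percent
  BOOST_CHECK_CLOSE_FRACTION(11.0454, std::sqrt(122.0), 0.0001);  // tolerance as a fraction
}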

Boost.Test also has a few other issues. The released version has not been updated for four years, while the development version used internally by the Boost project has many changes that are expected to be released at some point in the future. From comments on the boost developers mailing list, the documentation is generally agreed to be difficult to use, which has prompted a rewritten version (and, honestly, the rewrite is what I had to use to try it out).

All in all, I don’t feel comfortable depending on Boost.Test.

Google Test

Google Test is another cross-platform unit test framework; it has a companion mocking framework that supports unit testing of capabilities that are not stand-alone.

The code comes with configure/make/install support, but also provides a single-file interface allowing it to be built easily within the project being tested with the same compiler and options as the code being tested. You do need a separate main routine, but it’s a two-liner to initialize the tests and run them all.
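
The main routine is essentially the two-liner just mentioned:

#include <gtest/gtest.h>

int main (int argc, char* argv[])
{
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}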

Google Test supports two levels of assertion: failure of an ASSERT aborts the test, while failure of EXPECT fails the test but continues to check additional conditions. It also provides a wide variety of conditions.
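
A quick sketch of the distinction (the test name is invented):

#include <gtest/gtest.h>
#include <vector>

TEST(Levels, Demo)
{
  std::vector<int> v{1, 2, 3};
  EXPECT_EQ(3u, v.size());   // a failure is recorded, but the test keeps running
  ASSERT_FALSE(v.empty());   // a failure aborts the test; the next line would be unsafe otherwise
  EXPECT_EQ(1, v.front());
}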

Here’s what the tests look like with Google Test:

#include <gtest/gtest.h>
#include <string>
#include <cmath>

TEST(StringStuff, Basic)
{
  const std::string str("text");
  const char * const cstr{"no\0no\0"};
  ASSERT_EQ(4, str.size());
  ASSERT_NE(cstr, cstr+3);
}

TEST(FloatStuff, Basic)
{
  ASSERT_NEAR(11.045, std::sqrt(122.0), 0.001);
}

Even more terse than Boost.Test, because the macros are TEST and ASSERT_EQ rather than something like GTEST_TEST or GTEST_ASSERT_EQ. To avoid conflict with user code I normally expect framework tools to provide their interfaces within a namespace (literally for C++, or by using a standard identifier prefix where that wouldn’t work). Both CppUnit and Boost.Test do this for their macros, but for unit test code that doesn’t get incorporated into an application I think it’s ok that this isn’t done.

And here’s what you get when running it:

[==========] Running 2 tests from 2 test cases.
[----------] Global test environment set-up.
[----------] 1 test from StringStuff
[ RUN      ] StringStuff.Basic
[       OK ] StringStuff.Basic (0 ms)
[----------] 1 test from StringStuff (0 ms total)

[----------] 1 test from FloatStuff
[ RUN      ] FloatStuff.Basic
[       OK ] FloatStuff.Basic (0 ms)
[----------] 1 test from FloatStuff (0 ms total)

[----------] Global test environment tear-down
[==========] 2 tests from 2 test cases ran. (0 ms total)
[  PASSED  ] 2 tests.

A little more verbose than I’m accustomed to from CppUnit, but it’s tolerable. The most important bit is the last line tells you the overall success, so you only need to scroll up if something didn’t work.

Conclusions

Summarizing the individual tests for each criterion, with the preferable answers judged from my subjective perspective:

Feature                        CppUnit     Boost.Test                 Google Test
Handles size_t/int compares    no          yes                        yes
Handles char* compares         yes         no                         yes
Handles absolute float delta   yes         no                         yes
Verbosity                      high        low                        low
Installation                   toolchain   header-only or toolchain   project
Assertion Levels               one         three                      two
Assertion Conditions           few         every                      many

So I’m now happily using Google Test as the unit test framework for new C++ projects.

In fact, I’ve also started to use Google Mock, which turns out to be even more cool and eliminates the biggest limitation on unit testing: what to do if the routine being tested normally needs a heavyweight and uncontrollable supporting infrastructure to satisfy its API needs. But I can’t really add anything beyond what you can find on their wiki, so I’ll leave it at that.

C++11 and integer rotate

About two months ago when I was starting to catch up on modern C++, I ran across John Regehr’s discussion of portable C rotate. From the initial code:

uint32_t rotl32a (uint32_t x, uint32_t n)
{
  return (x<<n) | (x>>(32-n));
}

he evolves the solution to:

uint32_t rotl32c (uint32_t x, uint32_t n)
{
  assert (n<32);
  return (x<<n) | (x>>(-n&31));
}

which generates optimal code on x86 and avoids all undefined behavior. See the original post for full details.

In C++ I’d like to generalize this to any type that supports shift operations. Doing so requires understanding exactly where the original version risked undefined behavior, and where the final version does once it’s been generalized beyond uint32_t.

So here are the gotchas, with reference to the ISO/IEC 14882:2011(E) section and paragraph that discusses them.

  • Integral promotion (4.5) is performed on both shift operands (5.8#1).
  • Shift counts greater than or equal to the number of bits in the promoted left operand produce undefined behavior (5.8#1). Hence the assert in the final version, and the trickery of -n&31, about which more later (see the short example just after this list).
  • Left shifts of negative signed values are undefined, and right shifts of negative signed values are implementation-defined (5.8#2,3). Left shifts on signed types with non-negative values are undefined if the shifted value exceeds the maximum representable value in the unsigned version of the result type (colloquially, if a 1 bit is shifted out of the sign bit).
  • Integral promotion is performed on the operand to unary minus, and the result of the operation differs depending on whether the operand is unsigned (5.3.2#1).
  • Integer types might use a representation other than 2’s complement (3.9.1#7).
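
Before looking at the generalized code, here's a minimal demonstration of why -n&31 works (assuming a typical platform where uint32_t does not promote to a wider signed type):

#include <cassert>
#include <cstdint>

int main ()
{
  /* Unsigned negation is defined as subtraction from 2^32, so masking with 31
   * yields the complementary shift count (32 - n) % 32 without ever forming 32. */
  std::uint32_t n = 3;
  assert(((-n) & 31) == 32 - n);   // 29: x >> 29 pairs with x << 3
  n = 0;
  assert(((-n) & 31) == 0);        // x >> 0 is well-defined, unlike x >> 32
}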

After all this is taken into account, one ends up with the following (see complete code in a test harness at this gist):

template <typename T>
T
rotl (T v, unsigned int b)
{
  static_assert(std::is_integral<T>::value, "rotate of non-integral type");
  static_assert(! std::is_signed<T>::value, "rotate of signed type");
  constexpr unsigned int num_bits {std::numeric_limits<T>::digits};
  static_assert(0 == (num_bits & (num_bits - 1)), "rotate value bit length not power of two");
  constexpr unsigned int count_mask {num_bits - 1};
  const unsigned int mb {b & count_mask};
  using promoted_type = typename std::common_type<int, T>::type;
  using unsigned_promoted_type = typename std::make_unsigned<promoted_type>::type;
  return ((unsigned_promoted_type{v} << mb)
          | (unsigned_promoted_type{v} >> (-mb & count_mask)));
}

Some commentary:

  • Line 5 is a compile-time verification that the type is not a user-defined type, for which some of the other assumptions might not be valid.
  • Line 6 protects against rotation of signed values, which are known to risk undefined behavior.
  • Line 7 uses a standard-defined trait to find the number of bits in the representation of T.
  • Line 8 makes sure we’re not dealing with some weird type where an upcoming mask operation won’t produce the right answer (e.g., the MSPGCC uint20_t type).
  • Lines 9 and 10 use a bit mask to reduce the shift value to something for which it’s known the operation is defined; i.e. this function provides defined rotate behavior beyond what is mandated by C++ for shift.
  • Lines 11 and 12 deal with the possibility that the result of integral promotion of the (verified unsigned) type T might produce a signed type for which shift operations could produce undefined behavior.
  • Lines 13 and 14 implement the rotate now that all the preconditions have been validated.

And, of course, the template when instantiated for uint32_t produces the same optimal code as the original.
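
For example, assuming the rotl template above is in scope (the expected values are hand-computed rotations):

#include <cassert>
#include <cstdint>

int main ()
{
  assert(rotl(std::uint32_t{0x80000001u}, 4) == 0x18u);        // high bits wrap around to the low end
  assert(rotl(std::uint8_t{0x81u}, 1) == 0x03u);               // mask and promotion handled per type
  assert(rotl(std::uint32_t{0xDEADBEEFu}, 0) == 0xDEADBEEFu);  // a zero shift stays well-defined
}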

In meta-commentary, the addition of static_assert in C++11 is an awesome enhancement, which can be combined with std::enable_if for some neat template metaprogramming techniques that still produce comprehensible user diagnostics. The traits that provide implementation information on standard types are also a great enhancement for portable code. And the new using type alias capability makes things more readable than the equivalent typedef approach.

BTW: Somebody might suggest that the second argument be unsigned char b, since it’s reasonable to assume the shift count will be less than 256 for any integral type (though not necessarily for user-defined types). One reason not to do this is the classic argument that int is the native word size and there’s unlikely to be any benefit in using a smaller type. A second is more subtle and interesting:

  • Per 4.5#1, a prvalue of type unsigned char can promote to a prvalue of type int if representation preconditions are satisfied.
  • Per 5.3.1#8 the negation of an unsigned quantity is computed by subtracting its value from 2^n, where n is the number of bits in the promoted operand. The implication is that the negation of a signed quantity is computed by subtracting its value from zero.
  • While the representation of -1 in (for example) 16-bit 2’s complement is 0xFFFF, its representation in 16-bit 1’s complement is 0xFFFE and its representation in 16-bit sign-magnitude is 0x8001.

What this means is -mb&count_mask will not give you the right answer in a non-2’s-complement implementation if mb isn’t at least the same rank (4.13) as int. It also means that -mb does not produce the same value as 0-mb for all built-in integral types and processing environments.

Interesting stuff, IMO.