Exploring New C++ and MFC Features in Visual Studio 2010

Visual Studio 2010 offers huge benefits for C++ developers, from new C++0x features to MSBuild integration to a revived MFC Application Wizard. Join us for a tour of these new Visual C++ features.


    Sumit Kumar

    Visual Studio 2010 presents huge benefits for C++ developers. From the ability to employ the new features offered by Windows 7 to the enhanced productivity features for working with large code bases, there is something new and improved for just about every C++ developer.

    In this article, I will explain how Microsoft has addressed some of the broad problems faced by C++ developers. Specifically, Visual Studio 2010 enables a more modern programming model by adding core language features from the upcoming C++0x standard, and by overhauling the standard library to take advantage of the new language features. There are new parallel programming libraries and tools to simplify the creation of parallel programs. You'll also find enhanced overall performance and developer productivity thanks to IntelliSense and code-understanding features that scale to large code bases. And you'll benefit from the improved performance of libraries and other features across design time, build time, compile time and link time.

    Visual Studio 2010 migrates the build system to MSBuild to make it more customizable and to support native multi-targeting. And enhancements in the MFC library harness the power of new Windows 7 APIs, enabling you to write great Windows 7 applications.

    Let's take a closer look at these C++-focused advancements in Visual Studio 2010.


    C++0x Core Language Features

    The next C++ standard is inching closer to being finalized. To help you get started with the C++0x extensions, the Visual C++ compiler in Visual Studio 2010 enables six C++0x core language features: lambda expressions, the auto keyword, rvalue references, static_assert, nullptr and decltype.

    Lambda expressions implicitly define and construct unnamed function objects. Lambdas provide a lightweight, natural syntax to define function objects where they are used, without incurring performance overhead.

    Function objects are a very powerful way to customize the behavior of Standard Template Library (STL) algorithms, and can encapsulate both code and data (unlike plain functions). But function objects are inconvenient to define because of the need to write entire classes. Moreover, they are not defined in the place in your source code where you're trying to use them, and the non-locality makes them more difficult to use. Libraries have attempted to mitigate some of the problems of verbosity and non-locality, but don't offer much help because the syntax becomes complicated and the compiler errors are not very friendly. Using function objects from libraries is also less efficient, since the function objects defined as data members are not inlined.

    Lambda expressions address these problems. The following code snippet shows a lambda expression used in a program to remove integers between variables x and y from a vector of integers.


     v.erase(remove_if(v.begin(), v.end(),
         [x, y](int n) { return x < n && n < y; }),
         v.end());


    The second line shows the lambda expression. Square brackets, called the lambda-introducer, indicate the definition of a lambda expression. This lambda takes integer n as a parameter and the lambda-generated function object has the data members x and y. Compare that to an equivalent handwritten function object to get an appreciation of the convenience and time-saving lambdas provide:

class LambdaFunctor {
public:
     LambdaFunctor(int a, int b) : m_a(a), m_b(b) { }
     bool operator()(int n) const {
         return m_a < n && n < m_b; }
private:
     int m_a;
     int m_b;
};

    v.erase(remove_if(v.begin(), v.end(),
        LambdaFunctor(x, y)), v.end());

    The auto keyword has always existed in C++, but it was rarely used because it provided no additional value. C++0x repurposes this keyword to automatically determine the type of a variable from its initializer. Auto reduces verbosity and helps important code stand out. It avoids type mismatches and truncation errors. It also helps make code more generic by allowing templates to be written that care less about the types of intermediate expressions, and it deals effectively with types that cannot be written out, such as the types of lambdas. This code shows how auto saves you from typing the template type in the for loop iterating over a vector:

vector<int> v;
    for (auto i = v.begin(); i != v.end(); ++i) {
    // code
    }


    Rvalue references are a new reference type introduced in C++0x that help solve the problem of unnecessary copying and enable perfect forwarding. When the right-hand side of an assignment is an rvalue, then the left-hand side object can steal resources from the right-hand side object rather than performing a separate allocation, thus enabling move semantics.
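    To make move semantics concrete, here is a minimal sketch (not from the article; the class name Buffer and its members are illustrative) of a type whose move constructor steals a heap buffer from an rvalue source instead of copying it:

```cpp
#include <algorithm>
#include <cstddef>

class Buffer {
public:
    explicit Buffer(std::size_t n) : m_size(n), m_data(new int[n]()) { }
    // Copy constructor: allocates new storage and copies every element.
    Buffer(const Buffer& other)
        : m_size(other.m_size), m_data(new int[other.m_size]) {
        std::copy(other.m_data, other.m_data + m_size, m_data);
    }
    // Move constructor: steals the pointer from the rvalue source;
    // no allocation and no element copies are performed.
    Buffer(Buffer&& other) : m_size(other.m_size), m_data(other.m_data) {
        other.m_data = 0;
        other.m_size = 0;
    }
    ~Buffer() { delete[] m_data; }
    std::size_t size() const { return m_size; }
private:
    std::size_t m_size;
    int* m_data;
};
```

    When the source is an rvalue (for example, a temporary, or an lvalue cast with std::move), overload resolution picks the move constructor and the source is left empty; otherwise the copy constructor runs.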

    Perfect forwarding allows you to write a single function template that takes n arbitrary arguments and forwards them transparently to another arbitrary function. The nature of the argument (modifiable, const, lvalue or rvalue) is preserved in this forwarding process.

    template <typename T1, typename T2>
    void functionA(T1&& t1, T2&& t2) {
        functionB(std::forward<T1>(t1), std::forward<T2>(t2));
    }

    A detailed explanation of rvalue references is out of scope for this article, so check the MSDN documentation for more information.

    Static_assert allows testing assertions at compile time rather than at execution time. It lets you trigger compiler errors with custom error messages that are easy to read. Static_assert is especially useful for validating template parameters. For example, compiling the following code will give the error “error C2338: custom assert: n should be less than 5”:

template <int n> struct StructA {
     static_assert(n < 5, "custom assert: n should be less than 5");
};

int _tmain(int argc, _TCHAR* argv[]) {
     StructA<4> s1; // OK
     StructA<6> s2; // triggers error C2338
     return 0;
}


    Nullptr adds type safety to null pointers and is closely related to rvalue references. The macro NULL (defined as 0) and the literal 0 are commonly used as the null pointer. So far that has not been a problem, but they don't work very well in C++0x due to potential problems in perfect forwarding. So the nullptr keyword has been introduced particularly to avoid mysterious failures in perfect forwarding functions.

    Nullptr is a constant of type nullptr_t, which is convertible to any pointer type, but not to other types like int or char. In addition to being used in perfect forwarding functions, nullptr can be used anywhere the macro NULL was used as a null pointer.
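    A small sketch (not from the article; the function name f is illustrative) showing why nullptr is safer than 0: with overloaded functions, the literal 0 selects the integer overload, while nullptr unambiguously selects the pointer overload:

```cpp
// Two overloads: one takes an int, one takes a pointer.
int f(int)   { return 1; }  // selected for f(0): 0 is first and foremost an int
int f(char*) { return 2; }  // selected for f(nullptr): nullptr converts only to pointer types
```

    Here f(0) calls the int overload and f(nullptr) calls the char* overload, which is exactly the behavior that makes nullptr reliable in forwarding code.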

    A note of caution, however: NULL is still supported by the compiler and has not yet been replaced by nullptr. This is mainly to avoid breaking existing code due to the pervasive and often inappropriate use of NULL. But in the future, nullptr should be used everywhere NULL was used, and NULL should be treated as a feature meant to support backward compatibility.

    Finally, decltype allows the compiler to infer the type of an arbitrary expression, and makes perfect forwarding more generic. In past versions, for two arbitrary types T1 and T2, there was no way to deduce the type of an expression that used these two types. The decltype feature allows you to state, for example, that an expression with template arguments, such as sum<T1, T2>(), has the type T1 + T2.
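    As a concrete sketch (the function name sum is illustrative, not from the article), decltype combined with the trailing return type syntax lets the compiler compute a function's return type from its arguments:

```cpp
// The return type is whatever the expression a + b yields;
// for sum(1, 2.5) that is int + double, i.e. double.
template <typename T1, typename T2>
auto sum(T1 a, T2 b) -> decltype(a + b) {
    return a + b;
}
```

    The trailing return type is needed because a and b are not in scope before the parameter list has been seen.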

    Standard Library Improvements

    Substantial portions of the standard C++ library have been rewritten to take advantage of new C++0x language features and increase performance. In addition, many new algorithms have been introduced.

    The standard library takes full advantage of rvalue references to improve performance. Types such as vector and list now have move constructors and move assignment operators of their own. Vector reallocations take advantage of move semantics by picking up move constructors, so if your types have move constructors and move assignment operators, the library picks that up automatically.

    You can now create a shared pointer to an object at the same time you are constructing the object with the help of the new C++0x function template make_shared:

auto sp = make_shared<map<string, vector<int>>>(args);

    In Visual Studio 2008 you would have to write the following to get the same functionality:

shared_ptr<map<string, vector<int>>>
    sp(new map<string, vector<int>>(args));

    Using make_shared is more convenient (you have to type the type name fewer times), more robust (it avoids the classic unnamed shared_ptr leak because the pointer and the object are created simultaneously), and more efficient (it performs one dynamic memory allocation instead of two).

    The library now contains a new, safer smart pointer type, unique_ptr (which has been enabled by rvalue references). As a result, auto_ptr has been deprecated; unique_ptr avoids the pitfalls of auto_ptr by being movable, but not copyable. It allows you to implement strict ownership semantics without affecting safety. It also works well with Visual C++ 2010 containers that are aware of rvalue references.
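    The movable-but-not-copyable behavior can be sketched as follows (a minimal example assuming a C++0x-capable compiler; the factory name make_value is illustrative):

```cpp
#include <memory>
#include <utility>

// Returning a unique_ptr by value moves ownership out of the
// function; no copy is ever made, and no copy would compile.
std::unique_ptr<int> make_value(int v) {
    return std::unique_ptr<int>(new int(v));
}
```

    After `std::unique_ptr<int> q = std::move(p);` the source pointer p is empty; writing `q = p;` instead would be a compile-time error, which is exactly how unique_ptr enforces strict ownership where auto_ptr silently transferred it.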

    Containers now have new member functions, cbegin and cend, that provide a way to use a const_iterator for inspection regardless of the type of container:

vector<int> v;
for (auto i = v.cbegin(); i != v.cend(); ++i) {
     // i is vector<int>::const_iterator
}

    Visual Studio 2010 adds most of the algorithms proposed in various C++0x papers to the standard library. A subset of the Dinkumware conversions library is now available in the standard library, so now you can do conversions like UTF-8 to UTF-16 with ease. The standard library enables exception propagation via exception_ptr. Many updates have been made in the <algorithm> header. There is a singly linked list named forward_list in this release. The library has a <system_error> header to improve diagnostics. Additionally, many of the TR1 features that existed in the std::tr1 namespace in the previous release (like shared_ptr and regex) are now part of the standard library under the std namespace.
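    Exception propagation via exception_ptr can be sketched like this (a minimal single-threaded example; in practice the captured pointer is typically handed from a worker thread to the thread that wants to observe the failure):

```cpp
#include <exception>
#include <stdexcept>
#include <string>

std::string capture_and_rethrow() {
    std::exception_ptr eptr;
    try {
        throw std::runtime_error("worker failed");
    } catch (...) {
        // Capture the in-flight exception; eptr can safely
        // outlive the catch block and cross thread boundaries.
        eptr = std::current_exception();
    }
    try {
        std::rethrow_exception(eptr); // rethrow later, e.g. on the main thread
    } catch (const std::runtime_error& e) {
        return e.what();
    }
    return "";
}
```

    The key point is that the exception object survives outside any catch block, which is what makes cross-thread propagation possible.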

    Concurrent Programming Improvements

    Visual Studio 2010 introduces the Parallel Computing Platform, which helps you to write high-performance parallel code quickly while avoiding subtle concurrency bugs. This lets you dodge some of the classic problems relating to concurrency.

    The Parallel Computing Platform has four major parts: the Concurrency Runtime (ConcRT), the Parallel Patterns Library (PPL), the Asynchronous Agents Library, and parallel debugging and profiling.

    ConcRT is the lowest software layer that talks to the OS and arbitrates among multiple concurrent components competing for resources. Because it runs in user mode, it can reclaim resources when its cooperative blocking mechanisms are used. ConcRT is aware of locality and avoids switching tasks between different processors. It also employs Windows 7 User Mode Scheduling (UMS) so it can boost performance even when the cooperative blocking mechanism is not used.

    PPL supplies the patterns for writing parallel code. If a computation can be decomposed into sub-computations that can be represented by functions or function objects, each of these sub-computations can be represented by a task. The task concept is much closer to the problem domain, unlike threads that take you away from the problem domain by making you think about the hardware, OS, critical sections and so on. A task can execute concurrently with the other tasks independent of what the other tasks are doing. For example, sorting two different halves of an array can be done by two different tasks concurrently.

    PPL includes parallel classes (task_handle, task_group and structured_task_group), parallel algorithms (parallel_invoke, parallel_for and parallel_for_each), parallel containers (combinable, concurrent_queue and concurrent_vector), and ConcRT-aware synchronization primitives (critical_section, event and reader_writer_lock), all of which treat tasks as a first-class concept. All components of PPL live in the concurrency namespace.

    Task groups allow you to execute a set of tasks and wait for them all to finish. So in the sort example, the tasks handling the two halves of the array can make one task group. You are guaranteed that these two tasks are completed at the end of the wait member function call, as shown in the code example of a recursive quicksort written using parallel tasks and lambdas:

void quicksort(vector<int>::iterator first,
    vector<int>::iterator last) {
     if (last - first < 2) { return; }
     int pivot = *first;
     auto mid1 = partition(first, last,
         [=](int elem) { return elem < pivot; });
     auto mid2 = partition(mid1, last,
         [=](int elem) { return elem == pivot; });
     task_group g;
     g.run([=] { quicksort(first, mid1); });
     g.run([=] { quicksort(mid2, last); });
     g.wait();
}



    This can be further improved by using a structured task group enabled by the parallel_invoke algorithm. It takes from two to 10 function objects and executes all of them in parallel using as many cores as ConcRT provides and waits for them to finish:


parallel_invoke(
     [=] { quicksort(first, mid1); },
     [=] { quicksort(mid2, last); } );

    There could be multiple subtasks created by each of these tasks. The mapping between tasks and execution threads (and ensuring that all the cores are optimally utilized) is managed by ConcRT. So decomposing your computation into as many tasks as possible will help take advantage of all the available cores. Another useful parallel algorithm is parallel_for, which can be used to iterate over indices in a concurrent fashion:

parallel_for(first, last, functor);

    parallel_for(first, last, step, functor);

    This concurrently calls the function object with each index, starting with first and ending with last.

    The Asynchronous Agents Library gives you a dataflow-based programming model where computations are dependent on the required data becoming available. The library is based on the concepts of agents, message blocks and message-passing functions. An agent is a component of an application that does certain computations and communicates asynchronously with other agents to solve a bigger computation problem. This communication between agents is achieved via message-passing functions and message blocks.

    Agents have an observable lifecycle that goes through various stages. They are not meant to be used for the fine-grained parallelism achieved by using PPL tasks. Agents are built on the scheduling and resource management components of ConcRT and help you avoid the issues that arise from the use of shared memory in concurrent applications.

    You do not need to link against or redistribute any additional components to take advantage of these patterns. ConcRT, PPL and the Asynchronous Agents Library have been implemented within msvcr100.dll, msvcp100.dll and libcmt.lib/libcpmt.lib alongside the standard library. PPL and the Asynchronous Agents Library are mostly header-only implementations.

    The Visual Studio debugger is now aware of ConcRT and makes it easy for you to debug concurrency issues, unlike Visual Studio 2008, which had no awareness of higher-level parallel concepts. Visual Studio 2010 has a concurrency profiler that allows you to visualize the behavior of parallel applications. The debugger has new windows that visualize the state of all tasks in an application and their call stacks. Figure 1 shows the Parallel Tasks and Parallel Stacks windows.

Figure 1 Parallel Stacks and Parallel Tasks Debug Windows

    IntelliSense and Design-Time Productivity

    A brand-new IntelliSense and browsing infrastructure is included in Visual Studio 2010. In addition to helping with scale and responsiveness on projects with large code bases, the infrastructure improvements have enabled some fresh design-time productivity features.

    IntelliSense features like live error reporting and Quick Info tooltips are based on a new compiler front end, which parses the full translation unit to provide rich and accurate information about code semantics, even while the code files are being modified.

    All of the code-browsing features, like class view and class hierarchy, now use the source code information stored in a SQL database that enables indexing and has a fixed memory footprint. Unlike previous releases, the Visual Studio 2010 IDE is always responsive and you no longer have to wait while compilation units get reparsed in response to changes in a header file.

    IntelliSense live error reporting (the familiar red squiggles) displays compiler-quality syntax and semantic errors during browsing and editing of code. Hovering the mouse over an error gives you the error message (see Figure 2). The Error List window also shows the error from the file currently being viewed, as well as the IntelliSense errors from elsewhere in the compilation unit. All of this information is available to you without doing a build.

Figure 2 Live Error Reporting Showing IntelliSense Errors

    In addition, a list of relevant include files is displayed in a dropdown while typing #include, and the list refines as you type.

    The new Navigate To (Edit | Navigate To or Ctrl+comma) feature will help you be more productive with file or symbol search. This feature gives you real-time search results, based on substrings as you type, matching your input strings for symbols and files across any project (see Figure 3). This feature also works for C# and Visual Basic files and is extensible.

Figure 3 Using the Navigate To Feature

    Call Hierarchy (invoked using Ctrl+K, Ctrl+T or from the right-click menu) lets you navigate to all functions called from a particular function, and to all functions that make calls to a particular function. This is an improved version of the Call Browser feature that existed in previous versions of Visual Studio. The Call Hierarchy window is much better organized and provides both calls-from and calls-to trees for any function that appears in the same window.

    Note that while all the code-browsing features are available for both pure C++ and C++/CLI, IntelliSense-related features like live error reporting and Quick Info will not be available for C++/CLI in the final release of Visual Studio 2010.

    Other staple editor features are improved in this release, too. For example, the popular Find All References feature that is used to search for references to code elements (classes, class members, functions and so on) inside the entire solution is now more flexible. Search results can be further refined using a Resolve Results option from the right-click context menu.

    Inactive code now retains semantic information by maintaining colorization (instead of becoming gray). Figure 4 shows how the inactive code is dimmed but still shows different colors to convey the semantic information.
