At the BUILD conference in September, we unveiled developer previews of Visual Studio 11, Expression Blend 5, Team Foundation Server 11, and the .NET Framework 4.5.
We’ve been working on these technologies in earnest since we shipped the previous wave of these development tools last year, and we’ve had the opportunity to make significant enhancements, across the IDE, frameworks, libraries, languages, and services. Over my next several blog posts, I’ll be sharing with you some of my thoughts on big steps forward we’ve made in various areas of this development tooling, innovations that help to make developers, testers, and anyone involved in the software application lifecycle far more productive in their daily lives.
For this initial post, I’ll be focusing on programming languages and on the enhancements we’ve made both to their expressivity and to their supporting tooling. Languages exist at the center of everything developers do. Developers often pride themselves on the quality, style, maintainability, and efficiency of the code they write, and they achieve their goals using one or more of many supported languages. As we build our development tools, we keep this mindset front and center, investing heavily in advancing the state of the art for language tooling and expressivity, and enabling developers to meet their needs with the best code possible.
JavaScript
The new DOM Explorer window enables digging through the HTML Document Object Model (DOM) to explore and manipulate elements, styles, and more.
Because evaluated expressions apply in the context of the current application, you can even define new functions and invoke them directly from the console window.
C#, Visual Basic, and Asynchrony
Almost a year ago, I blogged about some work done in Developer Division to explore integration of asynchronous programming directly with C# and Visual Basic. I’m excited to say that in the Visual Studio 11 Developer Preview, this is now a part of C# 5 and Visual Basic 11.
It’s long been known that asynchronous programming is how one achieves responsive user interfaces and scalable applications, but such techniques have also been difficult to implement. Asynchronous code remains relatively simple when each operation involves just one asynchronous call, but as our world evolves toward one in which everything is exposed asynchronously, such single-call operations are becoming rare. Instead, developers are forced to write callback after callback of convoluted code in order to express even the most trivial of patterns, like one call being made sequentially after another. For years, modern languages have provided control flow constructs that were largely unusable when writing asynchronous code. Now, with the new async language features of C# and Visual Basic, developers can write asynchronous code as if it were synchronous, all the while using the myriad control flow constructs these languages provide, including loops, conditionals, short-circuiting, and more.
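To make that concrete, here’s a minimal sketch (the method name and usage are hypothetical, not from the product documentation) of what this looks like in C# 5: the method reads top to bottom like synchronous code, including an ordinary foreach loop, while each await point is compiled into a continuation behind the scenes:

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class PageSizer
{
    // Reads sequentially, executes asynchronously: each await suspends the
    // method and resumes it when the download completes, without blocking.
    public async Task<int> SumPageSizesAsync(IEnumerable<string> urls)
    {
        var client = new HttpClient();
        int total = 0;
        foreach (var url in urls) // an ordinary loop around asynchronous calls
        {
            byte[] data = await client.GetByteArrayAsync(url);
            total += data.Length;
        }
        return total;
    }
}
```

Writing the equivalent logic with explicit callbacks would require manually chaining a new continuation for each iteration of the loop.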
With these features, we’ve been able to bring Visual Studio’s debugger capabilities along for the ride. For example, when in the debugger we “step over” (F10) a statement containing an await:
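Consider stepping through a hypothetical method like this sketch:

```csharp
using System.IO;
using System.Threading.Tasks;

public class Loader
{
    public async Task<string> ReadFirstLineAsync(string path)
    {
        using (var reader = new StreamReader(path))
        {
            string line = await reader.ReadLineAsync(); // F10 ("step over") here...
            return line;                                // ...stops here next
        }
    }
}
```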
it behaves just as you’d expect it to, moving to the subsequent line in the logical control flow, even though that code is likely part of a continuation callback scheduled asynchronously under the covers.
For more information on asynchronous programming support in C# and Visual Basic, I recommend the following talks from BUILD:
- Anders Hejlsberg’s “Future directions for C# and Visual Basic”
- Mads Torgersen and Alex Turner’s “Async made simple in Windows 8, with C# and Visual Basic”
- Stephen Toub’s “The zen of async: Best practices for best performance” and “Building parallelized apps with .NET and Visual Studio”
C++ and Parallelism
Our teams have spent considerable energy in this release improving C++ support in Visual Studio. This includes not only full support for the C++11 standard libraries, improved IDE support (such as reference highlighting and semantic colorization), and support for building fully-native Windows Metro style applications, but also rich new language and library support for parallelism.
I previously blogged about our efforts around C++ AMP. This is an innovative technology new to Visual C++ in Visual Studio 11 that enables C++ developers to easily write code that leverages massively parallel accelerators (mainly GPUs) as part of their C++ projects. In regular C++ code, a developer can use the parallel_for_each method to invoke a lambda that’s been annotated with "restrict(direct3d)", which causes the compiler to generate code for that lambda that targets a DirectX accelerator. In the following example, parallel_for_each is used to iterate through all indices of the output matrix in order to compute the product of the two input matrices:
void MatrixMultiply(std::vector<float>& vC,
    const std::vector<float>& vA,
    const std::vector<float>& vB, int M, int N, int W)
{
    array_view<const float,2> a(M, W, vA);
    array_view<const float,2> b(W, N, vB);
    array_view<writeonly<float>,2> c(M, N, vC);
    parallel_for_each(c.grid, [=](index<2> idx) restrict(direct3d)
    {
        int row = idx[0]; int col = idx[1];
        float sum = 0.0f;
        for(int i = 0; i < W; i++)
            sum += a(row, i) * b(i, col);
        c[idx] = sum;
    });
}
Not only are the C++ AMP sections of code directly integrated into the source files and expressed using standard C++ syntax; Visual Studio also provides complete debugging support for these kernels, enabling not only basics like breakpoints and stepping, but also full support across debugger windows like Watch, Locals, and Parallel Stacks, along with the new GPU Threads and Parallel Watch windows.
C++ AMP isn’t the only parallelism-focused effort for native code in Visual Studio 11. The C++ compiler now also automatically vectorizes loops when it determines doing so is valuable. For example, for the following code, the compiler will attempt to utilize SSE instructions on the CPU to run multiple iterations of the for loop as part of a single operation, significantly speeding up the computation:
float a[1000], b[1000], c[1000];
for(int i=0; i<1000; i++)
c[i] = a[i] + b[i];
The C++ compiler now also features some auto-parallelization in addition to auto-vectorization. And the parallelism libraries included with Visual C++ have been significantly expanded, including additional concurrent data structures, parallel algorithms, and an updated tasking model similar to that used by the Task Parallel Library (TPL) in the .NET Framework.
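For comparison, the .NET tasking model that the updated C++ model resembles looks like this (a minimal C# sketch, with illustrative names): work is wrapped in a task object, and further work is chained onto it as a continuation:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class TaskModelDemo
{
    public static void Main()
    {
        // Start a unit of work on the thread pool...
        Task<int> compute = Task.Run(() => Enumerable.Range(1, 100).Sum());
        // ...and chain a continuation that runs when it completes.
        Task print = compute.ContinueWith(t => Console.WriteLine(t.Result));
        print.Wait(); // prints 5050
    }
}
```

The expanded Visual C++ libraries expose an analogous pattern of task objects with chained continuations for native code.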
For more information on GPU computing support in Visual C++, I recommend Daniel Moth’s BUILD talk: “Taming GPU compute with C++ AMP”. And for a look at some of the other innovation in Visual C++, I recommend Herb Sutter’s BUILD talks “Using the Windows Runtime from C++” and “Writing modern C++ code: how C++ has evolved over the years”.
F# and Data Access
Not all languages need to support every domain and every use case equally: if they did so, there would be little need for more than one language. Often languages end up catering to specific domains and specific styles of development, and I’m particularly excited about our investments in a language that’s a great example of this principle: F#. With F# 2.0 in Visual Studio 2010, we provided a language focused on accelerating solutions for computationally-complex problems. With F# 3.0 in Visual Studio 11, we continue the trend of focusing on a particular problem domain by directly integrating support for solving data-complex problems.
F# is a statically-typed language, just as C# and Visual Basic are, and this static typing provides many advantages. It supports an improved development experience by enabling features such as accurate IntelliSense. It can yield better performance thanks to optimizations available at compile time. It can also reduce development and testing costs by eliminating some common categories of bugs.
However, there are also times when static typing requires more code than dynamic typing would. As a prime example, the world is extremely information-rich, something we experience more and more in our daily software lives. All of this data typically enters our programs in a non-strongly-typed way, and it must first be parsed and massaged into strongly-typed objects before it’s exposed to the rest of the program. Rather than having a developer code such import routines manually, this problem has historically been addressed by design-time code generation (e.g. a design-time tool to import a Web service description and generate the necessary proxy code). Unfortunately, there are problems with this approach. It interacts poorly with the evolving nature of data sources, such as those on the web. It can lead to very bloated client proxies (types are generated to represent the entire schema and metadata, regardless of whether or not the client program uses them). And it does not integrate smoothly with scripting environments, such as the F# Interactive window in Visual Studio.
With the new Type Provider mechanism in F# 3.0, such data access becomes trivial for F# programs and components. Also, because F# targets the .NET Framework, applications written in C# or Visual Basic (or any other managed language) can utilize this new functionality via an F# component. Using an extensibility mechanism of the F# compiler, type providers in effect provide data access libraries on demand, yielding a computed space of types and methods at design-time and compile-time in a manner that supports IntelliSense and that is extensible. F# 3.0’s libraries include type providers for OData, WSDL, and SQL (via both LINQ to SQL and LINQ to Entities), but custom type providers may also be written to target arbitrary data sources, such as SharePoint lists and WMI providers.
As an example, consider a desire to search the Netflix catalogue for a handful of people that share my “S.” moniker. Netflix exposes an OData feed, which can then be used with the OData type provider:
type netflixCatalog = ODataService<"http://odata.netflix.com/Catalog/">
let netflix = netflixCatalog.GetDataContext()

query {
    for person in netflix.People do
    where (person.Name.StartsWith "S. ")
    select person
} |> Seq.iter (fun result -> printfn "%s" result.Name)
Not only are we able to concisely import the relevant metadata and express the query, but we also get IntelliSense throughout the experience.
And such code can be written not only in an F# application, but also directly in the F# Interactive window.
For more information on type providers, I recommend Don Syme’s BUILD talk: “F# 3.0: data, services, web, cloud, at your fingertips”.
As is evident from this glimpse into some of the new language features and associated tooling support in Visual Studio 11, a lot of work has gone into pushing the state of the art for what’s possible in modern development. In future posts, we’ll explore advances in Visual Studio beyond languages, such as in the Visual Studio environment itself.