Porting existing code to .NET Core used to be quite hard because the available API set was very small. In .NET Core 2.0, we already made this much easier, thanks to .NET Standard 2.0. Today, we’re happy to announce that we made it even easier with the Windows Compatibility Pack, which provides access to an additional 20,000 APIs via a single NuGet package.
Who is this package for?
This package is meant for developers that need to port existing .NET Framework code to .NET Core. But before you start porting, you should understand what you want to accomplish with the migration. Just porting to .NET Core because it’s a new .NET implementation isn’t a good enough reason (unless you’re a True Fan).
.NET Core is optimized for building highly scalable web applications, running on Windows, macOS or Linux. If you’re building Windows desktop applications, then the .NET Framework is the best choice for you. Take a look at our documentation for more details on how to choose between .NET Core and .NET Framework.
Demo
For a demo, take a look at this video:
Using the Windows Compatibility Pack
We highly recommend that you plan your migrations as a series of steps instead of assuming you can port the existing code base all at once. If you’re planning to migrate an ASP.NET MVC application running on a local Windows server to an ASP.NET Core application running on Linux in Azure, we’d recommend you perform these steps:
- Migrate to ASP.NET Core (while still targeting the .NET Framework)
- Migrate to .NET Core (while staying on Windows)
- Migrate to Linux
- Migrate to Azure
The order of steps might vary, depending on your business goals and what value you need to accomplish first. For example, you might need to deploy to Azure before you perform the other migration steps. The primary point is that you perform one step at a time to ensure your application stays operational along the way. This reduces the complexity and churn you have to reason about at once. It also allows you to learn more about your code base and adjust your plans as you discover issues.
The Porting to .NET Core from .NET Framework documentation provides more details on the recommended process and which tools you can use.
Before bringing existing .NET Framework code to a .NET Core project, we recommend you first add the Windows Compatibility Pack by installing the NuGet package Microsoft.Windows.Compatibility. This maximizes the number of APIs you have at your disposal.
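For example, you can add it to an existing project with the .NET CLI (while the pack is in preview you may need to specify the prerelease version explicitly via `--version`):

```
dotnet add package Microsoft.Windows.Compatibility
```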
The Windows Compatibility Pack is currently in preview because it’s still a work in progress. The following table describes the APIs that are already part of the Windows Compatibility Pack or are coming in a subsequent update:
| Component | Status | Windows-Only | Component | Status | Windows-Only |
|---|---|---|---|---|---|
| Microsoft.Win32.Registry | Available | Yes | System.Management | Coming | Yes |
| Microsoft.Win32.Registry.AccessControl | Available | Yes | System.Runtime.Caching | Coming | |
| System.CodeDom | Available | | System.Security.AccessControl | Available | Yes |
| System.ComponentModel.Composition | Coming | | System.Security.Cryptography.Cng | Available | Yes |
| System.Configuration.ConfigurationManager | Available | | System.Security.Cryptography.Pkcs | Available | Yes |
| System.Data.DataSetExtensions | Coming | | System.Security.Cryptography.ProtectedData | Available | Yes |
| System.Data.Odbc | Coming | | System.Security.Cryptography.Xml | Available | Yes |
| System.Data.SqlClient | Available | | System.Security.Permissions | Available | |
| System.Diagnostics.EventLog | Coming | Yes | System.Security.Principal.Windows | Available | Yes |
| System.Diagnostics.PerformanceCounter | Coming | Yes | System.ServiceModel.Duplex | Available | |
| System.DirectoryServices | Coming | Yes | System.ServiceModel.Http | Available | |
| System.DirectoryServices.AccountManagement | Coming | Yes | System.ServiceModel.NetTcp | Available | |
| System.DirectoryServices.Protocols | Coming | | System.ServiceModel.Primitives | Available | |
| System.Drawing | Coming | | System.ServiceModel.Security | Available | |
| System.Drawing.Common | Available | | System.ServiceModel.Syndication | Coming | |
| System.IO.FileSystem.AccessControl | Available | Yes | System.ServiceProcess.ServiceBase | Coming | Yes |
| System.IO.Packaging | Available | | System.ServiceProcess.ServiceController | Available | Yes |
| System.IO.Pipes.AccessControl | Available | Yes | System.Text.Encoding.CodePages | Available | Yes |
| System.IO.Ports | Available | Yes | System.Threading.AccessControl | Available | Yes |
Handling Windows-only APIs
If you plan to run your .NET Core application on Windows only, then you don’t have to worry about whether an API is cross-platform or not. However, if you plan to migrate your application to Linux or macOS, you need to take the platform support into account.
As you can see in the previous table, about half of the components in the Windows Compatibility Pack are Windows-only; the other half works on any platform. Your code can always assume that all the APIs exist across all platforms, but the Windows-only ones throw PlatformNotSupportedException when called on other platforms. This allows you to call Windows-only APIs after doing a platform check at runtime, rather than having to use conditional compilation with #if. We recommend using RuntimeInformation.IsOSPlatform() for platform checks:
```csharp
private static string GetLoggingPath()
{
    // Verify the code is running on Windows.
    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
    {
        using (var key = Registry.CurrentUser.OpenSubKey(@"Software\Fabrikam\AssetManagement"))
        {
            if (key?.GetValue("LoggingDirectoryPath") is string configuredPath)
                return configuredPath;
        }
    }

    // This is either not running on Windows or no logging path was configured,
    // so just use the path for non-roaming user-specific data files.
    var appDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
    return Path.Combine(appDataPath, "Fabrikam", "AssetManagement", "Logging");
}
```
You might wonder how you’re supposed to know which APIs are Windows-only. The obvious answer would be documentation, but that’s not very convenient. This is one of the reasons we introduced the API Analyzer tool two weeks ago. It’s a Roslyn-based analyzer that flags usages of Windows-only APIs when you’re targeting .NET Core or .NET Standard. For the previous sample, it would flag the calls to the registry APIs.
You have three options to deal with Windows-only API usages:
- Remove. Sometimes you might get away with simply deleting the code as you don’t plan to migrate certain features to the .NET Core-based version of your application.
- Replace. Usually, you’ll want to preserve the general feature so you might have to replace the technology with one that is cross-platform. For example, instead of saving configuration state in the registry, you’d use text-based configuration files you can read from all platforms.
- Guard. In some cases, you may want to call the Windows-only API when you’re running on Windows and simply do nothing (or call a Linux-specific API) when you’re running on Linux.
In the previous example, the code is already written in such a way that it provides a default configuration when the setting isn’t found in the registry. So the easiest solution is to guard the call to registry APIs behind a platform check.
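To make the Replace option concrete, here is a minimal sketch, not part of the original sample, that reads the configured path from a plain text file instead of the registry; the logging-path.txt file name is a made-up convention, and the fallback matches the earlier example:

```csharp
private static string GetLoggingPathCrossPlatform()
{
    // Hypothetical replacement for the registry lookup: a plain text file
    // deployed next to the application holds the configured logging path.
    var settingsFile = Path.Combine(AppContext.BaseDirectory, "logging-path.txt");
    if (File.Exists(settingsFile))
    {
        var configuredPath = File.ReadAllText(settingsFile).Trim();
        if (configuredPath.Length > 0)
            return configuredPath;
    }

    // Fall back to the same default as before.
    var appDataPath = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
    return Path.Combine(appDataPath, "Fabrikam", "AssetManagement", "Logging");
}
```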
The Windows Compatibility Pack is designed as a metapackage, meaning it doesn’t directly contain any libraries but only references other packages. This allows you to quickly bring in all the technologies without having to hunt down various packages. But as your port progresses, you may find it useful to reference individual packages instead. That lets you remove dependencies you no longer need and ensures that newly written code in that project doesn’t take a dependency on them again.
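For example, if System.Drawing turns out to be the only compatibility API a project needs, you could swap the metapackage for the individual package (illustrative commands):

```
dotnet remove package Microsoft.Windows.Compatibility
dotnet add package System.Drawing.Common
```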
Summary
When you port existing code from the .NET Framework to .NET Core, install the new Windows Compatibility Pack. It provides access to an additional 20,000 APIs, compared to what is available in .NET Core. This includes drawing, EventLog, WMI, Performance Counters, and Windows Services.
If you plan to make your code cross-platform, use the new API Analyzer to ensure you don’t accidentally depend on Windows-only APIs.
But remember that the .NET Framework is still the best choice for building desktop applications as well as Web Forms-based web applications. If you’re happy on the .NET Framework, there is also no reason to port to .NET Core.
Let us know what you think!


Will this be available on GitHub?
Most, if not all, is already available on http://github.com/dotnet/corefx/
As Frederik said, the package is produced out of CoreFX which is where all the code lives.
https://github.com/dotnet/corefx/tree/master/pkg/Microsoft.Windows.Compatibility
Do I always have to install the whole compatibility package even if I need just something like System.Drawing?
If you need System.Drawing you can just add a reference to the System.Drawing.Common NuGet package; you don’t need the entire compatibility pack.
Correct. The paragraph right before “Summary” provides more information on why that’s useful.
I’d prefer having a function inside each component to check whether it works, instead of checking for a platform, since several of those marked “Windows-only” could in principle add Linux support later on (e.g. the crypto APIs).
Fair point.
Very fair point indeed. The reason we haven’t done this yet is because we need to support the existing .NET Framework 4.6.1 surface area. But yes, ideally one would write code like this:
```csharp
if (Registry.IsSupported)
{
    // Use Registry APIs
}
```
You guys are the experts, but your answer confuses me. You are already extending Core with this pack. Why would extending the feature set to include these checks be an issue?
Also, is this something you are planning for a future release?
That’s a great question. The reason is that it also extends .NET Standard, which unifies the APIs between .NET Framework and .NET Core. So for library writers (who would be primary consumers of runtime light-up) this wouldn’t be useful yet.
But you raise a good point; this doesn’t prevent us from doing this on .NET Core. I’ve filed an issue:
https://github.com/dotnet/corefx/issues/25530
+1
When are you going to add support for Web Forms?
Or System.Xaml. 😉 😉 😉
Nice to see that System.DirectoryServices has made the cut, when System.Xaml has over five times the upvotes. And this issue was created before GitHub had reactions.
https://github.com/dotnet/corefx/issues/5766
Nice to see you listening to your customers there. 😉 😉 😉
Anytime you mention Web Forms, it is just crickets from Microsoft. I think they just want it to die. Honestly, I don’t see why it is such a problem getting it to run on .NET Core. It’s not like it is IE and is integrated into the Windows kernel and hence inseparable from Windows.
LOL… what’s really amazing is that the incompetent dysfunction of MSFT’s client “strategy” is (finally) resulting in key departures: https://medium.com/@timsneath/new-beginnings-google-bf849766a497
I for one welcome our new Google overlords. 😉 😉 😉
System.Xaml is tagged for 2.1.0 with netfx-compat-pack. Is it not good enough?
Hey I haven’t met my quota for complaining about something MSFT lately and I had to catch up. I also might have gone a bit overboard. 😉
But you are correct, I am looking at the issue now and it has been tagged as you describe. I haven’t heard/seen anything official on that issue, like a MSFTie saying it was going to be included for sure. Last I saw was a “proposal” and we know how those usually turn out. I’ll feel better once it’s actually out but until then I clearly must SIT DOWN AND KEEP MY MOUTH SHUT! 😉
Turns out no. No, it’s not good enough:
https://github.com/dotnet/corefx/issues/5766#issuecomment-345791608
Why do you want to move to .NET Core?
A previous post shows that .NET Core’s APIs are faster than the .NET Framework’s, and that’s reason enough. ^_^
If performance of your web application is key, I’d suggest Web Forms isn’t the framework you should be using to begin with. You’d be getting a much bigger bang for the buck by using ASP.NET Core on .NET Framework than by using Web Forms on .NET Core.
Is that really a useful answer?
Obviously WebForms users are hoping to change targets, re-compile, run tests, fix a few errors, and get a 15% improvement, rather than re-write the application from scratch — even if that gets them a 300% improvement …
Performance isn’t the issue. The issue is the ability to run on platforms other than Windows, while still having the features of Web Forms. Personally, I am not a fan at all of the new way of writing web apps. I.e. I don’t want to be subjected to JavaScript hell. I want to develop web apps in a straightforward and clean manner and do it on the server-side without having to muck around with JavaScript. I am able to do this with Web Forms and Telerik’s UI controls. If ASP.NET Core added robust controls that could be programmed on the server side, then, I wouldn’t have a problem switching. Until that happens, I don’t see a compelling reason to upgrade. The main reason is to fend off having people I report to who have a bad opinion of Microsoft and Windows pushing me into using Python or something else. Besides that, Microsoft appears to mostly be putting their energy into .NET Core. Once that is done, it would be nice if they could go back to innovating instead of just copying the Linux folks.
Isn’t the idea of open sourcing to allow the community to scratch their own itch? Of course ASP.NET WebForms is slower, but surely the community can improve it using new methodologies such as async loading of controls, event-based servers, WebSockets, and more modern stuff.
Had ASP.NET WebForms been open sourced a few years ago, it would be in better shape, but I’m talking about fully open sourcing it the Java way, not the simulation available at https://github.com/Microsoft/referencesource.
From my experience I can tell that the performance of ASP.NET WebForms could improve (2x or 3x) if we removed several dumb practices that have near-zero impact on compatibility.
Please add full support for System.Diagnostics.Tracing and configuration files. I use this a lot in my apps and it’s one of the last things that I need to get my code running on .NET Core.
The compat pack adds support for System.Configuration. System.Diagnostics.Tracing is already available. What are you missing?
The problem that I ran into was that System.Diagnostics.TraceSource doesn’t read the settings from App.config. It looked to me that the class is there now, but, lots of code was removed from it and it was basically unusable. You could compile against it, but, that was about it.
Yes, most of our components do not support being configured via a configuration file when run on .NET Core. Instead, the expectation is that you set this up in code. The reason is that System.Configuration ends up tying everything to everything, which makes self-contained deployments and AOT unviable.
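For example, a minimal sketch of setting up a TraceSource purely in code (the source name and log file name below are just placeholders) could look like this:

```csharp
using System.Diagnostics;

static TraceSource CreateTraceSource()
{
    // Instead of a <system.diagnostics> section in App.config,
    // wire up the switch and listener in code.
    var source = new TraceSource("MyApp")
    {
        Switch = new SourceSwitch("MyApp", "Information")
    };
    source.Listeners.Add(new TextWriterTraceListener("myapp-trace.log"));
    return source;
}
```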
What is AOT? Aeghwylc Othrum Trywe ??
AOT == ahead-of-time compilation.
Sorry, but, that makes no sense at all. There is a new API for working with JSON configuration files, etc. I don’t see how this is any different than the traditional App.config. I can see not inheriting from machine.config. But, other than that, having a rule against config files seems completely non-sensical if you ask me. As it stands right now, System.Diagnostics.TraceSource is completely unusable. How does including a config file make self contained deployments unviable?
> Sorry, but, that makes no sense at all. There is a new API for working with JSON configuration files, etc. I don’t see how this is any different than the traditional App.config.
The way the existing System.Configuration works is that the runtime will actively instantiate types that represent the different configuration elements. Those types are specific to the configured technology (e.g. networking, XML, etc.). This creates a dependency problem that is hard to fix in AOT applications.
ASP.NET Core’s configuration APIs work differently because they aren’t strongly typed.
I see that System.ComponentModel.Composition is “coming”… how different is it from “System.Composition” (v1.1.0, the one already on NuGet)?
It’s good to see MS investing in the .NET stack… keep .NET rocking…
System.ComponentModel.Composition is MEF v1 while System.Composition is MEF v2. So this means we’ll bring MEFv1 to .NET Core.
Mind blown. Finally! Without MEF, Core was out of the question; it still is without the original WCF and System.Xaml, but at least this is a sign of something useful. If I’m gonna have to mark nullables, it’s gonna be a never with C# 8.
While it’s tempting to quickly add more new features and make more .NET Framework APIs available, Microsoft should take more time to improve the APIs’ documentation (one-liners don’t tell how a method works…) and provide more samples right where developers would expect them. .NET Core and ASP.NET Core are great, but they lack a good documentation story, if there is one at all. While the .NET Framework had a good documentation story (tons of samples, best practices, blog posts, you name it) in the early days, more and more APIs were left undocumented over time, leaving developers behind in trial-and-error nightmares. There’s no doubt that .NET Core and ASP.NET Core have the potential to make developers’ day, but without improving the documentation story, they have no life.
And yes, Visual Studio needs to be improved as well. Currently, it’s a huge mess, and quite a number of features (code editor, build system, etc.) that work well under .NET Framework are broken under .NET Core/ASP.NET Core. The list of issues is simply too long to post here, and things will certainly improve over time, but only if Microsoft stops adding new features until the mess is cleaned up. If Microsoft continues to prioritize fancy new features nobody needs, developers have no other choice than to leave the Microsoft bandwagon.
The new approach is to build things in the open, which means not everything will work perfectly at the outset. The advantages far outweigh the disadvantages. If there is any area you think is lacking, feel free to contribute and submit a PR.
Thanks for the feedback! We completely agree that the docs are a key part of the developer experience for using .NET Core and ASP.NET Core and we continue to work on improving them. The docs for .NET Core and ASP.NET Core can be found on docs.microsoft.com at https://docs.microsoft.com/dotnet/core and https://docs.microsoft.com/aspnet/core respectively. The API reference docs for all of .NET can be found at https://docs.microsoft.com/en-us/dotnet/api. The docs are fully open source, so if you find any gaps or issues with them please do let us know by filing issues in the corresponding GitHub repos (https://github.com/dotnet/docs or https://github.com/aspnet/docs) or you can contribute directly by submitting a PR. Often submitting a fix is as easy as clicking the Edit link right on the doc page.
Documentation is really important.
How do you know which API fits the requirement? The documentation.
How do you use the API? The documentation.
How does a developer become a .NET developer? The documentation.
How can a developer be good at .NET without documentation, samples, and blog posts?
The source code is everything.
How will you know what an API does? Read the source.
How do you use the API? Read the source.
….
What I mean is, obviously docs are great (and they dramatically increase the speed of learning), but docs are not the end-all be-all any longer. When you have the source, you can not only figure it out, you can fix it when it’s broken, and you can write your own docs if you have to.
Immo, thanks, it will help a lot in some migration scenarios.
Will the Performance Counter library be a read-only API, or the full Performance Counter API?
You’ll be able to write to the perf counters but you need to create them outside of .NET Core (usually via the setup).
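If it helps, here’s a rough sketch of what writing to an existing counter might look like once System.Diagnostics.PerformanceCounter ships; the category and counter names are made-up examples, and the category is assumed to have been created by the installer:

```csharp
using System.Diagnostics;

// Writing to an existing counter; creating categories and counters still
// has to happen outside the app, e.g. during setup.
using (var counter = new PerformanceCounter(
    "Fabrikam.AssetManagement",   // category created by the installer
    "Requests Processed",         // counter created by the installer
    readOnly: false))
{
    counter.Increment();
}
```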
Could there be a deterministic way to express platform compatibility, for example in package/assembly metadata? Currently the API Analyzer seems to hard-code the API list.
Yes, that’s what we want to achieve. We’re currently hard coding the APIs in the analyzer because we only handle the platform and the platform likely needs hard coding. However, for NuGet packages we’re thinking about using metadata in the assemblies/package. https://github.com/dotnet/platform-compat/issues/9
Does this package allow running EF6 codebases on Mac/Linux?
(ie: ASP.NET Core app currently running on .NET Framework – and stuck that way due to the dependency on EF6)
NOTE: EF7 migration is not feasible for the foreseeable future due to multiple critical blocking issues: ie: lack of TPT support, spatial data, complex query limitations, lifecycle hooks etc.
Marcel, TL;DR: This package is not enough. We believe making EF6 work on .NET Core would require significant changes. We have an issue tracking this idea at https://github.com/aspnet/EntityFramework6/issues/271, but we are currently only monitoring feedback on it and our main focus is on removing adoption blockers for EF Core instead. Any feedback on what specific aspects prevent you from using EF Core might help us advance in the right direction.
To provide more context on this: although the gap between the .NET Framework APIs required by EF6 and what is available in .NET Core 2 and .NET Standard 2 is now very small, it is still significant. Things like ConfigurationManager and CodeDom in the compatibility pack may help reduce that gap, but only a little. Other areas where there are still some things missing are System.Data.Common.DbProviderFactories (there is now a community PR under review for adding this in a future version of .NET Core), various types in System.Runtime.Remoting and System.Reflection.Emit, and some AppDomain related APIs.
Each missing API may require a different approach: In some cases it should be possible to find alternatives, in some other cases we may need to remove functionality, or design new experiences and tooling to satisfy the same requirements.
As an example, in the case of ConfigurationManager, even if the API is now available, the right thing would probably be not to depend on it when running on other platforms, and instead make sure that everything can work with code-based configuration.
Thanks for that detailed write-up Diego.
Perhaps it could be a goal of a subsequent EF6.x release to be able to target a smaller but key feature-set (let’s say code-only config etc.) so that it can run on .NET Core.
The main blockers preventing us from trying to adopt EF Core again (we’ve got burned quite significantly already) are:
– TPT support (https://github.com/aspnet/EntityFrameworkCore/issues/1189)
– Spatial data (https://github.com/aspnet/EntityFrameworkCore/issues/1100)
– Advanced query support (ie: don’t pull anything into memory – how is this a feasible option for ANY web server??)
– And lastly but possibly most importantly, SOME sort of migrations story
The current answer is just such a non-starter: “There isn’t really a feasible way to port existing EF6 migrations to EF Core.” (from docs: https://docs.microsoft.com/en-us/ef/efcore-and-ef6/porting/port-code)
Let alone that our migrations now start on EF6, then we have 10-15 that are on EF Core 1.0 (back when we adopted it and realized it was a show-stopper), and then they go back to EF6 from then on. The ONLY way we can bring a new dev onto our codebase is to literally copy a DB from another dev (as the ‘current golden snapshot’). We have no way of running a fresh migration and having it generate a DB from scratch – the whole point of migrations in the first place. The only thing I can think of is for us to take an entire engineering sprint at some point, go back to each snapshot of the codebase from when we built it out using EF Core, and re-create each of those in EF6 – and somehow hope that it will arrive at the same database state that the EF Core migrations did, with possibly then needing to continue and re-generate all subsequent EF6 migrations if that’s not strictly the case (imagine the nightmare the poor soul who takes this on will go through). Even doing all this, that still puts us on an EF6-only codebase with no migration story to EF Core.
The whole strategy of “abandon EF6 and hope future codebases will start on EF Core” is a fallacy. You guys MUST make EF6 work on .NET Core and create a sane migration path to EF Core from it. Anything less is a failure of the entire Core story – which EF Core is squarely blocking on all fronts right now (both ASP.NET/cloud stories and cross-platform stories). Looking back, I wish we would have just stayed on non-.NET Core EVERYTHING – we would have been much better off and apparently would have continued to be for the foreseeable roadmap forward.
To translate the current .NET Core story in practical terms: if you are building production-level codebases, hold off on any .NET Core development until no sooner than 2019/2020. Then go back and re-assess. If the state of Swift/Kotlin servers hasn’t overtaken .NET by then, do a few engineering sprints you can afford to lose and assess migrating to .NET. Otherwise…. switch ecosystems to either Node/JS or Swift/Kotlin (depending on whether you are a more web-heavy or mobile-heavy organization).
I hope the brutal honesty is helpful; I wish it weren’t the case as much as anybody. But this is simply the raw, honest truth of where .NET stands right now.
Sadly, you’re the typical .NET Core victim; however, whoever manages your group and allowed that should be fired.
I’m the one. Thanks for the tip! =)
PS: It was a decision between holding it out with .NET VS going with NodeJS and abandoning .NET altogether. I just wish the .NET team would have been transparent about the state of things with EF Core. The only place it started being apparent was when we got deep into issue requests and realizing just how far from ready things really were. It was mis-advertised as ‘cloud ready’ but then pulling whole tables into memory to perform queries – as if there’s a single web server out there which that could really be feasible for. Not sure how that would have been identified upfront short of taking the dive with it. Point is .NET Core was, and still is, mis-advertised as ready. It isn’t. And it’s on the backs (and pockets) of precious customers that these experiments are being held.
When the team has answers to all the above EF Core might be worthwhile another investigation. @Diego I can’t stress enough the prioritization of backwards support for EF6 on .NET Core above all other ‘abandon-and-rebuild from scratch’ initiatives the team has going on right now. It will at least prevent disasters like this while the future keeps getting baked.
I’ve been wondering if it would be possible to add an EF 6 compatibility mode for the generated SQL to EF Core. Specifically, I’m thinking about when you do a query and Include() collection navigation properties. EF 6 uses a big JOIN to do it. EF Core uses separate queries which seem more efficient, but, in some cases actually turn out to perform worse. Or some queries that used JOINs previously, now do N+1 queries. It gets to the point where I start to question using an ORM if the behavior is constantly changing underneath. Honestly, for a lot of things Dapper is probably easiest. Personally, I am not very happy about EF Core in terms of backwards compatibility. I am stuck using multiple different ways of doing things depending on what the query requirements are. EF 6 is better at some things, EF Core is better at others, Dapper is better at others, and NHibernate is better at others.
It would be great if vendors didn’t have to upgrade their providers every time there is a new version of EF Core. It seems like the SPI must be changing too. And the vendors are bad enough releasing providers. MySQL is what I am thinking of. Oracle is doing a terrible job and don’t even have a 2.0 provider yet.
It seems to me that the Core products in general should probably have been called something else. I.e. they are completely separate products re-written from the ground up. ASP.NET Core is another example of this. Web Forms was completely dropped. And the new is nothing like the old.
I read somewhere that Windows Services are supported in the compat pack. How does that work? I can see how it works as a library supporting .NET Standard and being called, as well as being in a command-line app and being called every so often, but how does this work in a Windows Service? I’m trying to understand. If I’m wrong and it is not supported, that’s cool too. Thanks.
Not sure I understand. Are you asking how you can register your .NET Core console app as a Windows Service?
Sorry, missed your message. 🙂
So, in .NET, there is a mechanism to create windows services. Is there an example anywhere regarding the steps associated with this? Thanks.
Let me rephrase.
In the classic .NET Framework, I can create a Windows Service and install a Windows Service. Is there an example anywhere regarding the steps associated with creating a Windows Service in .NET Core? Or am I misunderstanding what the “Windows Services” support means? Thanks.
It means we provide the ServiceBase class that Windows Services derive from. You still need to register the app as a Windows Service; I don’t have an example handy, but I think it requires setting some reg keys. Most installer tools (IS, Wise, WiX) offer built-in support for doing that.
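To illustrate, here’s a minimal sketch of what such a service might look like (the service name is a made-up example; registering it with Windows via an installer or sc.exe is a separate step):

```csharp
using System.ServiceProcess;

internal sealed class AssetManagementService : ServiceBase
{
    public AssetManagementService()
    {
        ServiceName = "FabrikamAssetManagement"; // made-up example name
    }

    protected override void OnStart(string[] args)
    {
        // Start background work here.
    }

    protected override void OnStop()
    {
        // Stop background work and clean up here.
    }

    private static void Main()
    {
        // Hands control to the Windows Service Control Manager.
        ServiceBase.Run(new AssetManagementService());
    }
}
```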
11 days in and you’ve got 375 NuGet downloads; are you getting the message? ASP.NET in general is in meltdown, just fell below 14% market share and is losing about 1%/month, and Core is going nowhere (< 0.1%). Reading some of the comments here tells part of the story. The fact that it’s been three years with nothing to show for the effort is mind-boggling. The common thread is the new breed of softy, absolutely zero humility: we’re excited, it’s the greatest, look how easy, oh so easy, just watch.
Hello Word!
wala !!
thanks for the laughs!
“The rumors of my meltdown are greatly exaggerated.” — ASP.NET Core, having millions of downloads
Seriously though, new tech, even if super successful, always follows a hockey-stick growth pattern. Especially with porting helpers as they require more commitment from the consumer’s side. I’m neither concerned nor surprised by these numbers.
It’s clear that you’re unhappy about the status quo. If you don’t mind me asking, what would you like to see us doing?
First of all, this is not a game to me; I have 12 years invested in .NET, and the ham-handed way this effort was handled from the start has been disconcerting: from Hanselman’s “ASP.NET 5 is DEAD” post, to all the alpha-quality RC releases, to all the messaging that this was going to be the only way forward for .NET, and other misinformation leading to victims like Marcel above and many, many others. Meanwhile the vast majority, like myself, look on in wonder and dismay at the damage being done to the ecosystem. I don’t know your background, but I suspect it’s Xamarin, as you seem to have started the vast number of posts on this blog for the last few years promoting UWP and .NET Core/Standard, so I will allow for the fact that you might not be aware of the shabby behavior of Dev Div and the Windows division from 2010 onward. Maybe you’re not aware of how many were burned by WP and Metro/Modern Store and all the various debacles that were sure to be the future, so your being neither concerned nor surprised is par for the course; it’s always the same confident swagger with no real engineering behind it.
“Seriously though, new tech”
This is not new tech; it was supposed to be an open sourcing of .NET. I’ve seen these scenarios play out again and again. I saw where you posted about MEF v1, the first possibly, and the beginning of WCF; I don’t see contracts and other pieces, so probably the terrible OData garbage anyway. But what the hell, no harm checking it out, even if I might have to suffer the bugfest that is VS2017. Then I saw the number of downloads on NuGet: déjà vu.
I didn’t bother trying to get GitHub numbers, but the NuGet and W3Techs numbers tell the story. I don’t know where you get millions; maybe NuGet and GitHub combined across every version. You guys may be hoping for the proverbial hockey stick; not me.
Again, I’ve been watching you guys closely for 3 years, and at this point I would be galactically shocked if it follows that pattern. On the bright side, the 4.7.1 updates have been great, with WPF bugs fixed that I thought never would be, so it’s a two-edged sword: very happy with the full .NET (it’s the world standard and a joy to use) and quite unhappy that after three years there is no WCF/MEF/System.Xaml and many other critical pieces that you ALREADY HAVE THE SOURCE TO!!! I saw the “System.Xaml is tagged for 2.1.0 with netfx-compat-pack” comment above; Mike-EEE, as usual, has to supply the truth, which is now a GD maybe. The unhappiness is quickly becoming complacence; I’m starting to think about going custom Java/Swift 4 on my own.
I hope you guys do succeed, but stop the legacy BS.
That is great stuff. It makes my life much easier. Thanks
Glad you like it!
Are Web Forms still Excluded?
See this comment.
Considering all the workloads that .Net Core is handling, it’s disconcerting to see it dismissively summarized as:
> .NET Core is optimized for building highly scalable web applications
Technically, by itself, .NET Core includes a single application model — console apps.
Web Apps are an add-on workload from ASP.NET Core … and are documented that way, repeatedly, e.g. on the landing page at https://docs.microsoft.com/en-us/dotnet/core/
Not to mention other workloads like UWP or IoT or cross-platform PowerShell Core …
Totally fair. I’m (obviously) a huge fan of .NET Core. My goal was to motivate why .NET Core is relevant, assuming you’re already targeting .NET Framework. And you can build both ASP.NET and console apps with either platform. Most customers’ flagship products aren’t console apps but desktop or web apps. In that light, the primary differentiation is how .NET Core compares to .NET Framework with respect to web apps. The good news is: if you disagree with that, you’re most likely correct, because it’s all about what value you gain from .NET Core.
Hi,
Where can I ask about .NET Core 2.0 + Managed C++?
I’ve already asked it here https://social.msdn.microsoft.com/Forums/windows/en-US/b7f6858b-cbd2-4339-a1d6-8055b2f6e975/managed-c-with-net-core-20?forum=clr but it seems to be the wrong place.
Let’s start here and see where it goes. I assume your question is whether you can target .NET Core from managed C++? The answer is currently no. Are you interested in this because you have existing managed C++ code or are you asking because you’re trying to solve a problem and believe managed C++ is the best way to solve it?
Hi,
Thank you for your reply. Currently we have a large project written with .NET + Managed C++ (managed wrappers for a legacy code base). Now we are in the process of researching the possibility of moving to .NET Core. For R&D purposes I’ve created a simple solution with two projects: #1 – Managed C++ targeting .NET 4.6, and #2 – a .NET Core 2.0 console application which has a reference to project #1. Sample code below:
ManagedCpp:
```cpp
using namespace System; // added so that String^ resolves

namespace ManagedCpp {
    public ref class ManagedStringManipulator
    {
    public:
        String^ Message;

        ManagedStringManipulator(String^ msg)
        {
            Message = System::String::Format(L"MANAGED MANIPULATION: {0}", msg);
        }
    };
}
```
Program.cs
```csharp
using System; // added so that Console resolves

namespace CoreConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(new ManagedCpp.ManagedStringManipulator("1111").Message);
        }
    }
}
```
At the point of instantiating ManagedCpp.ManagedStringManipulator I get two exceptions:
"System.EntryPointNotFoundException: 'A library name must be specified in a DllImport attribute applied to non-IJW methods.'"
".ModuleLoadException: 'The C++ module failed to load.'"
So my questions are:
1. Is it possible at all to use Managed C++ DLLs with .NET Core apps?
2. If ‘Yes’, what went wrong?
Hey Stanislav, would you mind contacting me at immol at microsoft dot com? I’d like to get more details from you regarding your scenario. Thanks!
Immo, I sent you an e-mail on the 8th of Dec. Did you receive it?
The interesting part is: why was this done in a separate package? I suppose it should be part of .NET Core. Another question: I didn’t see the source code of the Windows Compatibility Pack at https://github.com/dotnet/corefx/tree/master/pkg/Microsoft.Windows.Compatibility . Is it closed source?
The reason it’s a separate package is because (1) many of the APIs are provided for compatibility reasons only. New code shouldn’t depend on them (2) many of the APIs are Windows-only. We don’t want to lead you down a path that makes a cross-platform evolution of your app harder.
Does this pack add support for COM interop, i.e. does it make System.Type.GetTypeFromProgID() function in .NET Core? Some past discussion on this topic: https://github.com/dotnet/standard/issues/55
This package only adds library functionality, not runtime features. Thus, it doesn’t change the level of COM support in .NET Core.
I loved it all!
How about System.Web.UI?