Windows Command-Line: The Evolution of the Windows Command-Line

Rich Turner

Welcome to the second post in this “Windows Command-Line” series. In this post we’ll discuss some of the background & history behind the Windows Command-Line. Specifically, we’ll trace its path from its humble origins in MS-DOS to its modern-day incarnation supporting tools like PowerShell and the Windows Subsystem for Linux.

Posts in the “Windows Command-Line” series

Note: This chapter list will be updated as more posts are published:

  1. Command-Line Backgrounder
  2. The evolution of the Windows Command-Line [this post]
  3. Inside the Windows Console
  4. Introducing the Windows Pseudo Console (ConPTY)
  5. Unicode and UTF-8 Output Text Buffer

In this series’ previous post, we discussed the history and fundamentals of the Command-Line, and saw how the architecture of Command-Lines in general has remained largely consistent over time, even while terminals evolved from electro-mechanical teletypes through to modern terminal applications. Our journey now continues along a rather tangled path, starting with early PCs, winding through Microsoft’s involvement with several Operating Systems, to the newly reinvigorated Command-Line of today:

From humble beginnings – MS-DOS

Back in the early years of the PC industry, most computers were operated entirely by typing commands into the command-line. Machines based on Unix, CP/M, DR-DOS, and others tussled for position and market share. Ultimately, MS-DOS rose to prominence as the de facto standard OS for IBM PCs & compatibles, especially in businesses:

Like most mainstream Operating Systems of the time, MS-DOS’ “Command-Line Interpreter”, or “shell”, provided a simple, quirky, but relatively effective set of commands, and a command-scripting syntax for writing batch (.bat) files. MS-DOS was very rapidly adopted by businesses large and small, which, between them, created many millions of batch scripts, some of which are still in use today! Batch scripts are used to automate the configuration of users’ machines, set/change security settings, update software, build code, etc.

You may never/rarely see batch or command-line scripts running since many are executed in the background while, for example, logging into a work PC. But hundreds of billions of command-line scripts and commands are executed every day on Windows alone! While the Command-Line is a powerful tool in the hands of those with the patience and tenacity to learn how to make the most of the available commands and tools, most non-technical users struggled to use their Command-Line driven computers effectively, and most disliked having to learn and remember many seemingly arcane/abbreviated commands to make their computers do anything useful. A more user-friendly, productivity-oriented user experience was required.

The GUI goes mainstream

Enter the Graphical User Interface (GUI), inspired by the work of the Xerox Alto. Many competing GUIs emerged rapidly on the Apple Lisa and Macintosh, Commodore Amiga (Workbench), Atari ST (DRI’s GEM), Acorn Archimedes (Arthur/RISC OS), Sun Workstation, X11/X Window System, and many others, including Microsoft Windows: Windows 1.0 arrived in 1985, and was basically an MS-DOS application that provided a simple tiled-window GUI environment, allowing users to run several applications side-by-side:

Windows 1.x, 2.x, 3.x, 95, and 98 all ran atop, and relied heavily upon, their MS-DOS foundations.

Note: Windows ME (Millennium Edition) was an interesting chimera! It finally removed the real-mode MS-DOS underpinnings of previous consumer versions of Windows, and adopted some features from Windows 2000 (e.g. the new TCP/IP stack), tuned to run on home PCs that often struggled to run full NT. This story might end up being an interesting post in and of itself someday! 😉 (Thanks Bees for your thoughts on this)

However, Microsoft knew that it could only stretch the architecture and capabilities of MS-DOS and Windows so far: It needed a new Operating System upon which to build its future.

Microsoft – Unix Market Leader! Yes, seriously!

While developing MS-DOS, Microsoft was also busy delivering Xenix – Microsoft’s port of Unix version 7 – to a variety of processor and machine architectures including the Z8000, 8086/80286, and 68000.

By 1984, Microsoft’s Xenix had become the world’s most popular Unix variant! However, the US Government’s breakup of the Bell System freed AT&T – home of Unix – to begin selling Unix System V to computer manufacturers and end-users. Microsoft felt that without an OS of its own, its ability to achieve its future goals would be compromised, and so decided to transition away from Xenix: In 1987, Microsoft transferred ownership of Xenix to its partner, The Santa Cruz Operation (SCO), with whom it had worked on several projects to port and enhance Xenix on various platforms.

Microsoft + IBM == OS/2 … briefly

In 1985, Microsoft began working with IBM on a new Operating System called OS/2. OS/2 was originally designed to be “a more capable DOS”, taking advantage of the modern processors and other technology rapidly emerging from OEMs, including IBM.

However, the story of OS/2 was tumultuous at best: In 1990, Microsoft and IBM ended their collaboration, due to a number of factors, including significant cultural differences between the IBM and Microsoft developers, scheduling challenges, and the explosive success and growth in adoption of Windows 3.0. IBM continued development & support of OS/2 until the end of 2006.

By 1988, Microsoft was convinced that its future success required a bigger, bolder approach – one built upon a new, modern Operating System that could support the company’s ambitious goals.

Microsoft’s Big Bet – Windows NT

In 1988, Microsoft hired Dave Cutler, creator of DEC’s popular and much-respected VAX/VMS Operating System. Cutler’s goal: to create a new, modern, platform-independent Operating System that Microsoft would own, control, and base much of its future upon.

That new Operating System became Windows NT – the foundation that evolved into Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8, and Windows 10, as well as all versions of Windows Server, Windows Phone 7+, Xbox, and HoloLens!

Windows NT was designed from the start to be platform independent, having initially been built to support Intel’s i860, then the MIPS R3000, Intel 80386+, DEC Alpha, and PowerPC. Since then, the Windows NT OS family has been ported to support the IA64 “Itanium”, x64, and ARM / ARM64 processor architectures, among others.

Windows NT provided a Command-Line interface via its “Windows Console” terminal app and its “Command Prompt” shell (cmd.exe). Cmd was designed to be as compatible as possible with MS-DOS batch scripts, to help ease businesses’ adoption of the new platform.

The Power of PowerShell

While the Cmd shell remains in Windows to this day (and will likely do so for many decades to come), Cmd will receive few changes in the future because its primary purpose is to remain as backward-compatible as possible. Even fixing Cmd’s “bugs” is sometimes difficult if those “bugs” existed in MS-DOS or earlier versions of Windows!

By the early 2000’s, it was clear that the Cmd shell was already running out of steam, and Microsoft and its customers were in urgent need of a more powerful and flexible Command-Line experience. This need fueled the creation of PowerShell (which originated from Jeffrey Snover’s “The Monad Manifesto”).

PowerShell is an object-oriented Shell, unlike the file/stream-based shells typically found in the *NIX world: Rather than handling streams of text, PowerShell processes streams of objects, giving PowerShell script writers the ability to directly access and manipulate objects and their properties, rather than having to write and maintain a lot of script to parse and manipulate text (e.g. via sed/grep/awk/lex/etc.)
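To make the distinction concrete, here is a minimal sketch in Python (purely illustrative – this is not PowerShell code, and the process records are made-up sample data) contrasting a text-stream pipeline, which must parse columns back out of strings, with an object pipeline, which simply reads named properties:

```python
import re

# Text-based pipeline: "ps"-style output must be parsed with regexes,
# and silently breaks if the column layout ever changes.
text_output = """PID   NAME        MEM
101   sshd        4096
202   nginx       16384
303   postgres    65536"""

heavy_text = [
    re.split(r"\s+", line)[1]                 # extract the NAME column
    for line in text_output.splitlines()[1:]  # skip the header row
    if int(re.split(r"\s+", line)[2]) > 8000  # re-parse MEM into a number
]

# Object-based pipeline: records keep their types and named properties,
# so nothing needs to be parsed (or re-parsed) between pipeline stages.
processes = [
    {"pid": 101, "name": "sshd", "mem": 4096},
    {"pid": 202, "name": "nginx", "mem": 16384},
    {"pid": 303, "name": "postgres", "mem": 65536},
]
heavy_objects = [p["name"] for p in processes if p["mem"] > 8000]

assert heavy_text == heavy_objects == ["nginx", "postgres"]
```

The equivalent PowerShell pipeline (e.g. filtering `Get-Process` output on a property) never serializes to text between stages, which is precisely the design choice described above.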

Built atop the .NET Framework and Common Language Runtime (CLR), PowerShell’s language & syntax were designed to combine the richness of the .NET ecosystem with many of the most common and useful features from a variety of other shells and scripting languages, with a focus on ensuring scripts are highly consistent, and extremely … well … powerful 😃

To learn more about PowerShell, I recommend reading “PowerShell In Action” (Manning Press), written by Bruce Payette – the designer of the PowerShell syntax and language. The first few chapters in particular provide an illuminating discussion of the language design rationale. PowerShell has been adopted by many Microsoft platform technologies and partners, including Windows, Exchange Server, SQL Server, Azure, and many others, and provides commands to administer and control practically every aspect of a Windows machine and/or environment in a highly consistent manner.

PowerShell Core is the open-source future of PowerShell, and is available for Windows and various flavors of Linux, BSD, and macOS!

POSIX on NT, Interix, and Services For UNIX

When designing NT, Cutler & team specifically designed the NT kernel and OS to support multiple subsystems – interfaces between user-mode code, and the underlying kernel. When Windows NT 3.1 first shipped in 1993, it supported several subsystems: MS-DOS, Windows, OS/2 v1.3, and POSIX v1.2. These subsystems allowed NT to run applications targeting several Operating System platforms upon the same machine and base OS, without virtualization or emulation – a formidable capability even today!

While Windows NT’s original POSIX implementation was acceptable, it required significant improvements to make it truly capable, so Microsoft acquired Softway Systems and its “Interix” POSIX-compliant NT subsystem.

For the fascinating inside story on the origins, growth, and acquisition of Interix, read Stephen Walli’s two-part story here: Part 1, and Part 2. For more technical details behind Interix and how it integrated into Windows, read Stephen’s USENIX paper titled “INTERIX: UNIX Application Portability to Windows NT via an Alternative Environment Subsystem”. Interix was originally shipped as a separate add-on, then later combined with several useful utilities and tools and released as “Services For Unix” (SFU) in Windows Server 2003 R2 and Windows Vista. However, SFU was discontinued after Windows 8.

And then a funny thing happened…

Windows 10 – a new era for the Windows command-line!

Early in Windows 10’s development, Microsoft opened up a UserVoice page, asking the community what features they wanted in various areas of the OS. The developer community was particularly vociferous in its requests that Microsoft:

  1. Make major improvements to the Windows Console
  2. Give users the ability to run Linux tools on Windows

Based on that feedback, Microsoft formed two new teams:

  1. The Windows Console & command-line team, charged with taking ownership of, and overhauling the Windows Console & command-line infrastructure
  2. A team responsible for enabling genuine, unmodified Linux binaries to run on Windows 10 – the Windows Subsystem for Linux (WSL)

The rest, as they say, is history!

Windows Subsystem for Linux (WSL)

Adoption of GNU/Linux-based “distributions” (combinations of the Linux kernel and collections of user-mode tools) had been growing steadily, especially on servers and in the cloud. While Windows had a POSIX-compatible runtime, SFU lacked the ability to run many Linux tools and binaries because of the latter’s additional system calls and behavioral differences vs. traditional Unix/POSIX. Due to the feedback received from technical Windows customers and users, along with increasing demand inside Microsoft itself, Microsoft surveyed several options, and ultimately decided to enable Windows to run unmodified, genuine Linux binaries!

In mid-2014, Microsoft formed a team to work on what would become the Windows Subsystem for Linux (WSL). WSL was first announced at Build 2016, and was previewed in Windows 10 Insider builds shortly afterwards. In most Insider builds since then, and in each major OS release since the Anniversary Update in 2016, WSL’s feature-breadth, compatibility, and stability have improved significantly:

When WSL was first released, it was an interesting experiment: it ran several common Linux tools, but failed to run many common developer tools and platforms. The team iterated rapidly, and with considerable help from the community (thanks all!), WSL quickly gained many new capabilities, enabling it to run increasingly sophisticated Linux binaries and workloads.

Today (mid-2018), WSL happily runs the majority of Linux binaries, tools, compilers, linkers, debuggers, etc. Developers, IT Pros, DevOps engineers, and many others who need to run or build Linux tools, apps, services, etc. enjoy dramatically improved productivity, able to run their favorite Linux tools alongside their favorite Windows tools on the same machine, without needing to dual-boot.

The WSL team continues to improve WSL’s ability to run more Linux scenarios, its performance, and its integration with the Windows experience.

The Windows Console Reboot and Overhaul

In late 2014, with the project to build the Windows Subsystem for Linux (WSL) in full swing, and amid an explosion of reinvigorated interest in all things Command-Line, the Windows Console was … well … clearly in need of some TLC, and required many improvements frequently requested by customers and users.

In particular, the Console lacked many features expected of modern *NIX-compatible systems, such as the ability to parse & render the ANSI/VT sequences used extensively in the *NIX world for rendering rich, colorful text and text-based UIs. What, then, would be the point of building WSL if the user would not be able to see and use Linux tools correctly?
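To make this concrete, here is a minimal Python sketch (illustrative only – not Console code) of the kind of ANSI/VT “Select Graphic Rendition” (SGR) sequences such tools emit, including the 24-bit color form:

```python
ESC = "\x1b"  # the escape character that introduces a VT control sequence

def sgr(*params: int) -> str:
    """Build an SGR sequence: ESC [ p1;p2;... m"""
    return f"{ESC}[{';'.join(str(p) for p in params)}m"

# Classic 16-color foreground: parameters 30-37 (normal) and 90-97 (bright)
red_text = sgr(31) + "error" + sgr(0)  # sgr(0) resets all attributes

# 24-bit "true color" foreground: 38;2;<r>;<g>;<b>
orange_text = sgr(38, 2, 255, 165, 0) + "warning" + sgr(0)

assert red_text == "\x1b[31merror\x1b[0m"
assert orange_text == "\x1b[38;2;255;165;0mwarning\x1b[0m"
```

A terminal that understands VT interprets these bytes and renders colored text; one that doesn’t (like the legacy Windows 7 Console) prints them as garbage on screen, which is exactly what the comparison below shows.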

Below is an example of what the Console renders in Windows 7 vs. Windows 10: Note that Windows 7’s Console (left) is unable to correctly render the VT generated by the tmux, htop, Midnight Commander, and cowsay Linux tools, whereas they render correctly in Windows 10 (right):

So, in 2014, a new, small team was formed, charged with the task of unraveling, understanding, and improving the Console code-base … which by this time was ~28 years old – older than the developers working on it!

As any developer who’s ever had to adopt an old, crufty, less-than-optimally-maintained codebase will attest, modernizing old code is generally “tricky”. Doing so without breaking existing behaviors is trickier still. Updating the most frequently launched executable in all of Windows without breaking millions of customers’ scripts, tools, login scripts, build systems, manufacturing systems, analysis and production systems, etc. requires a great deal of “care and patience” 😉

To compound these challenges, the team quickly came to learn how exacting customers’ expectations of the Console are: For example, if Console performance deviates by even a percentage point or two from one build to the next, alarms fire off in the Windows Build team, resulting in … ahem … “swift and direct feedback”, usually demanding immediate fixes. So, when we discuss Console improvements & new features in future articles, remember that there are a few inviolate tenets against which each change is measured, including:

  1. DO NOT introduce/expose new security vulnerabilities
  2. DO NOT break existing customers (internal or external), tools, scripts, commands, etc.
  3. DO NOT regress performance or increase memory consumption / IO (without clear and well communicated reasons)

Over the last 3 years, the Console team has:

  • Massively overhauled the Console’s internals
    • Dramatically simplified, and reduced the volume of code in the Console
    • Replaced several internally implemented collections, lists, stacks, etc. with STL containers
    • Modularized and isolated logical and functional units of code, enabling features to be improved (and on occasion replaced), without “breaking the world”
  • Consolidated several previously separate and incompatible Console engines into one
  • Added MANY reliability, safety, and security improvements
  • Added the ability to parse and render ANSI/VT sequences, enabling the Console to accurately render rich text output from *NIX and other modern command-line tools & apps
  • Enabled the Console to render 24-bit Colors, up from just 16 colors previously!
  • Improved Console accessibility, enabling Narrator and other UIA apps to navigate the contents of the Console Window
  • Added / improved mouse and touch support

And the work continues! We’re currently wrapping up the implementation of a couple of exciting new features that we’ll discuss in upcoming posts in this series.

So, where are we?

If you read this far, congratulations and thank you! 😀

So why the history lesson?

As I hope you can appreciate from the history above, the Command-Line has remained a pivotal component of Microsoft’s strategy, platform, and ecosystem: Even while Microsoft promoted the Windows GUI to general end-users, Microsoft and its technical customers/users/partners have relied heavily on the Windows Command-Line for a multitude of technical tasks.

In fact, Microsoft literally could not build Windows itself, nor any of its other software products, without a fast, efficient, stable, and secure Console! Throughout the MS-DOS, Unix, OS/2, and Windows eras, the Command-Line has remained perhaps the most crucial tool in every technical user’s toolbox! Even the many users who rarely/never type commands into a Console themselves use the Console every day: When you build your code in Visual Studio (VS), your build is spawned in a hidden Console window! If you use Exchange Server or SQL Server’s admin tools, many of those commands are executed via PowerShell in a hidden Console!

In this post, we covered a lot of ground: We reviewed some of Microsoft’s OS history as it pertains to the Command-Line and Windows Console.

We also gained an understanding of the Windows Console’s origins. In the next post, we’ll start digging into the Console’s internals.

Stay tuned for more!

4 comments


  • Troy Giorshev

    I think I found a small error in this great document!  All of the links are offset by one.  The first link, with text “set of commands” points to the history of GUIs, which should probably be with “Graphical User Interface (GUI)”.  And then the link on “Graphical User Interface (GUI)” points to Xerox Alto, etc.

    • Rich Turner (Microsoft employee)

      Many thanks. I’ve fixed the links – they should work fine now :)

  • Subrata Das

    The command prompt for Windows is probably one of the most powerful applications in this operating system, but is far from the eyes of most users yet!

    If you go beyond the more modern features of Windows 10 – such as the revived graphical user interface (GUI), voice commands, and natural language processing (NLP) – you’ll get to the command prompt. It will do what you want, but the drawback is that it’s not as beautiful or intuitive as other interfaces.

    Despite Windows’ new features, the command prompt itself is the same tool that it has always been, and can still be used if you know where to access it from.

    Not only that, but it can also do things that modern interfaces are still struggling to do. Don’t dismiss the command prompt as Windows’ old legacy feature; you need to know it!

    • Rich Turner (Microsoft employee)

      Thanks for sharing Subrata.

      The command-line is indeed one of the most fundamental and powerful tools in any computer and any platform – Windows, Linux, MacOS or elsewhere.

      However, it’s important to note that Cmd – the venerable Windows command-line shell – is also very old, rather fragile, less capable, and much more limited than more modern shells, especially PowerShell.

      If you want/need to use the Windows command-line today, I strongly encourage you to learn and use PowerShell, rather than Cmd.
