A New Generation of Features for Programming Languages

Recently Ted Leung posted a blog entry entitled Linguistic futures in which he summarized a number of recent discussions in the blogosphere about potential new features for the current crop of popular programming languages. He wrote:

1. Metaprogramming facilities

Ian Bicking and Bill Clementson were the primary sources on this particular discussion. Ian takes up the simplicity argument, which is that metaprogramming is hard and should be limited -- of course, this gets you things like Python 2.4 decorators, which some people love, and some people hate. Bill Mill hates decorators so much that he wrote the redecorator, a tool for replacing decorators with their "bodies". 

2. Concurrency

Tim Bray and Herb Sutter provided the initial spark here. The basic theme is that the processor vendors are finding it really hard to keep the clock speed increases going (that's actually been a trend for all of 2004), so they're going to start putting more cores on a chip... But the big takeaway for software is that uniprocessors are going to get better a lot more slowly than we are used to. So that means that uniprocessor efficiency matters again, and finding the concurrency in your program is also going to be important. This impacts the design of programming languages as well as the degree of skill required to really get performance out of the machine...

Once that basic theme went out, then people started digging up relevant information. Patrick Logan produced information on Erlang, Mozart, ACE, Doug Lea, and more. Brian McCallister wrote about futures and then discovered that they are already in Java 5.

It seems to me that Java has the best support for threaded programming. The dynamic languages seem to be behind on this, which must change if these predictions hold up.

3. Optional type checking in Python

Guido van Rossum did a pair of posts on this topic. The second post is the scariest because he starts talking about generic types in Python, and after seeing the horror that is Java and C# generics, it doesn't leave me with warm fuzzies.

Patrick Logan, PJE, and Oliver Steele had worthwhile commentary on the whole mess. Oliver did a good job of breaking out all the issues, and he worked for quite a while on Dylan which had optional type declarations. PJE seems to want types in order to do interfaces and interface adaptation, and Patrick's position seems to be that optional type declarations were an artifact of the technology, but now we have type inference so we should use that instead. 

Coincidentally, I recently finished writing an article about Cω, which integrates both optional typing via type inference and concurrency constructs into C#. My article indirectly discusses the existence of type inference in Cω but doesn't go into much detail, and it leaves out the concurrency extensions entirely, primarily due to space constraints. I'll give a couple of examples of both features in this blog post.

Type inference in Cω allows one to write code such as

 public static void Main(){
     x = 5;
     Console.WriteLine(x.GetType()); // prints "System.Int32"
 }

This feature is extremely beneficial when writing queries using the SQL-based operators in Cω. Type inference allows one to turn the following Cω code

 public static void Main(){
     struct{SqlString ContactName; SqlString Phone;} row;
     struct{SqlString ContactName; SqlString Phone;}* rows = select ContactName, Phone from DB.Customers;

     foreach( row in rows ){
         Console.WriteLine("{0}'s phone number is {1}", row.ContactName, row.Phone);
     }
 }

to

 public static void Main(){
     foreach( row in select ContactName, Phone from DB.Customers ){
         Console.WriteLine("{0}'s phone number is {1}", row.ContactName, row.Phone);
     }
 }

In the latter code fragment the type of the row variable is inferred, so it doesn't have to be declared. The variable looks dynamically typed but really isn't, since the type checking is still done at compile time. This seems to offer the best of both worlds: the programmer can write code as if it were dynamically typed yet is warned at compile time when a type mismatch occurs.
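For instance, a hypothetical snippet like the one below should be rejected by the Cω compiler rather than failing at runtime (I'm assuming here that the inferred type of a variable is fixed by its first assignment):

 public static void Main(){
     x = 5;          // x is inferred to be System.Int32
     x = "Hello";    // compile-time error: a string cannot be assigned to an int
 }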

As for concurrent programming, many C# developers have embraced the power of using delegates for asynchronous operations. This is one place where I think C# and the .NET Framework did a much better job than the Java language and the JVM. If Ted likes what exists in the Java world, I bet he'll be blown away by the concurrent programming techniques available in C# and .NET.
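For readers unfamiliar with the pattern, here is a minimal sketch of asynchronous delegate invocation in C#; the LengthyComputation delegate and the Square() method are made up for illustration, but BeginInvoke() and EndInvoke() are the standard members the .NET Framework generates for every delegate type.

 using System;
 using System.Threading;

 class AsyncDelegateExample {
     // Hypothetical delegate type for a long-running computation.
     delegate int LengthyComputation(int input);

     static int Square(int input) {
         Thread.Sleep(1000); // simulate an expensive operation
         return input * input;
     }

     public static void Main() {
         LengthyComputation work = new LengthyComputation(Square);

         // BeginInvoke() runs the delegate on a thread pool thread and returns immediately.
         IAsyncResult handle = work.BeginInvoke(42, null, null);

         Console.WriteLine("Doing other work while the computation runs...");

         // EndInvoke() blocks until the asynchronous call completes and returns its result.
         int result = work.EndInvoke(handle);
         Console.WriteLine("42 squared is {0}", result);
     }
 }

Cω takes the support for asynchronous programming further by adding mechanisms for tying methods together in the same way a delegate and its callbacks are tied together. Take the following class definition as an example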

 public class Buffer {
     public async Put(string s);

     public string Get() & Put(string s) {
         return s;
     }
 }

In the Buffer class a call to the Get() method blocks until a corresponding call to the Put() method is made. Once this happens, the parameters of the Put() call are treated as local variable declarations in the body of Get() and the code block runs. A call to the Put() method, on the other hand, returns immediately while its arguments are queued as inputs to a matching call to Get(). This assumes that each Put() call eventually has a corresponding Get() call and vice versa.
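To make the behavior concrete, here is a hypothetical usage sketch; the string value and the single-threaded calling sequence are mine rather than from the Cω documentation, but they follow from the class definition above.

 public static void Main(){
     Buffer buffer = new Buffer();

     buffer.Put("Hello");             // returns immediately; "Hello" is queued
     Console.WriteLine(buffer.Get()); // matches the queued Put() call and prints "Hello"

     // Calling Get() on an empty buffer would instead block the calling thread
     // until some other thread called Put().
 }

In practice Put() and Get() would typically be called from different threads, with the chord providing the synchronization between producer and consumer.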

There are a lot more complicated examples in the documentation available on the Cω website.