Project "Velocity" CTP1 Features



I hope you got a chance to visit our Velocity page, download the bits, and try it out. We are waiting to hear your feedback; you can give it to us by commenting on this blog post or by visiting our Velocity forum.


I want to talk about our CTP1 features in this post.


Our CTP1 features span the following areas. I have kept the descriptions to one line each to keep this entry short; if you want to learn more about these areas, please read our whitepaper or the documentation that comes with the Velocity install.


1.       Support for different cache types

a.       Partitioned Cache

This allows the data to be partitioned among all of the available nodes on which the named cache is defined.

b.      Local Cache

This allows very frequently accessed items to be kept in object form within the application's process space.
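The partitioned cache type can be illustrated with a toy sketch. This is plain Python, not the Velocity API; the class and node layout here are invented for illustration. The idea is simply that hashing a key selects the single node that owns it:

```python
import hashlib

class PartitionedCache:
    """Toy partitioned cache: each key lives on exactly one node."""
    def __init__(self, nodes):
        self.nodes = nodes                       # list of per-node stores (dicts)

    def _node_for(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]   # the hash picks the owning node

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)

cache = PartitionedCache([{}, {}, {}])           # a 3-node "cluster"
cache.put("customer:42", {"name": "Ada"})
assert cache.get("customer:42") == {"name": "Ada"}
# Exactly one of the three node stores actually holds the item.
assert sum(1 for n in cache.nodes if "customer:42" in n) == 1
```

A local cache, by contrast, would be just an in-process dictionary in front of this, trading freshness for the cost of serialization and a network round trip.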

2.       Support for different client types

a.       Simple client

A simple client does not know about routing; it always contacts one node, which forwards the request as needed.

b.      Routing Client

A routing client knows the routing table, so it can contact the node that holds the object directly, saving a network hop.
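The difference between the two client types can be sketched in a few lines of plain Python (again, not the Velocity API; the names and hop counting are invented for illustration). A simple client always talks to one fixed node and relies on forwarding; a routing client consults the routing table itself and goes straight to the owner:

```python
import zlib

def hash_of(key):
    return zlib.crc32(key.encode())              # deterministic toy hash

class Node:
    """One cache node; it knows the node list so it can forward requests."""
    def __init__(self, nodes):
        self.nodes, self.store = nodes, {}

    def owner(self, key):
        return self.nodes[hash_of(key) % len(self.nodes)]

    def get(self, key, hops=1):
        o = self.owner(key)
        if o is self:
            return self.store.get(key), hops     # we own it: serve locally
        return o.get(key, hops + 1)              # forward: one extra network hop

nodes = []
nodes.extend(Node(nodes) for _ in range(3))
owner = nodes[hash_of("k") % 3]
owner.store["k"] = "v"

# Simple client: always asks nodes[0], which forwards if needed.
value, hops_simple = nodes[0].get("k")
# Routing client: uses the routing table to hit the owner directly.
value2, hops_routing = owner.get("k")
assert value == value2 == "v"
assert hops_routing == 1 and hops_simple >= hops_routing
```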

3.       Deployment topology

a.       Cache Service

In CTP1, you can configure a cluster of servers to host your cache and provide a cache service. Any number of clients can access this cache service.

4.       Concurrency Models

All cache operations are atomic, and we provide the two models below for working with updates.

a.       Optimistic concurrency model

“Velocity” supports version-based updates: with this model, when you retrieve an object you also get its version. When you put the updated object back into the cache, you pass in that version; if it matches the object's current version in the cache, your update succeeds.

b.      Pessimistic locking

“Velocity” also supports explicitly locking an object to perform updates.
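The optimistic model above can be sketched in plain Python (not the Velocity API; the class and method names are invented for illustration). A versioned put succeeds only if nobody has updated the object since it was read:

```python
class VersionedCache:
    """Toy optimistic-concurrency cache: every value carries a version."""
    def __init__(self):
        self.data = {}                            # key -> (value, version)

    def put(self, key, value):
        _, v = self.data.get(key, (None, 0))
        self.data[key] = (value, v + 1)

    def get_with_version(self, key):
        return self.data[key]

    def put_if_version(self, key, value, version):
        """Succeed only if the object is unchanged since we read it."""
        if self.data[key][1] != version:
            return False                          # stale version: caller must retry
        self.data[key] = (value, version + 1)
        return True

cache = VersionedCache()
cache.put("item", "v1")
value, ver = cache.get_with_version("item")
cache.put("item", "v2")                           # a concurrent writer sneaks in
assert cache.put_if_version("item", "mine", ver) is False   # our version is stale
value, ver = cache.get_with_version("item")       # re-read, then retry
assert cache.put_if_version("item", "mine", ver) is True
```

Pessimistic locking would instead take an explicit lock before the read and hold it across the update, blocking other writers rather than detecting conflicts afterwards.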

5.       Expiration by TTL & Eviction using LRU

“Velocity” allows you to deal with stale objects by setting expiration policies using TTL, and to maintain available memory capacity through eviction.
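A minimal sketch of how TTL expiration and LRU eviction combine, in plain Python (not the Velocity API; the class name and parameters are invented for illustration):

```python
import time
from collections import OrderedDict

class TtlLruCache:
    """Toy cache: items expire after a TTL, and LRU eviction caps the size."""
    def __init__(self, capacity, ttl_seconds):
        self.capacity, self.ttl = capacity, ttl_seconds
        self.items = OrderedDict()                # key -> (value, expiry time)

    def put(self, key, value):
        self.items[key] = (value, time.monotonic() + self.ttl)
        self.items.move_to_end(key)               # most recently used goes last
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)        # evict the least recently used

    def get(self, key):
        entry = self.items.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() > expiry:
            del self.items[key]                   # lazily drop stale objects
            return None
        self.items.move_to_end(key)               # a hit refreshes recency
        return value

c = TtlLruCache(capacity=2, ttl_seconds=60)
c.put("a", 1); c.put("b", 2)
c.get("a")                                        # touch "a" so "b" is now LRU
c.put("c", 3)                                     # over capacity: "b" is evicted
assert c.get("b") is None and c.get("a") == 1 and c.get("c") == 3
```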

6.       Load Balancing & Dynamic Scaling

With “Velocity”, you can add new nodes to your cluster when you need to increase the amount of data you cache, increase throughput, or decrease response time. “Velocity” does implicit load balancing: new data is cached on the new machines, and existing partitions may also be migrated to the new machines to balance the load.

7.       ASP.Net Integration

“Velocity” provides a session store provider that allows you to store your ASP.NET session state in the “Velocity” cache. This enables non-sticky routing, allowing your application to scale.

8.       Key based Access

“Velocity” supports simple key-based access. The cache access methods take in a key that uniquely identifies the object.

9.       Tag Based Access

You can also associate tags with each object to describe it, and retrieve objects based on tag values.
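Key-based and tag-based access can be contrasted with one last plain-Python sketch (not the Velocity API; the names are invented for illustration). Keys look up a single object; a tag index maps each tag to the set of keys that carry it:

```python
from collections import defaultdict

class TaggedCache:
    """Toy cache with key-based access plus a tag index for lookups."""
    def __init__(self):
        self.by_key = {}
        self.by_tag = defaultdict(set)            # tag -> keys carrying that tag

    def put(self, key, value, tags=()):
        self.by_key[key] = value
        for tag in tags:
            self.by_tag[tag].add(key)

    def get(self, key):
        return self.by_key.get(key)               # simple key-based access

    def get_by_tag(self, tag):
        return [self.by_key[k] for k in sorted(self.by_tag[tag])]

c = TaggedCache()
c.put("p1", "toy car",  tags=("toys", "sale"))
c.put("p2", "toy boat", tags=("toys",))
assert c.get("p1") == "toy car"                   # unique key -> one object
assert c.get_by_tag("toys") == ["toy car", "toy boat"]
```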




What’s beyond CTP1 –

1.       Availability – support for Failover when machines go down

2.       Replicated Cache – another cache type

3.       Embedded Topology – run the cache embedded within your application instead of as a cache service

4.       Notifications – get notified when an object in the cache is updated

5.       Consistency Models – Support for both weak and strong consistency when doing reads/writes

6.       Native client access to the cache service (e.g. PHP, C++, etc.)

7.       Manageability & Administration


We are working on our next CTP plans. If a feature in the list above is really important to you, or if there are features you expect to see but are missing from this list, let us know.



Nithya Sampathkumar

Program Manager

Comments (13)

  1. omario says:

    Is it something like memcached and Alachisoft’s NCache?

  2. omario says:

    Will "Velocity" support System.Transactions?

  3. Microsoft Project Codename "Velocity" : Introducing Project Codename "Velocity" (News)

  4. says:

    We would be using the memory cache instead of going directly to the database, sort of in a lazy write mechanism.  We are in an environment where a user hits our database for the same state data every 30 seconds or so, and updates that data.  So, to implement this, we would want to have the web application reading and writing from the memory cache.  We would like a queue mechanism to know what is dirty and still on the memory cache.  We would then have a service that reads from this queue the dirty objects, and writes them out to the database.  It is possible that this service is the notification mechanism you mentioned.

  5. velocityteam says:

    >> We would like a queue mechanism to know what is dirty and

    >> still on the memory cache.  We would then have a service that

    >> reads from this queue the dirty objects, and writes them out to

    >> the database.  It is possible that this service is the notification

    >> mechanism you mentioned.

    Notifications seem like a reasonable mechanism to achieve this. We will also consider, in the future, a mechanism to identify the set of changes since a point in time.


    S Muralidhar (MSFT)

  6. velocityteam says:

    >> Will "Velocity" support System.Transactions?

    As of this CTP, it doesn’t. Currently each operation (Get/Put/Delete) is considered an atomic operation. When we start dealing with multi-object operations, we will consider the use of System.Transactions.


    S Muralidhar (MSFT)

  7. velocityteam says:

    >> Is it something like memcached and  Alachisoft’s NCache?

    There is certainly a set of similar functionality, especially with regards to the core.


    S Muralidhar (MSFT)

  8. The Microsoft Data team is working on a caching project called "Velocity"…

  9. says:

    I would like to see full async support for C++, with I/O completion ports.

  10. sugupta00 says:

    A few things that I think would be important developments:

    1) The ability to write multiple objects in one go instead of writing in a loop.

    2) Extending the search methods, e.g. providing support for SQL-style queries to search objects, something similar to LINQ for collections.

    3) Maybe extending support for persistence: providing some kind of interface which can be implemented by the user to provide O/R mapping.

  11. Fly, fly, little petal, Through the west to the east, Through the north, through the south, Come back, having made a circle. As soon as you touch

  12. john.z says:

    Will Velocity provide an output cache provider?

  13. DarthZar says:

    Will 64-bit support be coming?  That’s a dealbreaker in my opinion, to not be able to address more than 2 GB of RAM in an instance.  64-bit support would make this world-class, on par with many of the commercial products costing tens of thousands of $$.

    Thanks for the great work!
