I believe that the best way to learn is to learn from each other, so I’m always on the lookout for stories of Canadian developers who have either built new applications using Windows Azure services or migrated existing applications to Windows Azure. This is the story of Druida.
- [04:33] (Mapping on-premises components to Windows Azure):
“For the application server, the Web Role is typically our application server. The file server, also, is Blob storage. On-premise they [customers] have the storage [server] to put the files. Now you have blobs to put the files. It is exactly the same, the only thing is that the application server, the Web Role, processing the information it checks – ‘I’m in Azure, ok, let’s go to Blob Storage. I’m not in Azure, I go to the storage server.’”
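The routing Daniel describes, with the same application code talking to Blob storage in the cloud and a storage server on-premises, can be sketched as follows. This is an illustrative sketch only: the class names and the `RUNNING_IN_AZURE` flag are hypothetical stand-ins, and real code would call the Azure storage SDK rather than these in-memory placeholders.

```python
import os

class BlobStore:
    """Stand-in for an Azure Blob storage client (real code would use the SDK)."""
    def __init__(self):
        self._blobs = {}
    def save(self, name, data):
        self._blobs[name] = data
    def load(self, name):
        return self._blobs[name]

class FileServerStore:
    """Stand-in for the on-premises storage server."""
    def __init__(self):
        self._files = {}
    def save(self, name, data):
        self._files[name] = data
    def load(self, name):
        return self._files[name]

def make_store():
    # "I'm in Azure, ok, let's go to Blob Storage. I'm not in Azure,
    # I go to the storage server." (RUNNING_IN_AZURE is a made-up flag;
    # a real Web Role would ask the role environment instead.)
    if os.environ.get("RUNNING_IN_AZURE") == "1":
        return BlobStore()
    return FileServerStore()

store = make_store()
store.save("report.pdf", b"%PDF...")
```

Because both backends expose the same `save`/`load` interface, the rest of the application server never needs to know where it is running, which is what makes the two deployments "exactly the same" from the code's point of view.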
- [05:41] (Learning curve of a .NET developer to the Cloud):
“From the developer perspective, the learning curve is kind of easy. The challenge was to make it work in a seamless way for the emulator to put all of the different pieces of Azure together. … It wasn’t the language experience of the developer, it was more the meta stuff, the stuff that goes on top to make Azure work that is more of a steeper curve. … From the point of view of a developer, there was not much to learn.”
- [07:45] (Architectural challenges):
“… the config files. Because in Azure, when you put stuff there, the web.config is kind of locked, and it gets deployed to the roles. To make changes, I have to redeploy everything. So what we did (that was one of the biggest changes), we took out the information from the web.config and put it into a blob… that will trigger an event called RoleEnvironment_Changed on the VMs so they know that something changed in the config file and they go and get the new one [Continues]”
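The pattern Daniel describes, moving settings out of the locked web.config into a blob and having each instance refresh its cached copy when notified of a change, can be sketched roughly like this. All names here are hypothetical: `ConfigBlob` stands in for the blob holding the settings, and the `subscribe`/`on_changed` callback plays the part of the RoleEnvironment_Changed handler on each VM.

```python
class ConfigBlob:
    """Stand-in for a blob that holds the shared application configuration."""
    def __init__(self, settings):
        self._settings = dict(settings)
        self._listeners = []

    def subscribe(self, callback):
        self._listeners.append(callback)

    def update(self, key, value):
        # Writing new settings notifies every instance, the way the
        # RoleEnvironment_Changed event tells the VMs "something changed".
        self._settings[key] = value
        for callback in self._listeners:
            callback()

    def read(self):
        return dict(self._settings)

class RoleInstance:
    """A VM that caches the config and goes to get the new one on change."""
    def __init__(self, blob):
        self._blob = blob
        self.config = blob.read()
        blob.subscribe(self.on_changed)

    def on_changed(self):
        self.config = self._blob.read()

blob = ConfigBlob({"MaxUploadMb": "10"})
roles = [RoleInstance(blob) for _ in range(3)]
blob.update("MaxUploadMb", "50")
# every instance now holds the new setting, with no redeployment
```

The payoff is the one named in the quote: a configuration change becomes a blob write plus a notification, instead of redeploying every role.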
- [09:42] (Rationale of going to Windows Azure):
“We had business reasons and technical reasons. On the business side, one is that we are not kind of a company that will be proud of having a lot of servers and a lot of infrastructure to take care of, a lot of databases, so when we had to provide offsite services to our customers, it was not a good idea to grow the company on things that we are not experts. … to have someone, like Microsoft, to take care of all of the infrastructure so the customers don’t need to take care of that, we don’t have to take care of that, so we have this middle area that do the best thing on that area – that was great. … On the technical side, the application was already multi-tenant, multi-lingual, had all of these pieces that matched perfectly onto Azure Web Roles, the blob, so it was not so hard – you don’t have to create a new team to do the Azure stuff, it was just ‘ok, you have the application, you did it in this way, it’s very easy, technically, to move it to Azure. [Continues]”
- [13:15] (Obstacles and things that were not as obvious):
“The configuration stuff was complicated and that was one of the challenges that we had to cope with. The other thing was logging. Logging was hard. Now it is much more mature. So when I speak about what were the challenges, sometimes you speak about something that is already solved. … It is very important to test it [your application] when you put it in Azure and not trust just what you have in the Compute emulator. It does a great job but sometimes there are some things that will not work exactly as they’re supposed to.”
- [14:45] (Training resources):
“MSDN. We tried to buy books, but actually you find that, at this era, sometimes books are too late. So you get the book and the information you’re reading then is misleading because you have the new SDK that has some changed parameters. … the good thing about the book is the introduction and the first chapter that talks about the architecture, the ideas, but when you get into the details, then the books are not useful. Finding stuff on the web and reading MSDN articles were the main sources of training.”
- [15:45] (Understanding information from the web):
“Looking at fragmented information from the web doesn’t give you the whole picture about the architecture of Azure. I think that it is good to have some kind of key books to read about – these introductory chapters … these first chapters speak a lot about how it is working, and that is very important. And sometimes if you go straight to developing, then you lose time, then you don’t know exactly why this is working this way. … Sometimes the why and the what is there at the beginning and then the how you find more on the Internet. … Sometimes time pressures make suboptimal solutions. So sometimes spending a little bit more time on learning the background allows you to make more informed decisions after.”
Daniel Firka is the founder and CEO of DRUIDA Software & Consulting, and the chief architect, designer, and developer of the Druida SPAC software. He is an Industrial Engineer and a Sociologist, obtained an MSc in Biomedical Engineering, and is currently finishing his MSc in Statistics at the University of Toronto. He is also a Quality and Reliability Engineer certified by the American Society for Quality (ASQ). He is a Director at the Argentinean Institute for Quality and, in his spare time, develops software for advanced biomedical applications.
DRUIDA is a company specializing in consulting and software development for Quality Assurance and Statistical Analysis. For almost 20 years it has developed a set of solutions covering several business needs related to Continuous Improvement, all under the flagship product named SPAC. The product discussed in this interview, SPAC IPP, is an auditing tool used to evaluate suppliers and internal departments.
Since its creation DRUIDA has leveraged Microsoft technologies to achieve high quality software. The latest innovation is the deployment of SPAC IPP on Windows Azure.