1 Million predictions/sec with Machine Learning Server web service

Microsoft Machine Learning Server Operationalization allows users to publish their R/Python-based models and scripts as “web services” and consume them from a variety of client applications in a scalable, fast, secure, and reliable way. In this blog, I want to demonstrate how you can score more than 1 million predictions per second with ‘Realtime’ web services. Realtime web services…
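As a rough sketch of the publish-and-score flow with the mrsdeploy R package (the endpoint, credentials, model, and service name below are illustrative placeholders, not the setup used for the benchmark):

```r
# Sketch only: assumes Machine Learning Server is running locally on port 12800
# and that the placeholder credentials are replaced with real ones.
library(mrsdeploy)
library(RevoScaleR)   # ships with Machine Learning Server
library(rpart)        # provides the sample 'kyphosis' data set

remoteLogin("http://localhost:12800",
            username = "admin",
            password = "<password>",
            session = FALSE)

# Realtime services take a supported model object (e.g. from rxLogit)
# rather than arbitrary R code.
model <- rxLogit(Kyphosis ~ Age + Number + Start, data = kyphosis)

# Publish it as a 'Realtime' web service; name and version are illustrative.
rt_api <- publishService("kyphosisRealtimeService",
                         code = NULL,
                         model = model,
                         v = "v1.0.0",
                         serviceType = "Realtime")

# Score a batch of rows against the service.
scores <- rt_api$consume(kyphosis[, c("Age", "Number", "Start")])

remoteLogout()
```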


Amazon’s AWS Lambda and Alexa Skill with Microsoft Machine Learning Server Operationalization

In my previous blog, I talked about creating Enterprise-friendly Java clients for Machine Learning Server web services. In this blog, we will see how easily we can extend this Java client to (1) work with an Amazon AWS Lambda function and (2) call this Lambda function from an Amazon Alexa skill. For this demo, I will assume that you have published…


Host chat-bots with Microsoft Machine Learning Server Operationalization

Microsoft Machine Learning Server Operationalization allows users to create remote R or Python sessions, build machine learning models using their favorite R/Python packages, and publish them as ‘web services’. While web services are perfect for prediction/scoring scenarios, they are inherently stateless and hence may not be suited for chat-bot-like use cases, which are often stateful. In this blog,…
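For context, here is a hedged sketch of the stateful remote-session pattern with mrsdeploy; the endpoint, credentials, and the ‘conversation’ variable are made up for illustration, and the actual chat-bot wiring is covered in the post:

```r
# Sketch only: endpoint and credentials are placeholders.
library(mrsdeploy)

# session = TRUE creates a remote R session on the server; variables created
# there persist between calls, which is what a stateful bot needs.
remoteLogin("http://localhost:12800",
            username = "admin",
            password = "<password>",
            session = TRUE,
            commandline = FALSE)

# First turn of the "conversation": state is created on the server.
remoteExecute("conversation <- c('Hi, how can I help you?')")

# A later turn appends to the same server-side state.
remoteExecute("conversation <- c(conversation, 'Checking your order status...')")

# Pull the accumulated state back to the client when needed;
# consoleOutput holds the printed output of the remote code.
result <- remoteExecute("print(conversation)")
cat(result$consoleOutput)

remoteLogout()
```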


Enterprise-friendly Java client for Microsoft Machine Learning Server

Machine Learning Server Operationalization allows users to develop powerful R/Python machine learning models and publish them as ‘web services’. These web services can then be consumed by different types of clients. Users can use the mrsdeploy R package on their client machines to perform ‘remote execution’, creating remote R sessions on a server where Machine Learning Server is installed, and develop…
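As a rough illustration of the publishing side (the service name, function, and credentials are placeholders), a service can be published from R and its Swagger definition saved, so that a typed Java client can be generated from it with a tool such as swagger-codegen:

```r
# Sketch only: service name, version, and credentials are illustrative.
library(mrsdeploy)

remoteLogin("http://localhost:12800",
            username = "admin",
            password = "<password>",
            session = FALSE)

# A trivial script-based web service: one numeric input, one numeric output.
add_one <- function(x) { x + 1 }

api <- publishService("addOneService",
                      code = add_one,
                      inputs = list(x = "numeric"),
                      outputs = list(answer = "numeric"),
                      v = "v1.0.0")

# Every published service exposes a Swagger (OpenAPI) definition; saving it
# gives client-side developers the contract to generate a Java client against.
swagger <- api$swagger()
cat(swagger, file = "addOneService-swagger.json")

remoteLogout()
```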