John Allspaw discusses fault tolerance, anomaly detection, and anticipation patterns helpful for creating highly available and resilient systems. It is a good talk if you are working on alerting and monitoring.
Ashish Kumar presents how Google keeps the source code of all of its more than 2,000 projects in a single trunk containing hundreds of millions of lines of code, with more than 5,000 developers accessing the same repository.
Janet Gregory discusses changing a tester’s mindset from “How can I break the software?” to “How can I help deliver excellent software?”, using examples and advising how to apply it in agile projects.
Adrian Smith covers symptoms, root problems, and guidance on recommended solutions for avoiding automated testing mistakes.
Anne-Marie Charrett advises developing a testing mindset and a tester skillset that helps testers embrace disruption instead of fighting it.
Leslie Lamport makes the case for separating the design details of what a program should do and how it should work from the business of writing code, and discusses how the design process should work. This is not a testing talk; Lamport, of Microsoft Research, won the 2013 Turing Award for imposing clear, well-defined coherence on the seemingly chaotic behavior of distributed systems.
Elisabeth Hendrickson is the Director of Quality Engineering for Cloud Foundry at Pivotal Labs. She is the award-winning author of "Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing" and tweets as @testobsessed.
Emma Armstrong shows how to use Selenium and NUnit to automate web testing for C# applications. The session is also useful to developers working in the other languages Selenium supports: Java, Python, and Ruby.
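The structure such tests usually take is Selenium's page-object pattern. Below is a minimal sketch in Python rather than C#/NUnit, using a stub driver so it runs without a browser; the page name and element ids are hypothetical, not from the talk.

```python
class LoginPage:
    """Page object: wraps raw Selenium calls behind intent-revealing methods."""

    def __init__(self, driver):
        # `driver` is normally a selenium.webdriver instance; any object with
        # a compatible find_element() works, which keeps page objects testable.
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element("id", "username").send_keys(user)
        self.driver.find_element("id", "password").send_keys(password)
        self.driver.find_element("id", "submit").click()


class StubElement:
    """Records interactions instead of touching a real browser."""

    def __init__(self, log, locator):
        self.log, self.locator = log, locator

    def send_keys(self, text):
        self.log.append(("type", self.locator))

    def click(self):
        self.log.append(("click", self.locator))


class StubDriver:
    def __init__(self):
        self.log = []

    def find_element(self, by, locator):
        return StubElement(self.log, locator)


driver = StubDriver()
LoginPage(driver).log_in("alice", "secret")
```

Swapping the stub for a real `selenium.webdriver.Chrome()` instance turns the same page object into a live browser test.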
Wojciech Seliga shares from experience how complex it can be to deal with thousands of tests (unit, functional, integration, performance) for Atlassian JIRA, and what they did to bring it under control.
Dustin Whittle shares the latest performance testing tools and insights into why your team should add performance testing to an agile development process. Learn how
to evaluate performance and scalability with MultiMechanize, Bees with Machine Guns, and Google PageSpeed.
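The core of any such load test is firing concurrent requests and collecting latency percentiles. Here is a minimal sketch of that loop in Python, using a stand-in function instead of real HTTP so it is self-contained; the names and numbers are illustrative, not from the talk.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def fake_request():
    """Stand-in for an HTTP call; replace with a real request in practice."""
    time.sleep(0.01)
    return 200


def run_load(request_fn, concurrency, total_requests):
    """Fire total_requests calls at the given concurrency; return sorted latencies."""
    latencies = []

    def timed_call():
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(total_requests):
            pool.submit(timed_call)
    # The with-block waits for all submitted calls to finish.
    return sorted(latencies)


latencies = run_load(fake_request, concurrency=5, total_requests=20)
p95 = latencies[int(0.95 * len(latencies)) - 1]  # approximate 95th percentile
```

Dedicated tools add the parts this sketch omits: ramp-up schedules, distributed load generators, and reporting.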
Michael Dowden introduces JMeter and explains how to develop a data-driven methodology to determine some of the limits of a web application: max number of concurrent
users, bottlenecks, etc.
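The data-driven idea, roughly: ramp up simulated users step by step and record the last load level that still met the SLA. A toy sketch of that methodology follows (with a hypothetical latency model standing in for real measurements, not JMeter itself):

```python
def simulated_latency(concurrent_users):
    """Toy model: latency degrades once users exceed server capacity."""
    capacity = 50
    base = 0.1  # seconds
    if concurrent_users <= capacity:
        return base
    return base * (concurrent_users / capacity) ** 2


def find_max_users(latency_fn, sla_seconds, step=10, limit=1000):
    """Ramp up users step by step; return the last level that met the SLA."""
    last_ok = 0
    for users in range(step, limit + 1, step):
        if latency_fn(users) > sla_seconds:
            break  # SLA broken: we've found the knee of the curve
        last_ok = users
    return last_ok
```

In a real JMeter run, `latency_fn` would be replaced by measured response times from a thread-group ramp-up, but the decision logic is the same.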
Configuration management is the foundation that makes modern infrastructure possible. Tools that enable configuration management belong in the toolbox of any operations team, and many development teams as well. Although all the tools aim to solve the same basic set of problems, they adhere to different visions and exhibit different characteristics. The issue is how to choose the tool that best fits each team's needs.
On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed, with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states) bringing with them a huge diversity of experiences. Below is a list of the talks I watched or planned to watch:
- Never Send a Human to do a Machine’s Job: How Facebook uses bots to manage tests
Roy Williams (Facebook)
Facebook doesn't have a test organization; developers own everything from writing their code to testing it to shepherding it into production. That doesn't mean we don't test! The way we've made this scale is by automating the lifecycle of tests to keep signal high and noise low. New tests are considered untrusted, and flakiness is quickly flushed out of the tree. We'll be talking about what has and hasn't worked to build trust in tests.
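One plausible shape for this kind of lifecycle automation is a ledger that tracks recent results per test and promotes, quarantines, or fails tests accordingly. The sketch below is my illustration of the general idea, not Facebook's actual system; the thresholds and names are made up.

```python
from collections import defaultdict, deque


class TestLedger:
    """Tracks recent results per test; promotes stable tests, quarantines flaky ones."""

    def __init__(self, window=10, promote_after=5):
        # Keep only the last `window` results per test.
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.promote_after = promote_after

    def record(self, test_name, passed):
        self.history[test_name].append(passed)

    def status(self, test_name):
        runs = self.history[test_name]
        if len(runs) < self.promote_after:
            return "untrusted"      # not enough signal yet
        if all(runs):
            return "trusted"        # consistent passes earn trust
        fail_rate = 1 - sum(runs) / len(runs)
        # Mixed results mean flakiness; uniform failures mean a real break.
        return "quarantined" if fail_rate < 1 else "failing"
```

A bot built on such a ledger can gate which test results are allowed to block a commit, which is the "keep signal high and noise low" idea from the abstract.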
- Opening Keynote - Move Fast & Don't Break Things
Ankit Mehta (Google)
- Scalable Continuous Integration - Using Open Source
Vishal Arora (Dropbox)
Many open source tools are available for continuous integration (CI). Only a few operate well at large scale. And almost none are built to
scale in a distributed environment. Come find out the challenges of implementing CI at scale, and one way to put together open source pieces to
quickly build your own distributed, scalable CI system.
- I Don't Test Often ... But When I Do, I Test in Production
Gareth Bowles (Netflix)
Every day, Netflix has more customers consuming more content on an increasing number of client devices. We're also constantly innovating to
improve our customers' experience. Testing in such a rapidly changing environment is a huge challenge, and we've concluded that running tests in our
production environment can often be the most efficient way to validate those changes. This talk will cover three test methods that we use in production:
simulating all kinds of outages with the Simian Army, looking for regressions using canaries, and measuring test effectiveness with code coverage analysis.
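The canary technique boils down to comparing an error rate between the new deployment and the baseline fleet before rolling out further. Here is a minimal sketch of that check; the tolerance factor and function signature are my assumptions, not Netflix's actual algorithm.

```python
def canary_passes(baseline_errors, baseline_total, canary_errors, canary_total,
                  tolerance=1.5):
    """Fail the canary if its error rate exceeds the baseline's by more than
    `tolerance` times. A real system would also check latency, CPU, etc."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    if baseline_rate == 0:
        # Guard against a clean baseline: then any canary error is suspect.
        return canary_rate == 0
    return canary_rate <= tolerance * baseline_rate
```

In practice the comparison runs continuously over sliding time windows, and a failing canary triggers automatic rollback rather than a manual decision.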
- Free Tests Are Better Than Free Bananas: Using Data Mining and Machine Learning To Automate Real-Time Production Monitoring
Celal Ziftci (Google)
There is growing interest in leveraging data mining and machine learning techniques in the analysis, maintenance and testing of software systems. In this talk, Celal will discuss how we use such techniques to automatically mine system invariants, use those invariants to monitor our systems in real time, and alert engineers to any potential production problems within minutes.
The talk will cover two tools we use internally, and how we combine them to provide real-time production monitoring for engineers almost for free:
- A tool that can mine system invariants.
- A tool that monitors production systems, and uses the first tool to automatically generate part
of the logic it uses to identify potential problems in real time.
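To make the two-tool pipeline concrete, here is a toy sketch of both halves: mining simple range invariants from healthy traffic, then checking live records against them. This is an illustration of the general idea, not Google's internal tooling; real miners infer far richer invariants than min/max ranges.

```python
def mine_invariants(samples):
    """Tool 1 (sketch): mine range invariants (min/max per field) from logs
    of healthy traffic."""
    invariants = {}
    for record in samples:
        for field, value in record.items():
            lo, hi = invariants.get(field, (value, value))
            invariants[field] = (min(lo, value), max(hi, value))
    return invariants


def check(record, invariants, slack=0.0):
    """Tool 2 (sketch): return the fields of a live record whose values fall
    outside the mined ranges; a monitor would alert on non-empty results."""
    violations = []
    for field, value in record.items():
        if field not in invariants:
            continue  # never seen this field during mining; no opinion
        lo, hi = invariants[field]
        if value < lo - slack or value > hi + slack:
            violations.append(field)
    return violations
```

The "almost for free" property comes from the fact that the invariants are generated from logs the system already produces, rather than written by hand.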