The DevOps Handbook (a free excerpt is accessible here) is now available.
By following these concepts and patterns, Development and QA use production-like environments in their everyday work, and we integrate and run our code in a production-like environment for every accepted feature, with every change checked in to version control.
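As a rough illustration of what running every checked-in change against a production-like environment can look like, here is a minimal sketch of a smoke test that a continuous integration job might execute after each check-in; the staging URL and the endpoint paths are hypothetical placeholders, not anything described in the book.

```python
# smoke_test.py - a minimal sketch of a post-check-in smoke test that a CI job
# could run against a production-like (staging) environment. The staging URL
# and the endpoint paths below are hypothetical placeholders.
import sys
import urllib.request

STAGING_URL = "https://staging.example.internal"  # hypothetical environment


def check_endpoint(path: str, timeout: float = 5.0) -> bool:
    """Return True if the staging service answers the request with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{STAGING_URL}{path}", timeout=timeout) as resp:
            return resp.status == 200
    except OSError as exc:  # covers URLError, HTTPError, and timeouts
        print(f"{path}: request failed ({exc})")
        return False


if __name__ == "__main__":
    # A handful of cheap checks run on every integrated change, so a broken
    # change is discovered minutes after check-in rather than months later.
    paths = ["/health", "/search?q=test"]
    failures = [p for p in paths if not check_endpoint(p)]
    sys.exit(1 if failures else 0)
```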
However, if we find and fix defects in a separate test phase, carried out by a separate QA department only after all development is complete, the results will likely be undesirable. Furthermore, if testing is performed only a few times a year, developers discover their faults months after making the change that caused the problem. By then, the link between cause and effect has most likely faded, solving the problem requires firefighting and archaeology, and, worst of all, our capacity to learn from the error and incorporate it into our future work is largely lost.
Automated testing also addresses another significant and troubling issue. “Without automated testing, the more code we develop, the more time and money is necessary to test our code—in most situations, this is a completely unscalable economic model for any technological firm,” says Gary Gruver.
Although Google today exemplifies a culture that embraces automated testing at scale, this was not always the case.
When Mike Bland joined the organization in 2005, deploying to Google.com was frequently tricky, particularly for the Google Web Server (GWS) team.
The consequences were severe: search results could contain errors or become unacceptably slow, affecting thousands of Google.com search requests. The potential damage was not limited to a loss of customer trust.
Bland explains how it affected the developers who were releasing changes: “Fear took over as the mind-killer. Because they didn’t understand the system, new team members were afraid to make changes. Fear also prevented experienced individuals from altering things, since they understood it all too well.”
Bland was part of a group that was determined to solve this problem.
(Bland went on to say that having so many excellent coders at Google contributed to “imposter syndrome,” a phrase coined by psychologists to describe people who are unable to internalize their accomplishments. According to Wikipedia, “Despite external evidence of their skill, persons exhibiting the condition remain persuaded that they are frauds and do not deserve their success.” Proof of accomplishment is dismissed as a product of luck, timing, or of tricking people into thinking they are more intelligent and capable than they perceive themselves to be.)
Bharat Mediratta, the GWS team’s leader, believed that automated testing would help.
“They drew a hard line: no modifications would be accepted into GWS without accompanying automated tests,” Bland writes. “They set up a continuous build and made sure it passed every time. They implemented test coverage monitoring to verify that their degree of test coverage increased over time. They created policy and testing guidelines and required that all contributors, both inside and outside the team, adhere to them.”
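To make the coverage-monitoring idea concrete, here is a minimal sketch of a “coverage must never decrease” gate that a continuous build could run; the baseline file name and the way the percentage is obtained are assumptions for illustration, not a description of Google’s actual tooling.

```python
# coverage_gate.py - a minimal sketch of a "coverage must never decrease" check.
# It assumes the test run has already produced an overall coverage percentage
# (for example, from `coverage report`); the baseline file name is illustrative.
import sys
from pathlib import Path

BASELINE_FILE = Path("coverage_baseline.txt")  # hypothetical checked-in baseline


def check_coverage(current_pct: float) -> int:
    """Fail the build if coverage drops below the recorded baseline,
    and ratchet the baseline upward whenever coverage improves."""
    baseline = float(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else 0.0

    if current_pct < baseline:
        print(f"FAIL: coverage {current_pct:.1f}% is below baseline {baseline:.1f}%")
        return 1  # a non-zero exit code breaks the continuous build

    if current_pct > baseline:
        BASELINE_FILE.write_text(f"{current_pct:.2f}")
        print(f"Baseline raised from {baseline:.1f}% to {current_pct:.1f}%")
    else:
        print(f"OK: coverage {current_pct:.1f}% meets the baseline {baseline:.1f}%")
    return 0


if __name__ == "__main__":
    # The build script passes in the measured percentage, e.g.:
    #   python coverage_gate.py 83.4
    sys.exit(check_coverage(float(sys.argv[1])))
```

Run on every build, a check like this turns a one-time coverage goal into a ratchet that only moves upward.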
The team wanted to spread these techniques to the rest of the organization. As a result, the Testing Grouplet was born.
It was a loosely knit group of engineers who aspired to improve automated testing techniques throughout the firm. Over the following five years, they helped spread this culture of automated testing throughout Google. (The story of how they did this is told in Part 5: over the next three years, they created training programs, promoted the famous Testing on the Toilet newsletter [which was posted in the bathrooms], launched the Test Certified roadmap and certification program, and led multiple “fix-it” days [i.e., improvement blitzes] to help teams improve their automated testing processes so they could replicate the fantastic results that the GWS team was able to achieve.)