Java developers need to monitor and troubleshoot Java applications from time to time. That is usually straightforward when an application is running as a Java process on your local machine. But when the Java application runs inside a Docker container on a Docker host, it becomes challenging to monitor it using tools running locally, even when the Docker host is just a virtual machine running on your desktop. In this blog post, I will describe a couple of ways to monitor such Java applications.
Connect VisualVM to an Application through JMX agents
VisualVM is a GUI tool that monitors and profiles a JVM. It comes with the JDK and can be started by running the "jvisualvm" command. You can also install it as a separate application. The tool connects to local Java processes directly, but in order to connect to a Java application running inside a Docker container, it needs to connect through a JMX port.
To enable remote JMX connection, you need to run your Java application in the Docker container with JVM options like these:
-Dcom.sun.management.jmxremote.rmi.port=9090
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=9090
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.local.only=false
-Djava.rmi.server.hostname=192.168.99.100
Where java.rmi.server.hostname specifies the host name or IP address that your JMX client uses to connect to the target Java application.
Make sure to publish the container's port 9090 as Docker host port 9090 when starting the Docker container: $ docker run -p 9090:9090 {your_docker_image}
If your Docker host runs behind a firewall, you can use an SSH tunnel to connect to port 9090 of your Docker host. In that case, you may need to set your RMI server host name to localhost: -Djava.rmi.server.hostname=localhost
Then ssh to Docker host using an SSH tunnel: $ ssh -L 9090:localhost:9090 {your_docker_host}
This will forward connections from your local port 9090 to localhost port 9090 on your Docker host server.
As a best practice, these JVM arguments should be passed as an environment variable that could be used by an entry script to start the Java application. For example: $ docker run -p 9090:9090 -e MY_JAVA_OPTS=”…” {your_docker_image}
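For example, a minimal entry script might look like this sketch (the application JAR path /opt/app/my-app.jar is a placeholder for illustration):

#!/bin/sh
# Expand the JVM options passed in via the environment and hand off to Java.
# exec replaces the shell so the JVM receives container signals directly.
exec java $MY_JAVA_OPTS -jar /opt/app/my-app.jar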
Once your application is running with the right options, you can connect VisualVM to it with the following steps:
1. Run VisualVM: $ jvisualvm
2. Add a remote host
3. Add a JMX connection
4. Click on your JMX connection to connect to your application
Running Java Mission Control and Java Flight Recorder
Java Mission Control (JMC) is a monitoring and performance tool offered by Oracle as a commercial feature of JDK 7 and 8. A key feature of JMC is Java Flight Recorder (JFR) that can be used to record event history for performance diagnosis and tuning. JMC is free to use for development.
To enable Java Mission Control, specify the following JVM options when starting your Java application in the Docker container, in addition to the same JMX-related options described above:
-XX:+UnlockCommercialFeatures
-XX:+FlightRecorder
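Putting it all together, the complete startup command might look like the following sketch (the RMI host name and application JAR are placeholders):

java -Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.port=9090 \
  -Dcom.sun.management.jmxremote.rmi.port=9090 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Djava.rmi.server.hostname=192.168.99.100 \
  -XX:+UnlockCommercialFeatures \
  -XX:+FlightRecorder \
  -jar /opt/app/my-app.jar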
Here are the steps to connect your JMC GUI client to a Java application that has the JMC/JFR features enabled:
1. Run the Java Mission Control GUI that comes with JDK 7 or 8: $ jmc
2. Set up a remote connection
3. Click on "192.168.99.100:9090", then "MBean Server", to monitor the application, or start a flight recording session from the same connection.
Monitoring with New Relic
New Relic is a hosted solution for application monitoring. To enable New Relic for your Java application, you need a New Relic account and the New Relic Java agent installed with your application. Here are the key steps to set it up. For more details, see the New Relic documentation.
1. Download the Java agent zip file from your New Relic account
2. Unzip the downloaded file. The unzipped folder should contain the files newrelic.jar and newrelic.yml
3. Edit the newrelic.yml file to configure your license key and application name:
…
license_key: '{your_newrelic_license_key}'
…
app_name: '{your_app_name}'
…
4. Include the unzipped folder in your Docker image
5. Start your Java application inside the container with the following JVM option (see the example after these steps): -javaagent:/path/to/newrelic.jar
6. Log in to your New Relic account and you should see your application show up in the application list.
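For example, assuming the agent folder was copied to /opt/newrelic in your image (an illustrative path), the Java startup command inside the container might look like this:

java -javaagent:/opt/newrelic/newrelic.jar -jar /opt/app/my-app.jar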
Debug Java Application Remotely
To debug your Java application remotely using your favorite IDE, use the following JVM option to start your application, which runs the debugging agent at port 5005:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
Make sure you publish port 5005 to your Docker host when starting your Docker container.
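For example: $ docker run -p 5005:5005 {your_docker_image}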
As an example, if you use IntelliJ as your Java IDE to debug, you can start by opening the project that contains the source code of your Java application.
Then, use the "Run/Debug Configurations" dialog to create a remote run/debug configuration.
Now you can run the "MyRemoteApp" run/debug configuration to start remote debugging.
Summary
In this blog post, I have described how to configure VisualVM, JMC, New Relic, and Java remote debugging to monitor and profile your Java applications that run inside Docker containers. I hope you find this information helpful!
In the beginning, there is a simple code base written by a few developers. The code's deficiencies are easily kept in the brains of the developers creating it, and they most likely know what needs to be fixed and where trouble can be found. Then the code grows, more developers are hired, features are added, and the code base evolves. Suddenly, its authors no longer easily retain the mind-map of the code and its faults, and the code base becomes a mysterious source of bugs and performance problems, exhibiting remarkable resistance to change. This is legacy code.
Your code base presents challenges: technical debt accumulates, new features demand that the existing code evolve, performance issues surface, and bugs are discovered. How do you meet these challenges? What proactive steps can you take to make your legacy code more adaptable, performant, testable, and bug-free? Code forensics can help you focus your attention on the areas of your code base that need it most.
Adam Tornhill introduced the idea of code forensics in his book Your Code as a Crime Scene (The Pragmatic Programmers, 2015). I highly recommend his book and have applied his ideas and tools to improve the Jama code base. His thesis is that criminal investigators and programmers ask many of the same open-ended questions while examining evidence. By questioning and analyzing our code base, we will not only identify offenders (bad code we need to improve), but also discover ways in which the development process can be improved, in effect eliminating repeat offenders.
For this blog post, I focus on one forensic tool that will help your team find the likely crime scenes in your legacy code. Bugs and tech debt can exist anywhere, but the true hot spots are to be found wherever you find evidence of three things:
• Complexity
• Low or no test coverage
• High rate of change
Complexity
Complexity of a class or method can be measured in several ways, but research shows that simply counting lines of code is good enough: it predicts complexity about as well as more formal metrics (Making Software: What Really Works, chapter 8, "Beyond Lines of Code: Do We Need More Complexity Metrics?" by Israel Herraiz and Ahmed E. Hassan. O'Reilly Media, Inc.).
Another quick measure of complexity: indentation. Code with deep indentation, representing nested branching and loops, is more complicated to understand and to modify than code split into several short methods with little indentation. When looking for complexity, look for long classes and methods and deep levels of indentation. It's simple, but it's a proven marker of complexity.
Test Coverage
Fast-running unit tests covering every line of code you write are a requirement for the successful continuous delivery of high-quality software. It is important to have a rigorous testing discipline like Test-Driven Development; otherwise, testing might be left as a task to be done after the code is written, or not done at all.
The industry average bug rate is 15 to 50 bugs per 1,000 lines of code; for a 100,000-line legacy code base, that implies somewhere between 1,500 and 5,000 latent bugs. Tests do not eliminate all the bugs in your code, but they do help you find the majority of them. Your untested legacy code has a high potential bug rate, and it is in your best interest to write some tests and find these bugs before your users find them.
High rate of change
A section of code that is under frequent change is signaling something. It may have a high defect rate requiring frequent bug fixes. It may be highly coupled to all parts of your system and has to change whenever anything in the system changes. Or, it may be just the piece of your app that is the focus of new development. Whatever the source of the high rate of change, evidence of a specific section of code getting modified a lot should draw your investigative attention.
Gathering evidence
How do you find which parts of your system are complex, untested, and undergoing lots of change? You need tools like a smart build system integrated with a code quality analyzer, and a source code repository with an API that allows for scripted analysis of code commits. At Jama, we have been very successful using TeamCity coupled with SonarQube as our continuous integration server and code quality analyzer. Our source code repository is git.
Here is an example analysis of complexity and test coverage produced by SonarQube. Each bubble represents a class, and the size of the bubble represents the number of untested lines of code in that class. In other words, the larger the bubble, the more untested lines it has.
In this example, there are several giant bubbles of tech debt and defects floating high on the complexity scale.
Both TeamCity and SonarQube report on test coverage per class, so with every build you not only know which code is the least tested, but also the overall trend for coverage.
Using these tools, you now know where your complexity and untested code lives, but you need to know which parts of the suspect code are undergoing churn. This is where forensic analysis of your source code repository comes in.
Code repositories like git produce detailed logs, which can be analyzed by scripts. A command-line tool for doing this analysis is provided by Adam Tornhill to accompany his book and is available on his website. The tool does complexity analysis as well as change analysis.
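Even without that tool, you can get a rough change-frequency ranking from plain git; this sketch counts how often each file appears in commits over the last six months and lists the twenty most-changed:

$ git log --since="6 months ago" --format= --name-only | grep . | sort | uniq -c | sort -rn | head -20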
When looking at the results of your change analysis, you are searching not only for what is changing the most, but also for what code tends to change together. Classes and modules that frequently appear together in code commits are evidence of a large degree of coupling. Coupling is bad.
What other forensic tools does your code repository offer? You can analyze commit messages and produce word clouds to see which terms dominate change descriptions. You would prefer to see terms like "added", "refactored", "cleaned", and "removed" over red-flag terms like "fixed", "bug", and "broken". And of course, commit messages dominated by swearing indicate real problems.
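As a rough sketch, the term frequencies behind such a word cloud can be pulled straight from the commit subject lines:

$ git log --format=%s | tr '[:upper:]' '[:lower:]' | tr -cs '[:alnum:]' '\n' | sort | uniq -c | sort -rn | head -20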
Another useful data point is which parts of your codebase are dominated by which developers. If you have classes or modules that are largely written and maintained by one or two devs, you have potential bus factor issues and need to spread the knowledge of this code to the wider team.
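git can sketch this picture as well: a per-author commit count for a given path (the path here is purely illustrative) exposes code that only one or two people ever touch:

$ git shortlog -s -n -- src/main/java/com/example/module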
Pulling it all together
After the above analysis is complete, you have an ordered list of the most untested and most complex code undergoing the highest rate of change. The offenders that appear at the top of the list are the prime candidates for refactoring.
All software systems evolve and change over time and despite our best efforts tech debt sneaks in, bugs are created, and complexity increases. Using forensic tools to identify your complex, untested, and changing components lets you focus on those areas at the highest risk for failure and as a bonus can help you study the way your teams are working together.
Improve Your Code with Code Forensics, by Ken Richards, May 11, 2016
A few weeks ago my friend Grace reached out and asked if I could speak about my experience with the DevOps movement from an engineering management perspective. Grace is one of the organizers of the Portland DevOps Groundup meetup group, whose goal is to educate others and discuss topics having to do with DevOps. I agreed to speak as well as host the event at Jama (one of the very cool things we do as an organization is host such community events).
Grace asking me to speak was timely as I have been doing a lot of thinking lately about the culture of DevOps and how it is applied here at Jama.
The term DevOps did not use to be widely known; now it is a fairly common term. With that wide adoption also comes misuse and misunderstanding: people use the term for all sorts of things, and it has become a buzzword in catchall job titles. To me, DevOps is all about collaboration, communication, and integration. I titled my talk "DevOps is Dead, Long Live DevOps" on purpose to get a reaction from people (which I definitely did, from some of the recruiters in attendance). My point in picking that title was that the term has become diluted and misused, and is becoming irrelevant.
I focused my talk on my personal history in software development, coming from an operations background. I'm no expert; this was just me sharing my experiences as a manager of technical people and how I've tried to build highly collaborative teams that enjoy working together and solving tough problems. I really enjoyed being able to share three separate work experiences with a large group of people and discuss how I learned from each job and applied those learnings in an effort to improve the process each time. I spoke at length about my current experience here at Jama and how we are working as a group to better integrate the practices and principles of DevOps into all of engineering, instead of it being a single team called "DevOps" that is tasked with the work. This cultural shift is starting to happen, and that is a good thing for all of Jama engineering.
I spoke for the better part of an hour and received some really thoughtful questions at the end of the talk around how people can work to effect change in culture and gain business adoption of these practices. DevOps is in some ways still mysterious to people, or they think of it only in terms of tools and technologies; my hope is that my talk made it less of a mystery and started more people thinking in terms of collaboration, communication, and integration across the company culture.
DevOps is Dead, Long Live DevOps, by Jama Software, May 6, 2016
This is an open and honest account of the road we took that finally came out as a new deployment model for Jama, based on Docker, which is (spoiler alert) being released around the time of writing as Jama 8.0. It’s a story about the struggles that we had on this long road, but it’s mostly a tale of how we overcame those struggles, learned a lot, and built an awesome product.
Jama Debating Scalability
Like many maturing companies, Jama found itself in a situation where its monolithic software architecture prohibited scaling. Scalability here is a catch-all for many quality attributes, such as maintainability across a growing team, performance, and true horizontal scalability. The solution was simple, on paper: our software had to be split up. We are talking late 2013; micro-services are taking off, and a team starts carving functions out of the monolith into services that could then be deployed separately in our emerging SaaS environment. We are a SaaS company, after all. Or we are a SaaS company first. Or, well, we are a SaaS company which deeply cares about those on-premises customers that don't move to the cloud… yet… for a variety of reasons, whether we like it or not.
Planning our Strategy
Will we keep delivering the full monolith to on-premises customers, including those parts we deploy separately in SaaS? That would be a pretty crappy economic proposition for us, as we'd essentially be building, then testing, everything twice. On-premises customers would not get any of the scaling benefits of the services architecture, nor could the engineering team really depart from the monolithic approach that was slowing them down. (On a side note, as a transitional solution we used this approach for a little while, and be assured that there's little to love there.)
Then, will we deliver a monolith to on-premises customers that lacks a growing number of features, having those as a value-add in SaaS perhaps? That works… up to a point. We currently have services like SAML, OAuth, and centralized monitoring in our SaaS environment that aren't available to our on-premises customers. They let us get away with that. But there are only so many services you can carve out before hitting something that's mission-critical to on-premises customers.
2014: Scribbling our options on a whiteboard
The only solution that makes sense: bring the services to the on-premises customers. (For completeness' sake: there was this one time someone proposed not supporting on-premises installations anymore. They were voted off the island.)
So, services are coming to an on-premises near you.
Implications of Services
Huge. The implications are huge, in areas such as the following:
Strategy. Since 2010 we have been attempting to focus on our SaaS model and, in turn, driving our customers to our hosted environment. The reality is that our customers are slow to adopt, which requires us to refocus on the on-premises deployment. That is okay, and there's no reason we can't do both, but it's sobering to pull yourself back after so much focus went into "being more SaaS" (which came with the good hopes of a gradual transition of (all) customers to the cloud).
Architecture. Our SaaS environment has a lot of bells and whistles that make no sense for on-premises customers, and it relies on a plethora of other SaaS providers to do its work. This needs to be scaled down, in a way that keeps the components usable both for on-premises customers and in the SaaS environment.
Usability. We are coming from WAR deployments, where a single WAR archive is distributed and loaded into a standardized application server (specifically Apache Tomcat), which is all relatively easy. We are now moving to a model with multiple distribution artifacts, which then also need to be orchestrated to run together as one Jama application.
Culture. There is a lot of established thinking that had to be overcome, in fairly equal parts by ourselves and by our customers. I mean, change: there are plenty of books on change, and on how it's typically resisted.
Within Engineering (which is what I'll continue to focus on), I've been involved in ongoing discussions about a deployment model for services, going back to 2014. One of the early ideas was to just bake a scaled-down copy of our SaaS environment into a single virtual machine. (And expect some flavors with multiple virtual machines to support scalability.) Too many customers just outright reject the notion of loading into their environment a virtual machine that is not (fully) under their control. A virtual machine would be unlikely to follow all the IT requirements of our customers, and would lead to a lot of anxiety around security and the ability to administer this alien. So, customers end up running services on their own machines.
That quickly leads to another constraint. The administrators at our customers traditionally needed one skill: being able to manage Apache Tomcat, running Jama's web archive file (WAR). While we have an awesome team of broadly skilled, DevOps-minded engineers working on our SaaS environment, we can't expect such ultra-versatility from every lone Jama administrator in the world. We needed a unified way to deploy our different services. This was an interesting discussion to have at a time when our Engineering team still mostly consisted of Java developers, and when DevOps was still an emerging capability (compared to the mindset of marrying development and operations that is now more and more being adopted by Jama Engineering). We had invested in a "services framework", which was entirely in Java, using the (may I say: amazing) Spring Boot, and "service discovery" was dealt with using configuration files inside the Java artifacts ("how does service A know how and where to call service B"). It was a culture shift to collectively embrace the notion that a service is not a template of a Java project, but a common language for tying pieces of running code together.
Docker and Replicated
In terms of deployment of services we discussed contracts for how to start/stop a service ("maybe every service needs a folder with predefined start/stop scripts"). We discussed standardized folder structures for log files and configuration. Were we slowly designing ourselves into Debian deb packages (dpkg, apt) or RPM (yum) packages, the default distribution mechanisms for the respective Linux distributions? What could Maven do here for us? (Not a whole lot, as it turns out.) And how about this new thing…
This new thing… Docker. It was very new (remember, this was 2014: Docker's initial release was in 2013, and the company had changed its name to Docker Inc. as recently as October 2014). We dismissed it, and kept talking in circles until the subject went away for a while.
Early 2015, coincidentally roughly around the time we created the position of DevOps Manager, we got a bunch of smart people in a room to casually speak about perhaps using Docker for this. There was nothing casual about the meeting, and it turned out that we weren't prepared to answer the questions that people would have. We were mostly talking from the perspective of the Java developer, with their Java build, trying to produce Docker images at the tail end of the Java build, ready for deployment. We totally overlooked the configuration management involved outside of our world of Java, and the tremendous amount of work there that we weren't seeing. And in retrospect, we must have sounded like the developer stereotype of wanting to play with the cool, new technology. We were quickly cornered by what I will now lovingly refer to as an angry mob: "there is not a single problem [in our SaaS environment] that Docker solves for us". I'm way cool about it now, but that turned out to be by far my worst week at Jama. Things got better. We were able to create some excitement by using Docker to improve the way we were doing continuous automated system testing. We needed some help from the skeptics, which gave them a chance to start adjusting their views. We recruited more DevOps folk, with Docker in mind while hiring. And we did successful deployments with Docker for some of our services. We were adopting this new technology. But more importantly, we were slowly buying into the different paradigm that Docker offers, compared to our traditional deployment tools (WAR files, of course, and we used a lot of Chef).
We were also telling our Product Management organization about what we were learning. How Docker was going to turn deployments into liquid gold. How containers are different than virtual machines (they are). They started testing these ideas with customers. And toward the second half of 2015 the lights turned green. Or… well… some yellowish, greenish kind of color. Scared for the big unknown: will we be able to harden it for security, is it secure, will customers believe it is secure? But also: will it perform as well as we expect? How hard will it be to install?
One of the prominent questions was still around the constraint that I mentioned earlier: how much complexity are we willing to impose on our customers? Even today, Docker is fairly new, and while there is a growing body of testimony around production deployments, not all of our customers are on that forefront. First of all, Docker means Linux, whereas we had traditionally also supported Windows-based deployments. (I believe we even supported OS X Server at some point in time.)
Secondly, the scare was that customers would end up managing a complex constellation of Docker containers. We had been using Docker Compose a bit for development purposes by then, and it let us at least define the configuration of Docker containers (which I like to refer to as orchestration), but we'd have to write some scripts (a lot?) to do the rest. Around that time, we were introduced to Replicated, which we ran some experiments with, along with a cost-benefit analysis. It let us orchestrate Docker containers and manage the configuration of the deployment, all through a web-based user interface installed on-premises. Not only would it offer a much more user-friendly solution, it would take care of a lot of the orchestration pain, and we decided to go for it.
Past the Prototype
The experiments were over, and I formally rolled onto the actual project on November 11th, 2015. We were full steam ahead with Docker and Replicated. Part of the work was to turn our proof of concept into mature production code. This turned out not to be such a big deal. We know how to write code, and Docker is just really straightforward. The other part of the work was to deal with the lack of state. Docker containers are typically stateless, which means that any kind of persisted state has to go outside of the container. Databases, data files, even log files need to be stored outside of the container. For example, you can mount a folder location of the host system into a Docker container, so that the container can read/write that folder location.
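For instance (both paths here are purely illustrative): $ docker run -v /data/jama:/opt/app/data {your_docker_image}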
Then the realization snuck up on us that customers had been making a lot of customizations to Jama. We had anticipated a few, but it turns out that customers have hacked our application in all sorts of ways. Sometimes as instructed by us, sometimes entirely on their own. It was easy enough to look inside the (exploded) WAR file and make a few changes. They have changed configuration files, JavaScript code, and even added completely new Java files. With Docker that would not be possible anymore, so we dealt with the many customizations we knew of, coming up with alternative solutions for each. Some configuration files can again be changed, stored outside of the container; some options have been lifted into the user interface that a root user in Jama has for configuring the system, stored in the database; and sometimes we decided that a known customization was undesired, and we chose not to solve it. By doing all that, we are resetting and redefining our notion of what is "supported", and hopefully have a better grasp, going forward, on the customizations that we support. And with it, we ended up building a lot, a lot of the configuration management that was initially underappreciated.
Ready for the Next Chapter
Meanwhile, we are now past an Alpha program, a Beta program, and while I’m writing this we are code complete and in excited anticipation of the General Availability release of Jama 8.0. We have made great strides in Docker-based configuration management, and learned a lot, which is now making its way back into our SaaS environment, while the SaaS environment has seen a lot of work on horizontal scalability that will be rolled into our on-premises offering in subsequent releases — the pendulum constantly swinging. While I’m probably more of a back-end developer, and while “installers” probably aren’t the most sexy thing to be working on, it was great to work on this project: we are incorporating an amazing technology (Docker), and I’m sure that our solution will be turning some heads!
The Long Road to Docker, by Jama Software, May 2, 2016
On a chance bus ride down MLK to our Jama office a few months ago I happened to share a seat with a colleague in our Engineering Department, Bryant Syme. He had only been working for Jama for a few months and to be perfectly honest I hadn’t spoken to him much yet. We talked a lot about recent events in the office, but also talked about some of his previous work experiences. This is the first time I had ever heard about Mob Programming and the many potential benefits it can bring to a team of engineers. It planted the seed for me to introduce it to my own team and eventually start evangelizing it to the rest of our department.
What is it?
Mob Programming is a style of paired programming, but with the entire team involved instead of two developers. Every person involved in the story should be in the Mob Programming session and actively contributing, including Product Managers, DevOps and QA Engineers.
Think of Mob Programming as a tool for getting through larger, more obtuse stories and epics. The team will crowd around a single screen with one person driving and will talk through everything from acceptance criteria and design decisions, to implementation of the code and even test cases.
Mob Programming has many benefits:
Shared ownership over decisions.
Better quality code.
Ability to break through large tasks easily.
Team bonding through working together.
A great way to teach other team members various skills.
This style of work doesn’t need to be limited to programming. It could also be great to work on any project, from writing a document to planning for future work, to doing performance testing.
The tenets of Mob Programming
The main tenets of mob programming that everyone should follow are:
Use one keyboard and screen
Use a private room
Select a timekeeper to rotate who is on the keyboard every 15 or 30 minutes.
Everyone gets time at the keyboard, even non-programmers.
Take a story from start to finish, or in other words: from planning to coding, to testing, to done.
Take breaks when you want.
A session should span an entire workday.
Each of these tenets is flexible and should be discussed with the group before starting. One thing I've had a lot of luck with so far is pausing the timer to do whiteboard planning, for instance. We also usually take however much time we need at the beginning of the session to sketch a rough plan of what we are going to do, in order to stay on task as people switch around.
One keyboard and screen
This allows the team to concentrate without the distraction of e-mail, chat applications or other work. Team members may come convinced that they will need to work on other activities since there won’t be enough to help with when they aren’t at the keyboard. I had such an encounter with one of my teammates who was certain that there would not be enough for him to do. You will need to remind them that this is not a normal meeting and that you need their full attention. In the case of my teammate, I conceded that he could bring his PC as long as he kept his attention on the task at hand. He agreed and ended up being so engaged that he rarely, if ever, looked at his own screen.
One rule you can bend here: research on one screen can be boring for the team to watch and help with. This is an appropriate time for other team members to use their own PCs to help do research (as long as everyone stays on task).
Use a private room
This moves the team to another space both physically and mentally, and also prevents outside distractions. Other teams should respect that you have shut the doors and should not interrupt you. But if you are interrupted, team members should volunteer to chat with that person outside of the room to allow others to keep working.
Rotate who is on the keyboard every 15 or 30 minutes
Decide on a good time interval at the beginning of the meeting. I recommend 15 or 30 minutes depending on how many people are in the group, but other time increments are also fine. I've found that a group of 4 or fewer people works best with 30-minute intervals, whereas 5 or more works best with 15-minute intervals. It's just enough time to get some work done, but also short enough for everyone to rotate through in a large group.
Bring a timer with a loud alarm. I usually use the Clock app on my iPhone and turn the sound way up. When the alarm goes off, whoever is at the keyboard should immediately take their hands off and let the next person rotate in, even if they were in the middle of typing. The thing to remember here is that it's not about one person working while the others watch; it's about everyone working on the same thing. Whoever rotates in should easily be able to pick up where the last person left off.
A clock that resets itself is also ideal, since you don’t want to forget to start the timer.
Everyone gets time at the keyboard, even non-programmers
Whoever is helping should have a chance at the keyboard, even if they are in a QA, PM, or DevOps role. Remember that everyone is working on the same task, watching and directing what the driver is doing, so it should not matter much who is at the wheel. It's OK to be a backseat driver in this situation.
Participation also keeps everyone at full attention! Keeping the same person at the keyboard, or rotating among only the developers, will become boring for the others in the room if they never get a chance to participate.
Take a story from start to finish
Even when coded, the story isn't finished; it still needs to be tested! Work on your test cases as a team. Personally, I am a QA engineer, and getting other team members to help work on making quality test cases is very validating and makes our testing less black-box.
Whatever is required to get that story into the "Done" column should be done during this session. In addition to getting higher-quality code, test cases, and automation, this also tears down a lot of walls between roles. A lot of our developers don't have much of an idea of what DevOps or QA engineers "do". This is a perfect chance to get cross-team collaboration and boost how your team works together!
People are allowed to take breaks when they want
Bathroom breaks, coffee breaks, lunch breaks should not be discouraged, but be warned: people will want to keep working, so mandatory breaks may be needed!
Mob programming can also be exhausting; if someone needs a few minutes to take a breather, they should be allowed to simply leave and come back when needed.
A session should span an entire workday
This one has often been difficult to schedule. So far we have managed to schedule one full day and several half days of mob programming. Most literature I've seen on the topic recommends the full day if possible, though. If individuals need to leave for meetings or other commitments, there should still be enough people left to absorb their absence.
Conclusion
Mob Programming is a great tool that can be used to effectively chop down and complete large stories and epics. If you are trying this, review the tenets with your group, and stick to one screen and one keyboard as much as possible.
This is also great for bringing other team members up-to-speed with certain design patterns or tools. Someone who never uses the command-line or has never dealt with a certain language before will likely get a chance to learn a lot.
Everyone in the room should be involved; don't limit it to just programmers, or others will get bored and not be as engaged. Remember to invite everyone on your team to the session, including the Product Managers, QA, and DevOps Engineers.
And of course remember to have fun! Odds are your team will have a blast and work just a little better together than before the experience.
Seven Tenets of Mob Programming, by Jama Software, March 23, 2016