
Monitoring Tomcat

Recently, I faced an issue with an application deployed in a Tomcat web container: in one of the environments it started to consume a lot of CPU cycles and memory.

My first attempt to investigate was to run things locally and have a look at the Activity Monitor that comes with macOS. Often that’s enough to notice which app uses a lot of memory or CPU cycles.

This time, though, I wanted some deeper insight, and I already knew what I was looking for. Some searching on the net brought up Glowroot.

Even though I’m not experienced in using Tomcat, it was pleasantly easy and quick to get going:

  • Download a zip file and unpack it.
  • Copy the folder to a convenient place. I copied it into an existing folder I’m using to keep configuration info for Tomcat anyway.
  • Set up a way for Tomcat to find and use the Glowroot jar file (see the sketch after this list).
  • Start Tomcat.
  • Open a browser at http://localhost:4000.
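
The third step is the only one that touches Tomcat itself. Here’s a minimal sketch, assuming Glowroot was unpacked to /opt/glowroot and a standard Tomcat layout (the path is an example; the project’s wiki describes the details):

# bin/setenv.sh – created if it doesn’t exist; catalina.sh picks it up on startup
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/glowroot/glowroot.jar"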

The details for these steps are nicely explained on the GitHub Wiki of this project.

Here’s an example graph, taken from the project’s demo site at https://demo.glowroot.org/:

In my project this (so far) only confirmed that memory consumption can increase a lot when the system gets some traffic. Using well over 10 GB of memory within a few minutes is a lot, especially when that memory isn’t released any time soon.

Are you using a tool that was particularly easy to use and provided what you were looking for? Which one?

Tools for Testing — 1

I recently took the excellent Black Box Software Testing Foundation Course offered by the AST (Association for Software Testing).

The course came with a lot of extra material in addition to the primary deck of slides & videos: articles published by a number of people, blog posts, exam guides. I used Papers, which helps to organise, well, papers. It manages meta information about the papers, such as publication date, title and authors, and allows highlighting text, adding notes and more. I also liked using the iPad version of the program (there’s a way to sync your library between your computer and iPad), because I don’t always work at my desk.

The other tool I found useful is GraphViz, an extremely powerful program to describe and draw graphs. The graphs are described in a text file (there’s a complete language for describing graphs, but for many purposes you’ll only need a small subset). For example:

digraph example_path {
  graph [ dpi = 150 ];
  rankdir = LR;
  fontname = "DejaVu Sans";
  fontsize = 20;
  node [ shape = circle, fontname = "DejaVu Sans" ];

  1 -> 2 -> 5 -> 6 -> 7;
  2 -> 4 -> 5;
  2 -> 3 -> 4;
  3 -> 6;
  6 -> 2;
  label = "\nBranches & Loops";
}

GraphViz comes with a command-line tool, dot. From the directory containing the file with the description above, one can run

dot -Tpng file_name_here -O

to generate a 150 dpi PNG file, which looks like this:

An Example Path

GraphViz can create much more complex images (see the GraphViz Gallery for examples). However, I find that a short text file and a simple terminal command cover a lot of ground, and there’s no need to create ‘hand-made’ images anymore.

Both programs, Papers and GraphViz, are available for Windows and OS X, as well as for the iPad/iPhone (on iOS devices, GraphViz is called Instaviz).

What non-typical tools do you use in testing?

A Surprising Use of an Interface

This post relates to my short talk at the Agile Testing Days in Potsdam, Germany 2012.

Earlier this year I tweeted this (there’s another blog post about this as well):

Automating tests/checks, seems more valuable than having automated tests—like planning being more valuable than having the plan.

In my case the automating certainly led to other interesting results — and that’s the topic of this post.

Some context

Let’s assume we’re using Cucumber to automate tests for some application and follow the typical Cucumber setup: a folder features containing the feature files, and inside of that another folder (step_definitions) containing, well, the step definitions. A support folder inside the features directory keeps supporting files, such as environment configuration(s).
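
In other words, the conventional layout looks roughly like this (the file names are placeholders):

features/
  search.feature
  step_definitions/
    search_steps.rb
  support/
    env.rb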

Keeping it clean

The way I started to work is to start simple: at first I put everything inside the step definitions. Pretty soon though, I extract code chunks into methods of their own, and then move those method definitions into their own class (or module) – and in fact a file of their own.

That way, I end up with two more levels of abstraction below the feature descriptions and the step definitions: classes (or modules) containing method definitions (see picture).

feature files, step defs, classes/modules, methods

Here I follow Uncle Bob Martin’s advice of “Extract till you drop” (see his book ‘Clean Code’, and episode 3 (Functions) of the video series at cleancoders.com): extract methods until there’s nothing left to extract (and move the extracted code into the right place).

A first result: A high level interface to the system

One of the results of doing this is a really high level and clean API to the system at hand. In addition, I find that the step definitions are much easier to understand, since they (mostly) contain a business-level description of how the system is used, plus accompanying assertions to check whether the system behaves as it should.
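
To illustrate (the step wording and helper names here are hypothetical, not taken from a real project), a step definition can shrink to a business-level call plus an assertion, while the mechanics live in an extracted class like the GoogleSearch example below:

# features/step_definitions/search_steps.rb – hypothetical example
When(/^I search the web for "(.*?)"$/) do |term|
  @results = search_engine.search_for term
end

# assumes rspec-expectations, the matcher library commonly used with Cucumber
Then(/^the results include "(.*?)"$/) do |text|
  expect(@results.join("\n")).to include(text)
end

# features/support/world.rb – share one instance across all steps
module KnowsTheSearchEngine
  def search_engine
    @search_engine ||= GoogleSearch.new
  end
end
World(KnowsTheSearchEngine)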

Enter Pry

Pry is a command line tool for Ruby, like the interactive Ruby shell irb that comes with Ruby. However, Pry is a lot more powerful: it can display method definitions (even those defined in the C sources of Ruby’s built-in classes) and a whole lot more. For a good introduction see the Pry screencast by Josh Cheek on Vimeo.
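
For a first impression, here’s a minimal sketch (it assumes the pry gem is installed, e.g. via gem install pry): binding.pry drops you into a REPL at that exact point in a running program:

require 'pry'

def checkout(prices)
  total = prices.sum
  binding.pry # execution stops here; inspect 'prices' and 'total', then type 'exit' to continue
  total
end

checkout([3, 4, 5])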

A trivial and contrived example

Let’s assume we’re trying to find stuff on the web (I’m on a Mac; there are hints about how to achieve this on Windows in the code comments):

require 'safariwatir' # Windows: require 'watir'

class GoogleSearch
  def initialize
    @browser = Watir::Safari.new # Win: Watir::Browser.new
  end

  def search_for search_term
    @browser.goto 'http://wellknown-search-engine.example.com/'
    @browser.text_field( :name, 'q' ).set search_term
    @browser.button( :id, 'gbqfb' ).click
    @browser.ol( :id, 'rso' ).links.map(&:text)
  end
end

Given that interface and Pry, the creation of a console app borders on being trivial (yet it took me a long while to get to this point):

require_relative 'google_search'
require 'pry'

GoogleSearch.new.pry

Think about this for a moment: take Pry and your API, and in three lines of code you get a console app for your system. The console can be used like this:

[10] stephan@nibur … # ruby google_search_console.rb
[1] pry(#<GoogleSearch>):1> search_for 'pry video josh'
=> ['Pry Screencast on Vimeo',
# ...

As mentioned above, you can even display the method definition (including lines of code, file name etc.)

[8] pry(#<GoogleSearch>)> show-method search_for

From: /Users/stephan/…/google_search.rb @ line 8:
Number of lines: 6
Owner: GoogleSearch
Visibility: public

def search_for search_term
  @browser.goto 'http://wellknown-search-engine.example.com/'
  @browser.text_field( :name, 'q' ).set search_term
  @browser.button( :id, 'gbqfb' ).click
  @browser.ol( :id, 'rso' ).links.map(&:text)
end
[9] pry(#<GoogleSearch>)>…

Leaving thoughts

It seems that doing something can be more valuable than having the result. Notwithstanding, we still need to produce something of value for our business.

I came to realise that clean code gives you more than ‘only’ a code base that’s easy to understand and change: without the high-level API, I might not even have noticed the possibility to create a ‘business level console’.

Having a console app like this makes it possible to prepare starting points for exploratory testing (one might call it ‘automation-assisted exploring’). Or, in short: I think a clean and high-level interface plus Pry is a cool tool for exploration.
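
As a sketch of what that could look like (reusing the GoogleSearch class from above; the search term is made up), a small script drives the system into an interesting state and then hands control over for manual exploration:

require_relative 'google_search'
require 'pry'

# drive the system to a starting point for exploration ...
search = GoogleSearch.new
search.search_for 'tomcat memory leak'

# ... then explore interactively from there
search.pry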
