Wednesday, March 30, 2011

OMEGAMON DB2 Extended Insight deployment options

As I've indicated in my prior posts, I think the Extended Insight function of OMEGAMON XE for DB2 PE is a very nice feature. It provides useful and interesting performance analysis information down to the SQL level.

Now the question some may have is: what infrastructure is required to run Extended Insight? In the examples I posted earlier, I show Extended Insight data displayed within the Tivoli Enterprise Portal. That being said, you don't necessarily need the TEP to see Extended Insight data. Extended Insight runs on infrastructure from Optim; technically, when you display this data in the TEP, you are pulling it from the OPM server.

If you are interested in the deployment options, here is a link to a write-up that describes them:

OMNIbus and EIF probe considerations

The EIF interface is one of the mechanisms you can use to integrate ITM monitoring with OMNIBus. You can use the EIF interface, for example, to send situation alerts to OMNIBus.

Version 10 of the OMNIbus EIF probe now supports what's called a master rules file. The master rules file is designed to reference product-specific rules files from integrating products. ITM 6.2.2 FP02 and later users who would like to upgrade to the new version of the OMNIbus EIF probe will need the ITM include file to support this upgrade.
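To give a rough sense of the idea, a master rules file simply delegates product-specific event processing to included rules files via the OMNIbus rules file include mechanism. The file names below are illustrative placeholders, not the actual names shipped with the probe:

```
# Master rules file (sketch -- file names here are illustrative only).
# Product-specific processing is delegated to included rules files,
# so each integrating product can ship and maintain its own rules.

include "itm_event.rules"       # ITM include file for situation events
include "other_product.rules"   # rules from another integrating product
```

The benefit is that an ITM upgrade can replace its own include file without anyone hand-editing a single monolithic rules file.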

For more information on this, here's a URL link:

Friday, March 25, 2011

OMEGAMON XE for z/OS V4.20: From Installation to Troubleshooting

Here's a link to a nice course on installing OMEGAMON XE For z/OS V4.20. You can link to the course and replay it as your time allows.

The course is in two parts, covering OMEGAMON XE for z/OS V4.20 from installation to troubleshooting. The first part covers installation and configuration hints, plus five real-world usage scenarios. The second part presents four additional scenarios showing how you might use the product, and closes with some troubleshooting hints.

Here's a link for more information:

CPU overhead in the XDEV Classic command

You see this issue periodically with the XDEV command in the OMEGAMON z/OS Classic interface. The XDEV command collects device and I/O information for DASD devices.

In this particular APAR, you see high CPU in the OMEGAMON Classic collector task and missing data in the output of the XDEV display, due to excessive program checks in the XDEV command.

Here's a link to more info on the APAR:

Articles on monitoring MQ on z/OS

A colleague of mine, Wayne Bucek, has written a set of articles on "Monitoring WebSphere MQ in Large zOS Environment". Wayne discusses such things as monitoring multiple queue managers and monitoring large numbers of objects in this environment. It's a two-part article.

Here is a link to part 1:

Here is a link to part 2:

Wednesday, March 23, 2011

Interesting white paper on monitoring IMS Connect

One of the things I tell customers about often is the advantage of the portal concept as enabled by the Tivoli Enterprise Portal. If the necessary data is in the portal, then you can use it to solve a problem, regardless of which monitoring agent it may come from. Here's a good example of what I'm talking about.

If you run both OMEGAMON XE for Mainframe Networks and OMEGAMON XE for IMS on z/OS, you can use custom workspaces, filtering, and situations to monitor the connections associated with the IMS Connect address space using OMEGAMON XE for Mainframe Networks. Here is a link to a white paper that goes into detail on how to do this.

Wednesday, March 16, 2011

Using correlated situations - the rest of the story

As I implied in the first post on this topic, there is one additional step you need to perform to have the correlated situation fire in the Portal. You need to 'Associate' the situation, to tell the Portal where the situation should appear when it is true.

In the example I include here, I right-click the situation and then select 'Associate'. This tells the Portal where the situation will appear when it is true. In the example, I associate the situation, and lo and behold, the critical alert appears (because the underlying 'Not Found' situations are true as well).

When all is said and done, this works well. You get a warning if FTP is not found on one or the other of the LPARs. If it's missing on both, you get a critical alert in addition.

Using correlated situations to make alerts more relevant

I was working with a customer recently who had an interesting requirement. In their shop they wanted an availability alert to track the status of FTP on their z/OS LPARs. Starting out we created a basic situation alert that used the 'Not Found' function to check to see if the FTP task was active on the system. If not there, then the situation would fire.

But the customer wanted to add a little more logic. If FTP was down on one system, that was a warning level alert. However, if FTP was down on both systems, that constituted a critical problem, so they wanted to see a critical level alert in that scenario. I considered various ways to get this accomplished, then I proceeded with the following, which I felt was the easiest way to get the job done.

First, we started with the basic situation check for the FTP task, and distributed that alert to each z/OS managed system (as I show in the example). These situations used the 'Not Found' function to see if FTPD1 (the FTP task) was not active on the system. We set the level for this situation to warning. The basic situations are shown here in the first part of the example.

We then added a second correlated situation. This second correlated situation would check to see if the basic not found situation was true on both systems. If it was true on both, then the correlated situation would fire, and show an alert level of critical. Here I show an example of the correlating situation, as well. Note that the correlated situation is distributed to the TEMS, where it needs to be evaluated.
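To sketch the approach in situation formula terms (the attribute, situation, and managed system names below are illustrative placeholders, not the customer's actual configuration, and the exact syntax shown in the situation editor may vary by version):

```
* Basic situation, distributed to each z/OS managed system, severity Warning.
* True when no address space named FTPD1 is found.
*IF *MISSING Address_Space_CPU_Utilization.Job_Name *EQ 'FTPD1'

* Correlated situation, distributed to the hub TEMS, severity Critical.
* True only when the basic situation is true on both LPARs; the *ON
* clauses paraphrase selecting the situation on each managed system
* in the situation editor.
*IF (*SIT FTP_Down_Warn *ON LPARA) *AND (*SIT FTP_Down_Warn *ON LPARB)
```

The key design point is that the basic situations are evaluated out at each managed system, while the correlated situation has to run at the TEMS, since that is the only place with visibility into both.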

There's a little bit more magic that has to happen to make the situation fire, but these are the first steps. I'll provide the additional detail shortly.

Tuesday, March 15, 2011

New webcast on creating management dashboards

I will be doing a webcast next month on how to "Create an effective management dashboard using IBM Tivoli solutions". In this webcast I will cover how to effectively design and deploy management dashboards using Tivoli solutions. Starting with examples using the Dashboard Edition function of the Tivoli Enterprise Portal, and continuing with examples built around Tivoli Business Service Manager, I will discuss optimal design considerations and the most effective usage scenarios for management dashboards.

The event will be April 7th at 11 AM Eastern Time.

Here is a link to register for the event:

Java issue - denial of service exposure

Many of you may have gotten information on this already. In case you haven't, here is some information on a vulnerability that can cause the Java Runtime Environment to go into a hang, infinite loop, and/or crash resulting in a denial of service exposure. This has the potential to impact quite a few IBM Tivoli products that use Java. Here is a link for more detail and the list (a long one) of affected products.

Tuesday, March 8, 2011

Good presentation on OMEGAMON Mainframe Networks

At Share last week Dean Butler of IBM did a presentation on identifying and solving common mainframe network performance issues. It was a very detailed presentation with several practical examples of the analysis process, and how to apply OMEGAMON XE For Mainframe Networks to address issues.

Here's a link to a version of the presentation he did at the previous Share in Boston:

Friday, March 4, 2011

Management GUIs, GUIs, and more GUIs

A lot of the focus for the Share conference this week has been, naturally, on z196 and on z/OS V1.12. There's a lot of powerful new feature/function in both z196 and in V 1.12. And needless to say there have been quite a few interesting sessions covering many aspects of both areas this week.

A couple of sessions gave me an opportunity to get some real "hands-on" time with some of the new GUI management technologies, zOSMF and the zManager GUI for Unified Resource Manager. The zManager exercise was interesting because it provided an opportunity to see just how one goes through the process of setting up and allocating resources for the zBX blades in the new zEnterprise environment.

There were also quite a few sessions on zOSMF. Among other functions, zOSMF provides a GUI alternative to the original WLM ISPF-based dialogs, giving a much-needed face lift to the process of setting up and managing your WLM environment. For an overview of zOSMF as of z/OS V1.12, here is a link to a Share presentation from last summer that covers it well:

Tuesday, March 1, 2011

Yet another useful OMEGAMON DB2 V5.10 enhancement

I've mentioned quite a few enhancements that have come about as part of OMEGAMON XE for DB2 PM/PE V5.10. There is one more I want to mention.

The Application Trace Facility (ATF) is an important part of the OMEGAMON tool, and has been for many years. It remains one of the most consistently useful pieces of the tool. One aspect, or quirk if you will, of ATF in prior releases: if you wanted to see SQL text in the trace, you could see the actual text for dynamic SQL, but for static SQL, all you got was the statement number. You would then have to look somewhere else, referencing the statement number, to see the actual text for static calls. In V5.10 this has changed. You can now see the actual SQL text in the ATF trace regardless of whether it is a dynamic or static SQL call. This is much more convenient, since more often than not the root of the problem is poorly designed SQL code.

zEnterprise discussion at Share

Needless to say there will be quite a few sessions on the new zEnterprise technology this week at the Share conference in Anaheim. This will include general discussions of the overall technology (hardware and software), and the ability of the platform to handle very diverse workloads.

The ability of zEnterprise to operate diverse workloads is clearly one of its strengths. Another aspect of this came up while I was talking at the trade expo with a customer who had brought the new technology into their shop at the end of 2010. Their justification: the ability to back up critical business data from a variety of operating system platforms, with integrity and consistency, from a single coherent platform. For them, being able to streamline and consolidate this process was imperative, and it stands to reason that consistent backups are a part of any data processing operation.