Friday, July 30, 2010
In August and September there will be a series of events covering the zEnterprise technology. The events are complimentary.
At each half-day event you can get an overview of the new system, with all its hardware and software innovations, and find out how zEnterprise incorporates advanced industry and workload strategies to reduce your total cost of ownership and acquisition. You will also be able to put questions to IBM and industry experts from many areas, including zEnterprise, Business Intelligence, and Tivoli.
Here is a list of cities and dates:
Minneapolis, MN August 17
Houston, TX August 19
Detroit, MI September 14
Montreal, QC September 14
Ottawa, ON September 15
Columbus, OH September 16
Hartford, CT September 16
Toronto, ON September 21
Jacksonville, FL September 21
Atlanta, GA September 22
Los Angeles, CA September 22
Here is a link to register (the price is right, the event is free):
https://www-950.ibm.com/events/wwe/grp/grp017.nsf/v16_events?openform&lp=systemzbriefings&locale=en_US
Wednesday, July 28, 2010
More efficient reporting in OMEGAMON DB2 PM/PE
This was an interesting issue that I helped a customer with recently in the area of OMEGAMON DB2 reporting.
The customer wanted to create a DB2 Accounting summary report for just a single plan, in this case DSNTEP2. Out of several million accounting records in SMF, there were, on a typical day, only a few hundred relevant accounting records. When the customer ran the report, the batch job would run for almost two hours and use a fair amount of CPU, considering how little report data was being generated. Even allowing for the several million accounting records the job had to read, this seemed like an excessively long run time.
I reviewed the OMEGAMON DB2 reporter documentation (SC19-2510) and noted an entry pointing out that the GLOBAL option helps reporter performance. The doc says: "Specify the filters in GLOBAL whenever you can, because only the data that passes through the GLOBAL filters is processed further. The less data OMEGAMON XE for DB2 PE needs to process, the better the performance."
So, for example, if all you want is a specific plan and authid, try the following:
GLOBAL
INCLUDE (AUTHID (USERID01))
INCLUDE (PLANNAME (DSNTEP2)) .....
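For context, here is a minimal sketch of how the filter might sit in a complete reporter batch job. The program name (FPECMAIN), DD names, and data set names are assumptions based on typical OMEGAMON XE for DB2 PE batch jobs; check the sample JCL shipped with your release.
//* Sketch only: run the batch reporter with a GLOBAL filter so that
//* only the matching accounting records are processed further.
//PERPT    EXEC PGM=FPECMAIN
//STEPLIB  DD DISP=SHR,DSN=OMPE.LOADLIB        placeholder load library
//INPUTDD  DD DISP=SHR,DSN=MYHLQ.SMF.ACCTG     SMF data with DB2 accounting records
//ACRPTDD  DD SYSOUT=*                         accounting report output
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *
GLOBAL
  INCLUDE (AUTHID (USERID01))
  INCLUDE (PLANNAME (DSNTEP2))
ACCOUNTING
  REPORT
EXEC
/*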
The net result for this user was that the run time for the same report dropped to just a few minutes, and the CPU usage of the batch job dropped dramatically as well. Take advantage of this option when you can.
zEnterprise and Tivoli
The new zEnterprise systems are bigger, faster, better. More power and superior integration are important benefits of the technology.
So the question may be: where do IBM Tivoli solutions fit into this new computing capability? The short answer is that the Tivoli approach fits very well within this paradigm. One of the strengths of the Tivoli approach is integration and flexibility. The existing Tivoli suite of solutions (for example, your OMEGAMON monitoring technology) will continue to run and provide value in this environment.
As details emerge, I will do more posts on how Tivoli will continue to exploit the new zEnterprise technology. Stay tuned.
Friday, July 23, 2010
About the new zEnterprise system
Yesterday IBM announced the new zEnterprise system. Bigger, faster, better. That seems to be the bottom line.
Here are some of the specs from the IBM announcement page:
"At its core is the first model of the next generation System z, the zEnterprise 196 (z196). The industry’s fastest and most scalable enterprise system has 96 total cores running at an astonishing 5.2 GHz, and delivers up to 60% improvement in performance per core and up to 60% increase in total capacity."
Here is a link to the announcement:
http://www-03.ibm.com/systems/z/news/announcement/20100722_annc.html
Wednesday, July 21, 2010
Reminder - Upcoming webcasts tomorrow
Tomorrow, July 22nd, there will be two webcasts that you may find interesting.
The first is my webcast, "Top 10 Problem Solving Scenarios using IBM OMEGAMON and the Tivoli Enterprise Portal". This event is at 11 AM Eastern time. Here's a link to register and attend:
http://www.ibm.com/software/systemz/telecon/22jul
The second event is a major new technology announcement for System z. That event happens from 12 PM to 2 PM Eastern time. Here is a link for this event:
http://events.unisfair.com/rt/ibm~wos?code=614comm
Take advantage of RSS feeds to stay up to date
I was working with a customer recently, and we were talking about ways to stay current on what is happening in terms of issues and available fixes for their IBM Tivoli products.
IBM Tivoli support provides RSS feeds on all its relevant support pages. RSS feeds are a great way to keep track of what is happening, and they are easy to set up and use. All you need is a link to where the RSS feeds are and an RSS reader (I use Google Reader; it's free).
One of the main RSS feeds pages has links for all the relevant IBM Tivoli products.
Thursday, July 15, 2010
Analyzing the CPU usage of OMEGAMON
OK. As I suggested last week, you have looked at your SMF data (or something comparable) for a typical 24-hour period, and you now have an idea of how much CPU each OMEGAMON address space uses. In general, some tasks will use more CPU resource than others. What's normal? As the saying goes, it depends. The next step is to take the list of how much each task uses and look for some patterns.
For example:
High CPU usage in both the CUA and Classic tasks for a given OMEGAMON: maybe an autorefresh user in CUA is driving the Classic task as well. It could also be OMEGAVIEW sampling at too frequent a rate, thereby driving the other tasks (check your OMEGAVIEW session definition).
CUA is low, but the Classic interface is high: now you can rule out autorefresh in CUA and the OMEGAVIEW session definition, but you could still have a user in Classic doing autorefresh (remember to check with .VTM). This could be automation logging on to Classic to check for exceptions. It could also be history collection: near-term history in OMEGAMON for DB2 and IMS has a cost, Epilog in IMS has a cost, and CICS task history (ONDV) can store a lot of information in a busy environment.
Classic and CUA are low, but TEMA (agent) tasks are high: start looking at things like the situations distributed to the various TEMAs. Look at the number of situations, the situation intervals, and whether there are a lot of situations with Take Actions.
TEMS is high: this could be many things. DASD collection. Enqueue collection. WLM collection. Sysplex proxy (if defined on this TEMS). Situation processing for the z/OS TEMA (which runs inside the TEMS on z/OS). Policy processing (if policies are being used). Just to name a few things to check.
The above is not an exhaustive list, but it is a starting point in the analysis process. The best strategy is to determine certain tasks to focus on, and then begin your analysis there.
Wednesday, July 14, 2010
More on Autorefresh
I posted earlier on the costs of over-using Autorefresh in the Classic interface. That brings up the question: how do you determine whether someone is using Autorefresh, and what refresh interval they are likely using?
There is one very handy Classic command that can tell you a lot about what is going on. The .VTM command shows information about the sessions connected to the Classic task. Most important, it shows the last time each session was updated, either by the user pressing Enter or by the screen being refreshed via Autorefresh.
Here is an example of the .VTM command in use, showing three sessions. The middle session is my own; I can infer this because its Last Update time stamp matches the time on the top line of the screen. The bottom session is probably a session between OMEGAVIEW and OMEGAMON. The top session is the one of interest: if I refresh my screen a few times, I see that its Last Update column increments on a fairly regular basis. From that you can infer that the top session is in Autorefresh mode, and what the interval is (in this example, the Classic default of 5 seconds).
Once you know that Autorefresh is being used, the next step is to locate the user in question, and ask them to either set the interval to a higher number, or to discontinue using Autorefresh mode.
Wednesday, July 7, 2010
Understanding the CPU usage of OMEGAMON
OMEGAMON installs as a set of started tasks on z/OS. Which started tasks get installed depends, of course, on which OMEGAMONs from the suite you are running and which features you have configured. For example, if you are running OMEGAMON XE for DB2 or OMEGAMON XE for CICS, the agent task for the XE GUI interface (aka the TEMA) is something you may or may not run, depending on whether you use the Tivoli Portal GUI. The TEMA is not required if all you use is the classic 3270 interface.
When you are looking at optimizing OMEGAMON, the first thing to understand is the CPU usage of each of the OMEGAMON address spaces. I suggest looking at SMF type 30 record output (or an equivalent) for each of the OMEGAMON started tasks and generating a report that summarizes CPU usage by task. Look at the data for a selected 24-hour period and look for patterns: for example, which OMEGAMON tasks use the most CPU. Different installations will see different tasks at the top of the list; it depends on which OMEGAMONs you have installed and how you have configured them. Once you have identified which OMEGAMON started tasks are using the most cycles relative to the others, you have a starting point for your analysis.
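As a starting point for that report, here is a minimal sketch of pulling just the SMF type 30 records with IFASMFDP so they can be fed into whatever SMF reporting tool you use. The data set names are placeholders, and date/time selection options are omitted; adjust for your installation.
//* Sketch only: extract SMF type 30 records so CPU usage can be
//* summarized by started task name.
//SMF30    EXEC PGM=IFASMFDP
//DUMPIN   DD DISP=SHR,DSN=SYS1.SMF.DAILY      placeholder input SMF data
//DUMPOUT  DD DSN=MYHLQ.SMF30.SUBSET,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),
//            DCB=(RECFM=VBS,LRECL=32760)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(30))
/*
From there, a reporting tool such as SAS/MXG or DFSORT/ICETOOL can summarize the CPU time fields by job name, one line per OMEGAMON started task.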