OMEGAMON z/OS users often use OMEGAMON to track and analyze resource usage and status inside the z/OS coupling facility (the CF). There has been a recent change to how this information is calculated and displayed in OMEGAMON z/OS. With UA67458 applied, the Utilized Storage Size now shows the 'in use' percent of the total allocated objects (entries and data elements) for the structure.
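As a rough sketch of the new calculation, the 'in use' percent is the in-use objects divided by the total allocated objects. Note this is just my reading of the change: the field names and the combining of entries and data elements into one total are assumptions, not taken from the product itself.

```python
# Hypothetical sketch of the post-UA67458 Utilized Storage Size calculation:
# 'in use' percent of the total allocated objects for a CF structure.
# Field names and the combined entries+elements formula are assumptions.

def utilized_pct(entries_in_use, entries_alloc, elements_in_use, elements_alloc):
    """Percent of allocated objects (entries + data elements) in use."""
    total_alloc = entries_alloc + elements_alloc
    if total_alloc == 0:
        return 0.0
    return 100.0 * (entries_in_use + elements_in_use) / total_alloc

# e.g. 500 of 1000 entries and 1500 of 3000 data elements in use
print(utilized_pct(500, 1000, 1500, 3000))  # -> 50.0
```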
Here's a link to a technote that discusses this in a little more detail.
http://www-01.ibm.com/support/docview.wss?uid=swg21618059&myns=swgtiv&mynp=OCSS2JNN&mync=R
Friday, December 28, 2012
Wednesday, December 19, 2012
Using the Tivoli Portal for topology views
Thanks to my esteemed colleague, Ernie Gilman, for this example.
I've shown quite a few examples over various blog posts of how to use plots, charts, and various other graphic capabilities within the Tivoli Enterprise Portal. But did you know you could use the Tivoli Portal to build topology views that show the inter-relationship of various monitored resources?
Here's a very nice example created by Ernie that shows how you can use the topology view to leverage the data that comes from OMEGAMON for zLinux and zVM. Here we see the inter-relationship of the various Linux OSes running within zVM. It's a nice way to be able to see the big-picture view, with drill-downs for more detail. Nice job, Ernie!
Documentation on techniques for analyzing OMEGAMON CPU usage
If you want to get a better understanding of the CPU usage of the OMEGAMON address spaces running on z/OS, there are a variety of techniques. I've blogged extensively on the topic of OMEGAMON cost of monitoring considerations in prior posts.
Let's say you are looking at a particular OMEGAMON task, and would like to know more about what that particular task is doing. There are some useful commands available to help better understand OMEGAMON CPU usage (such as the PEEK command).
Here's a link to a document that shows examples of some of these commands and how to use them:
http://www-01.ibm.com/support/docview.wss?uid=swg21607775&myns=swgtiv&mynp=OCSSXS8U&mynp=OCSS2JNN&mynp=OCSS2JFP&mynp=OCSSLSDR&mync=R
Friday, December 14, 2012
Technote on OMEGAMON Mainframe Networks V5.1 issues and workarounds
OMEGAMON For Mainframe Networks V5.1 became available in October. At the time it came out, there was also a technote published that listed various known issues and workarounds for V5.1. If you are installing V5.1 of Mainframe Networks, it's worth taking a look at this technote.
Here's a link to the technote:
http://www-01.ibm.com/support/docview.wss?uid=swg21614258&myns=swgtiv&mynp=OCSS2JL7&mync=R
Thursday, December 13, 2012
New 3270 displays in OMEGAMON XE For Mainframe Networks V5.1
Like the other OMEGAMON V5.1 releases, OMEGAMON XE For Mainframe Networks V5.1 offers support for the new enhanced 3270 user interface. So you get a whole new set of screens and displays for network analysis.
If you recall the "Top Consumer" display in OMEGAMON z/OS, that screen shows the top consumers of CPU, memory, and I/O across the z/OS environment. It's a good starting point for a top-down identification and analysis of potential issues.
The new KN3TAP0 screen serves a similar function. From this screen you can see a network application summary view, and also see network exceptions, such as connections with segments out of order, segment retransmission, and connections in backlog status. These can all be indicators of potential issues that may require additional analysis.
Here's an example of the KN3TAP0 display:
Friday, December 7, 2012
Tivoli Enterprise Portal Support for DB2 V10
If you are installing the Tivoli Enterprise Portal Server (TEPS) on a platform such as Windows, the support matrix shown in the documentation lists DB2 V9.7 as the highest release level that works with the TEPS.
The question that has been put to me recently is: when will the TEPS infrastructure work with DB2 10? The answer is IBM Tivoli Monitoring (ITM) 6.23 Fixpack 2. So if you want to run DB2 10 on your Windows TEPS server, for example, you need to be at the ITM 6.23 FP2 level.
Tivoli APAR notification page
I was looking up some APAR information recently and came upon a useful-looking web page, the Tivoli APAR notification page. The Tivoli APAR notification page contains a list of open or recently closed APARs that may impact you. There are links for many of the products of interest, including the various OMEGAMONs and System Automation.
Here's a link to the page:
http://www-01.ibm.com/support/docview.wss?uid=swg21177831&myns=swgtiv&mynp=OCSS9NU9&mynp=OCSS6V4G&mynp=OCSSLKT6&mynp=OCSSSHZA&mynp=OCSSSHTQ&mynp=OCSSSHRK&mynp=OCSSPLFC&mynp=OCSSXS8U&mynp=OCSS2JNN&mynp=OCSS2JFP&mynp=OCSS2JL7&mynp=OCSSGSPN&mynp=OCSSGSG7&mync=
Thursday, November 29, 2012
How to know what maintenance level you are running
I had an interesting question from an OMEGAMON user recently. If you apply maintenance to OMEGAMON and recycle the tasks, is there an easy way to tell if the new maintenance is being executed?
Well, fortunately there are some easy-to-find eye-catchers in the OMEGAMON RKLVLOG/SYSOUT output. Log on to TSO and go to SDSF. Depending upon the type of OMEGAMON task, you may need to look in a different place: for a TEMS or a TEMA task, look at the RKLVLOG for the respective agent; for a classic collector task, look at SYSPRINT or the JES messages. Just go to the respective task and do a find on 'BUILD'. That's it.
Here's a z/OS example from the RKLVLOG of the TEMS (here you can see the TEMS is running at ITM 6.23 and has support for fixpack 1 included):
KDS Server Version: 623 Build: 12044 Driver: 'tms623fp1:d2050'
Here's an IMS example from the JES messages of the OMEGAMON IMS classic collector task:
OID100: OMEGAMON FOR IMS v510 (Build Level=20120723.1505) STARTED
Here's a DB2 example from the SYSPRINT of the OMEGAMON DB2 classic collector:
FPEV0114I PTF BUILD IS 12.223 13:02:07
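If you have the task output saved off to a file, you can also script the check rather than eyeballing it in SDSF. Here's a small sketch; the regular expressions are keyed to the three sample messages above and are illustrative only, not an exhaustive list of eye-catchers, and real output may vary by product and release.

```python
import re

# Illustrative patterns keyed to the sample eye-catchers shown above.
BUILD_PATTERNS = [
    re.compile(r"KDS Server Version:\s*(\S+)\s+Build:\s*(\S+)"),        # TEMS RKLVLOG
    re.compile(r"OMEGAMON FOR IMS\s+(\S+)\s+\(Build Level=([^)]+)\)"),  # IMS classic
    re.compile(r"FPEV0114I PTF BUILD IS\s+(.+)"),                       # DB2 classic
]

def find_build_info(log_lines):
    """Return the matched version/build groups for any eye-catchers found."""
    hits = []
    for line in log_lines:
        for pat in BUILD_PATTERNS:
            m = pat.search(line)
            if m:
                hits.append(m.groups())
    return hits

sample = ["KDS Server Version: 623 Build: 12044 Driver: 'tms623fp1:d2050'"]
print(find_build_info(sample))  # -> [('623', '12044')]
```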
Wednesday, November 28, 2012
An example of the Network Extended workspace
I've posted earlier about the Network Extended workspaces. These are workspaces that you can download and deploy in your Tivoli Portal if you have OMEGAMON Mainframe Networks.
Here's an example of the Network Extended workspaces. In this example you see an integrated view of TCP/IP performance metrics (both real time and a plot chart) across multiple z/OS LPARs and TCPIP stacks. On the left you see several drill-down options to look at connections, backlogged connections, FTP, OSA, and much more. There is also an option for the FIND capability that I will discuss in a later post.
Monday, November 26, 2012
An upcoming webcast on z/OS storage tools
Out-of-control storage growth is putting a heavy load on many storage administrators. If you are interested in z/OS storage management, you will want to check out this webcast, "Improve System z storage management with integrated storage suite".
This webcast will discuss the consolidation of various IBM z/OS storage management tools, covering ICF catalog management, space management, auditing, and automatic correction of errors in a DFSMS environment.
The speaker is Kevin Hosozawa, Tivoli System z Storage Management Product Manager, IBM Software Group. The event will be December 5, 2012 at 11:00 a.m. Eastern Standard Time.
To register for the event, click on the following link:
http://ibm.co/PQrxC4
Thursday, November 15, 2012
Check out the Network Extended workspaces
If you have OMEGAMON Mainframe Networks, you have an opportunity to add a very nice set of field-developed workspaces to your Tivoli Portal. My colleague, Ernie Gilman, created these screens, and they have quite a few nice views and features. The Network Extended workspaces provide a very powerful set of views of your z/OS network, and add a new FIND capability in the Portal.
It's easy to enable the screens. You download the code, in the form of an XML file, and then issue a TACMD command to import the navigator item.
Here's a link to download the Network Extended screens (the link provides both the XML code and documentation):
http://www-304.ibm.com/software/brandcatalog/ismlibrary/details?catalog.label=1TW10OM1K
If you are interested in a demo of the Network Extended workspaces, here's a link to a YouTube video:
http://www.youtube.com/watch?v=jVjonG6Zfrw
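The import step itself looks something like the following. Treat this as a sketch: the tacmd verb and flags are from memory, the file name is a placeholder, and you should check the documentation included with the download for the exact syntax at your ITM level.

```
tacmd login -s tepshost -u sysadmin
tacmd importnavigator -x Networks_Extended.xml
```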
Friday, November 9, 2012
An interesting Tivoli Portal chart technique
The Tivoli Enterprise Portal offers quite a few plot chart and other charting options. The graphics can be nice, and you have quite a few different ways to present the information.
As with any screen of data, room on the screen may be precious. When you are plotting multiple sets of data, the legend of the plot chart is important to show what metrics are being plotted. However, the legend may take up quite a bit of room on the screen. How can you condense this? One way is to use a collapsible legend to show and then hide the chart legend.
Here's an example (this example is taken from Ernie Gilman's excellent Networks_Extended workspaces). Here we see a plot chart for the network byte rate for multiple TCPIP stacks on z/OS.
If you look at the bottom left corner of the chart you see the legend which shows which plot line is for which TCPIP stack. To make the legend go away, click on the triangle pointed at by the arrow.
How do you enable the option? If you look at the properties for the chart and click the style tab, you see an option "Place legend in collapsible panel". Check this option to enable the collapsible legend.
It's an interesting technique to save a few pixels of space on the screen.
Tuesday, November 6, 2012
Added a new link for a blog
Dave Ellis works for IBM in the area of OMEGAMON R&D. Dave also maintains a very informational blog on OMEGAMON and related topics. I added a link to his blog under "Useful Links".
Sending SNMP traps from OMEGAMON agents
SNMP (Simple Network Management Protocol) traps are a commonly used technique to forward alerts to alert management monitors. There are a variety of methods to send SNMP traps from OMEGAMON running on z/OS to an SNMP alert receiver. One common way is to use the SNMP alert emitter option that comes as part of ITM policy automation. Another technique is to have OMEGAMON send messages to z/OS console automation, and then in turn use console automation (usually via REXX code) to send the alert in the form of an SNMP trap.
But, there's another way you may not be aware of. With ITM 6.22, IBM Tivoli Monitoring introduced the capability of agent autonomy. With this support, you can create a trap configuration XML file that enables an agent to emit SNMP alerts directly to the event receiver with no routing through the monitoring server (meaning the TEMS). To do this you need to place an SNMP trap configuration member, in the form of XML, in the RKANDATV library.
Here's an example. Suppose we are sending alerts from the OMEGAMON DB2 agent. So we add a member, KDPTRAPS (note: each agent type, such as z/OS, CICS, etc., has its own XML member). Within the member we specify the address and port number for the SNMP receiver. We then add the situation that will generate a trap, in this example DB2_Demo_Alert. So in this example, if DB2_Demo_Alert is true, the agent will send an SNMP trap directly to the SNMP receiver. If you want to add this for additional situations, you add additional lines for situation name and target.
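Based on that description, a trap configuration member might look roughly like this. The element and attribute names follow my recollection of the ITM agent autonomy documentation and should be treated as illustrative; the receiver address, port, and community values are placeholders.

```xml
<SNMP version="1">
   <!-- Where to send the traps: placeholder host/port/community values -->
   <TrapDest name="Dest1" Address="192.0.2.10" Port="162"
             Version="v2c" Community="public" />
   <!-- Emit a trap to Dest1 whenever this situation fires -->
   <Situation name="DB2_Demo_Alert" target="Dest1" />
   <!-- Add one Situation line per additional situation/target pair -->
</SNMP>
```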
If you want more information on this, here is a link to the documentation:
http://www-01.ibm.com/support/docview.wss?uid=swg21422088
Also, here's a link to another good blog post that describes the function:
http://davideellis.wordpress.com/2012/05/10/snmp-traps-from-ibm-tivoli-monitoring-itm/
Thursday, November 1, 2012
ITM 6.23 Fixpack 2 is available
ITM 6.23 Fix pack 2 is now available. This is a cumulative fix pack for ITM 6.2.3, and supersedes ITM 6.23 Fix pack 1. The fix pack contains over 90 APAR fixes as well as a set of new features.
One thing to keep in mind is that the new OMEGAMON releases (OMEGAMON z/OS, CICS, IMS, Storage, and Mainframe Networks V5.1, OMEGAMON DB2 V5.11, and OMEGAMON Messaging V7.1) all require ITM 6.23 support for the enhanced 3270 user interface. I still see quite a few customers running ITM 6.22, so if you want to move to the new OMEGAMONs, now is the time to start upgrading your ITM infrastructure.
Here's a link for more information on ITM 6.23:
http://www-01.ibm.com/support/docview.wss?uid=swg2403242
Thursday, October 25, 2012
Want to learn more about the new release of OMEGAMON Mainframe Networks?
OMEGAMON Mainframe Networks V5.1 is becoming generally available. If you want to learn more about what is in the new version, you may want to check out this webcast: "Improved Network Monitoring with OMEGAMON for Mainframe Networks".
Is network performance and availability important to your business? Can you quickly find and fix network problems before they become outages? IBM has redesigned OMEGAMON XE for Mainframe Networks based on customer requirements to provide significant new network visibility. The redesigned OMEGAMON XE for Mainframe Networks V5.1 helps subject matter experts resolve network issues across the enterprise using fewer screens and keystrokes.
The speaker is Kirk Bean, Tivoli System Automation Product Manager. The date and time of the event is November 15, 2012 at 11:00 a.m. Eastern Standard Time.
To sign up here's a link:
http://ibm.co/ODhjVn
Here it comes - OMEGAMON Mainframe Networks V5.1
Planned availability is tomorrow, October 26th. With the availability of OMEGAMON Mainframe Networks V5.1, the OMEGAMON suite has achieved an important milestone: all the core OMEGAMONs on z/OS now offer support for the new enhanced 3270 user interface.
As with the other OMEGAMONs, the new enhanced 3270 UI offers new function, improved navigation and integration, but with the speed of native 3270.
If you are interested in more information, here is a link to the announcement:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?subtype=ca&infotype=an&appname=iSource&supplier=897&letternum=ENUS212-429
Thursday, October 18, 2012
Useful link on OMEGAMON maintenance and service levels
So, you're in the process of applying maintenance to OMEGAMON. Or better yet, you are in the process of upgrading to V5.1. Where is there an integrated page to get information on the latest maintenance and service levels?
Try this URL:
https://www-304.ibm.com/support/docview.wss?uid=swg21290883#kc5420
Tuesday, October 16, 2012
New IBM System z blog
IBM has started a new blog site with a focus on System z. IBM Mainframe Insights will be discussing many aspects of z/OS and the mainframe in general.
To check it out, here's the URL:
www.ibm.com/blogs/systemz
Friday, October 12, 2012
Troubleshooting OMEGAMON enhanced 3270 displays
You've logged on to the enhanced 3270 user interface and you see some displays that are missing data. What's next? What are your troubleshooting steps?
Here's a link to a handy document that may help:
http://www-01.ibm.com/support/docview.wss?uid=swg21610269&myns=swgtiv&mynp=OCSS2JNN&mync=R
Important. Do not apply PTFs UA66217, UA66505, UA66218 and UA66506
PTFs UA66217, UA66505, UA66218 and UA66506 have been marked PE. These PTFs can cause storage overlays and system outages.
APAR OA40497 has been created to fix this problem. This APAR has been marked HIPER and is the fix for the PE.
Please note: if you already have any of these PTFs applied, APAR OA40497 has zaps that can be applied to resolve the problems until the fixing PTF can be applied.
Here's a link to the note:
http://www-01.ibm.com/support/docview.wss?uid=swg21610372&myns=swgtiv&mynp=OCSS2JNN&mync=R
Friday, October 5, 2012
More OMEGAMON test drive events coming your way!
If you want a chance to take the new OMEGAMON enhanced 3270 user interface for a test drive, IBM will be having events in several cities in the next few weeks.
Here's the upcoming lineup of cities:
November 6, 2012
San Francisco
IBM Technical Exploration Center
425 Market Street
20th Floor - Room 380
November 8, 2012
Sacramento
IBM Sacramento Office
2710 S. Gateway Oaks Dr.
2nd Floor - Room 220
November 13, 2012
Costa Mesa
IBM Technical Exploration Center
600 Anton Blvd
2nd Floor - Room 208
November 15, 2012
Phoenix
IBM Technical Exploration Center
2929 N Central Avenue
5th Floor - Room 534
If you are interested in attending, please contact my colleague, Tony Anderson (andersan@us.ibm.com, or call (415) 545-2478).
Tuesday, October 2, 2012
Setting the threshold highlighting in the enhanced 3270 user interface
So you have just installed OMEGAMON V5.1 and you have logged onto the enhanced user interface for the first time. You notice that there may be some fields on the various panels that are highlighted in RED, YELLOW or BLUE, for example. This highlighting probably represents the default threshold levels provided by the tool.
The good news is that it is easy to set the highlighting to levels more specific to your installation and your requirements. For each of the products there is a default member provided that sets the default threshold levels: hilev.RKANPAR(kppTHRSH), where pp refers to the specific product. For example, OMEGAMON z/OS uses KM5THRSH, OMEGAMON CICS uses KCPTHRSH, and OMEGAMON DB2 uses KDPTHRSH.
The following is an example of how the thresholds are set for the KM5TOPC panel.
Thursday, September 27, 2012
OMEGAMON Messaging command interface
As in the prior example of OMEGAMON IMS V5.1, the new release of OMEGAMON Messaging provides an easy-to-use command facility integrated into the enhanced 3270 user interface. Here's an example:
In the above example we see how you can issue commands, such as MQ channel commands seen here.
Wednesday, September 19, 2012
OMEGAMON IMS V5.1 expands command support
In prior releases of OMEGAMON IMS the Type 2 IMS commands were not supported. Now with OMEGAMON IMS V5.1 the tool finally provides support for Type 2 IMS commands. Here's an example:
Tuesday, September 18, 2012
Powerful new 3270 displays with OMEGAMON Storage V5.1
OMEGAMON Storage V5.1 includes support for the new enhanced 3270 user interface. The implementation of enhanced UI for OMEGAMON Storage is robust, and includes quite a few powerful new displays.
One example of a very useful display that has been translated from the Tivoli Portal to the enhanced 3270 UI is the Dataset Attributes screen. The Dataset Attributes Database is a feature of OMEGAMON Storage that helps you manage data sets. The database maintains attributes related to data set space, DCB, and catalog issues. It collects data for all data sets on all volumes in the z/OS environment (except for volumes that you have excluded) so that you can do analysis from a single point of control. You can accomplish such tasks as identifying exception conditions regarding data sets throughout the environment, seeing an installation-wide view of data set space utilization, exception conditions, and summary statistics, and identifying resources that require attention, such as data sets that have excessive unused space, extents, or CA/CI splits.
Here is an example of the Dataset Attributes screen in the enhanced 3270 interface. Note how the display aggregates quite a bit of information together.
Friday, September 14, 2012
OMEGAMON Messaging V7.1 provides many powerful new displays
It's exciting that OMEGAMON Messaging V7.1 now provides a robust 3270 interface. With V7.1 there are many powerful new displays, including the Health Overview panel.
The Health Overview panel gives the overall health values for queue managers, queues, and channels, and the worst health status on the display is sorted to the top of the list. The values used in determining health include Queue Manager status (Is it active? Are the channel initiator and command servers active? Are there any connections?), queue performance indicators (Are there queues with high depth? Do XMIT queues have messages? Are there messages on the DLQ? Are there get-inhibited or put-inhibited queues?), and channel status (Is the percent (of max) current channels or active channels too high? Are there current channels not in running state? Are there in-doubt channels?).
There's a lot of information on the display, and it's a good place to start for problem analysis. Here's an example of the screen:
Thursday, September 13, 2012
OMEGAMON IMS V5.1 adds support for the enhanced 3270 user interface
With the release of OMEGAMON IMS V5.1, we now have another OMEGAMON in the suite that supports the enhanced 3270 user interface. The big benefits of the enhanced 3270 user interface are consistency and integration. With the enhanced user interface you can log on to one UI and access any of the OMEGAMONs from a central point.
Here's an example of what the OMEGAMON IMS enhanced user interface looks like.
Just like the DB2 monitor, you get a default panel that can show both IMSplex-level information and the individual IMS subsystems. You can use the "/" navigation to drill in for detail, in a similar manner to how the other OMEGAMONs operate.
Wednesday, September 12, 2012
New versions of OMEGAMON IMS, Storage and Messaging announced
Yesterday IBM announced new versions of OMEGAMON IMS, OMEGAMON Storage, and OMEGAMON for Messaging. These new versions continue the process, begun earlier this year, of expanding the integration and consolidation of the OMEGAMON 3270 interface capabilities. And, as always, a new version of OMEGAMON is an exciting event.
IBM Tivoli OMEGAMON XE for IMS on z/OS V5.1, IBM Tivoli OMEGAMON XE for Storage on z/OS V5.1, and IBM Tivoli OMEGAMON XE for Messaging on z/OS V7.1 use a new architecture that simplifies installation (PARMGEN) and includes integrated enhanced 3270 monitors (the new enhanced 3270 ui). These products integrate related data to help subject matter experts solve problems encountered on a day-to-day basis.
New capabilities of IBM Tivoli OMEGAMON XE for IMS on z/OS V5.1, IBM Tivoli OMEGAMON XE for Storage on z/OS V5.1, and IBM Tivoli OMEGAMON XE for Messaging on z/OS V7.1 are designed to:
- Improve problem resolution efficiency by requiring fewer steps to isolate root cause performance impact in real time, and, therefore, providing greater availability.
- Improve visibility, control, and automation with a new more comprehensive 3270-based user interface capable of viewing the entire enterprise-wide environment from a single 3270 screen.
- Reduce the time required for installation, configuration, and maintenance by leveraging enhanced IBM Tivoli Monitoring functions and a new PARMGEN configuration tool.
- Provide an enterprise end-to-end view of performance monitoring and availability on a common technology for both IBM System z® and open systems.
For more information here is a link:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS212-308
Friday, September 7, 2012
Using Tivoli Data Warehouse for trending analysis
Because it is snapshot-type data, Tivoli Data Warehouse (TDW) data works well for trending and analysis. TDW data is not necessarily equivalent to SMF data, which tends to be more interval or event driven in nature. TDW is not equivalent to CICS SMF 110s or DB2 Accounting trace data. But if you want to trend resource usage (such as CPU, DASD, or memory) over time, TDW can work well in that context. Also, when doing trending analysis, the TDW summarization process can add some interesting information to the data.
Here's an example of what I'm talking about. In this example, I'm trending z/OS CPU over time using CPU TDW snapshot data summarized by hour. Note that in addition to average CPU usage you also get a Min and Max CPU usage for the same time interval, and here I show how easy it is to chart that information on a single graph.
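To show what hourly summarization gives you, here is a minimal sketch of how snapshot data turns into per-hour average, minimum, and maximum values. The sample timestamps and CPU figures are invented for illustration; they are not the actual TDW schema:

```python
from collections import defaultdict

# Illustrative CPU snapshot data: (timestamp, CPU percent busy).
snapshots = [
    ("2012-09-07 10:05", 42.0), ("2012-09-07 10:20", 61.5),
    ("2012-09-07 10:35", 55.0), ("2012-09-07 11:05", 70.2),
    ("2012-09-07 11:50", 64.8),
]

# Group snapshots by hour, keyed on the "YYYY-MM-DD HH" prefix.
by_hour = defaultdict(list)
for ts, cpu in snapshots:
    by_hour[ts[:13]].append(cpu)

# Each hour yields avg, min, and max -- the three series you can
# chart together on a single graph.
summary = {
    hour: {"avg": sum(v) / len(v), "min": min(v), "max": max(v)}
    for hour, v in by_hour.items()
}
print(summary["2012-09-07 10"])  # avg ~52.83, min 42.0, max 61.5
```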
Tuesday, August 28, 2012
Learn about what's new in zEnterprise
Today there will be a virtual event to introduce the next generation of zEnterprise, the world’s fastest, most scalable and secure enterprise system with the ability to integrate resources for operational analytics, trusted resilience, and efficiency at an enterprise scale.
The virtual doors open at 10:45 Eastern Time. Here's a link to attend:
http://bit.ly/NRtwF6
Friday, August 24, 2012
An example of analyzing issues with Tivoli Data Warehouse
I had an interesting project this week to get Tivoli Data Warehouse (TDW) operational for a customer. With the assistance of an IBM colleague (thank you Andrew), we got TDW fully configured and (seemingly) connected. The Warehouse Proxy and Summarization/Pruning agents were started. The Warehouse Proxy log showed that it was connecting to the TEMS. The Warehouse Proxy configuration workspace showed that everything was green and connected. Real-time monitoring was working great. We could get 24 hours' worth of history, but nothing more. So why were we still not seeing any data being written to the TDW?
We looked at the Warehouse proxy log on the TDW box. There were no obvious error messages. But there also were none of the eyecatchers we noted in earlier posts (such as that TDW database objects were being created). So if the Warehouse Proxy had no errors, what was the issue?
In this shop, the agents were mainframe TEMAs. The next step was to look at the RKLVLOGs of the agents running on z/OS. When I looked through the logs I noticed messages, such as the following, that would occur every hour. Since we had set the warehouse interval for one hour for this collection, this seemed like more than just a coincidence.
+4EBF9D4E.0002 ERROR MESSAGE: "RPC Error"
(4EBF9D4E.0003-13B4:khdxdacl.cpp,577,"routeExportRequest") Export for object (table POOLS appl KIP) failed in createRouteRequest, Status = 8.
(4EBF9D63.0000-13B4:kdcc1sr.c,460,"rpc__sar") Connection failure: "ip.pipe:#xx.xxx.x.xxx:63358", 1C010001:1DE00045, 21, 100(3), FFFF/4119,
The conclusion was that this message indicated the TEMA was attempting to connect to the warehouse infrastructure to send data to the TDW, but was getting a connection error. And this error was happening every hour.
The problem? In this case, there was an internal firewall that was blocking off the port we needed (in this example 63358 - see above). Although real time monitoring was working, and the TDW configuration panel showed green, we still needed this port to be able to send data to the TDW. Once the port was opened, the TDW worked perfectly.
The moral of the story? When analyzing these types of issues it is important to look at things from multiple perspectives, look at all the logs, and don't forget about things like firewalls and other security challenges.
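A quick connectivity check of the kind that would have caught the firewall problem is sketched below. The host name is a placeholder; the port matches the one from the log above:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    A False result with real-time monitoring still working is a hint
    that a firewall may be blocking just the warehouse traffic.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("tdw-proxy.example.com", 63358)
```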
Wednesday, August 22, 2012
Upcoming OMEGAMON webcasts in September
There are a couple webcast events covering OMEGAMON in September.
"Increase System z storage visibility with OMEGAMON Monitoring" will cover what's new and exciting in the area of OMEGAMON Storage and System z. The redesigned OMEGAMON XE for Storage V5.1 allows subject matter experts to resolve storage issues using fewer screens and keystrokes. OMEGAMON, along with IBM's Storage Management Suite, can provide end-to-end visibility, control, and automation for good storage management. The speaker is Kevin Hosozawa, Product Manager, OMEGAMON for Storage, IBM Software Group.
The webcast is September 13, 2012, 11 a.m., EDT. Here's a link to sign up:
http://ibm.co/N30pbq
Later in September there will be a webcast on OMEGAMON IMS.
"Improve IMS monitoring with new OMEGAMON V5." Get the details about what's new in OMEGAMON for IMS V5.1, including easier region navigation, reduced MIPS usage, and the capability to issue IMS commands from OMEGAMON. Learn how it reduces potential delays or outages by reporting coupling facility structure statistics and other critical IMS attributes.
The speaker is Mike Goodman, OMEGAMON for IMS Product Manager, IBM Software Group.
The webcast is September 27, 2012, 11 a.m., EDT. Here's a link to sign up:
http://ibm.co/NveTXc
Friday, August 17, 2012
More OMEGAMON V5.1 test drive events coming your way
Want a chance to take the new OMEGAMON V5.1 for a test drive? You can try it out for yourself on a live z/OS environment. There are more test drive events coming your way. Here's a list:
Chicago - Sept 11th
IBM Chicago TEC
71 South Wacker Drive, 6th Floor
Springfield, IL - Sept 25th
IBM Springfield TEC
3201 West White Oaks Drive, Suite 204
Minneapolis, MN - Oct 18th
IBM Minneapolis TEC
650 3rd Avenue South
Omaha, NE - Nov 7th
IBM Omaha Office
1111 N. 102nd Court, Suite 231
REGISTER Now! Please send an email to IBM Representative Cliff Koch at
cjkoch@us.ibm.com or call (314)-409-2859.
My presentations at Share
At the Share conference last week in Anaheim I gave a couple of presentations.
One topic was somewhat in keeping with one of the themes of the conference, Big Data. In "Predictive Analytics and IT Service Management" I cover the concepts of predictive analysis, consider common systems management challenges, and look at how predictive analysis concepts may be applied to address IT Service Management needs. Here's a link:
https://share.confex.com/share/119/webprogram/Session11479.html
Another session that I did covered the topic of "Understanding The Impact Of The Network On z/OS Performance". This was an updated version of a presentation I did at Share in Atlanta this past spring. In the presentation I discuss how to create a more integrated, end-to-end monitoring strategy that includes relevant network information. I cover understanding the impact of the network on z/OS performance, how to analyze typical z/OS workloads and network performance, analyzing performance using available tools (such as OMEGAMON), and recommended approaches and management strategies. Here's a link:
https://share.confex.com/share/119/webprogram/Session11900.html
Wednesday, August 15, 2012
Some very interesting Tivoli monitoring sessions
There were quite a few good Tivoli related monitoring and management sessions at Share last week in Anaheim. Some of the best ones were done by Mike Bonett of IBM.
"Using NetView for z/OS for Enterprise-Wide Event Management and Automation" was a very interesting session that covered numerous ways that you can exploit and use NetView in your enterprise. Here's a link:
https://share.confex.com/share/119/webprogram/Session11905.html
"The IBM Tivoli Monitoring Infrastructure on System z and zEnterprise" is one of the most comprehensive and concise reviews I've seen of the Tivoli infrastructure. If you want a good explanation of all the Tivoli terminology, this is the best one I've seen. Here's a link:
https://share.confex.com/share/119/webprogram/Session11524.html
"Forecasting Performance Metrics using the IBM Tivoli Performance Analyzer" is a good overview of the capabilities of this very interesting feature of the Tivoli Portal. Here's a link:
https://share.confex.com/share/119/webprogram/Session11523.html
"Getting Started with the Unified Resource Manager (zManager) APIs for zEnterprise Monitoring and Discovery" looks at the support for zManager now available in the Tivoli Portal. If you are not familiar with this support, it is very much worth looking at this presentation. Here's a link:
https://share.confex.com/share/119/webprogram/Session11630.html
Wednesday, August 8, 2012
New Share presentation on OMEGAMON optimization
It's Share conference week in Anaheim, California (home of Mickey and Minnie). I'll be providing links to interesting presentations I attended over the course of the week.
One presentation I attended yesterday was done by the always interesting and informative Don Zeunert (aka Dr Z). Don did a presentation on "Tuning Tips To Lower System z Costs with OMEGAMON Monitoring". This was a very informative presentation that covered quite a few tuning tips for OMEGAMON.
Here's a link to download the presentation:
https://share.confex.com/share/119/webprogram/Session11791.html
Friday, August 3, 2012
Monitoring Summarization and Pruning activity
We've looked at specifying history collection and, as part of that process, defining summarization and pruning. Summarization and pruning should happen on a regular basis, as defined in the setup and configuration for the Summarization and Pruning agent. Here's an example of where you would specify the time of day that summarization and pruning takes place.
Once summarization and pruning is set up, you can then monitor the summarization and pruning activity on an ongoing basis using a product-provided workspace (see example below). The statistics workspace under summarization and pruning shows interesting information, such as the tables being summarized, and whether any tables saw errors during summarization/pruning.
It's recommended that if you are collecting TDW history, and are using the summarization and pruning facility, that you check this workspace periodically to make sure you do not have any issues.
Monday, July 30, 2012
Upcoming webcast on the new OMEGAMON DB2 V5.11
There is an upcoming webcast event on the new release of OMEGAMON DB2, V5.11. This webcast will cover the capabilities of the tool, including: advanced problem determination using focused scenarios designed by customers, combining OMEGAMON DB2 information with CICS and z/OS in a new enhanced 3270 workspace, improved efficiency with fewer screen interactions to find root-cause performance impact in real time, and additional end-to-end response time measurement capability.
The speaker is Steve Fafard, Product Manager, OMEGAMON for DB2. The event will be August 9th, 2012, 11 a.m., EDT.
The price is right. The webcast is a free event. Here's a link to sign up:
http://ibm.co/Nt3HIq
Friday, July 27, 2012
Tivoli Data Warehouse - Summarization And Pruning considerations
We've covered quite a bit on the Tivoli Data Warehouse (TDW) and what happens when you initialize history collection for a given table.
Once you have had history collecting for an interval of time, you may want to look at using summarization and pruning to control the quantity of data being retained, and to also take advantage of the ability to use summarized data for trending purposes.
Below is an example of the configuration options for summarization and pruning. As part of the earlier exercise we enabled TDW history collection for Common Storage. Now we are going to enable summarization and pruning for this data.
In the above example, we are enabling pruning on detailed data by specifying that we will keep 30 days of detail. Detail beyond that point will be removed by the Summarization and Pruning agent process. The detail data will also be summarized, both on an hourly and on a daily basis. We will keep 90 days of hourly data and one year of daily data.
By specifying summarization and pruning, we can control the quantity of data being retained within the TDW. This can be very important, especially for high volume tables of data or tables being collected for a large number of managed systems.
Another consideration to be aware of is that when you specify summarization, the Summarization and Pruning agent will create another set of tables. In the above example, the agent will have DB2 create one table for hourly data and another table to hold daily data.
This is an area of confusion for some users. I have encountered shops who thought they were going to save space right away by turning on summarization. However, when they enabled the function, the TDW database ended up requiring more space. Why? Because the summarization agent had to create the additional tables to store the summarized data.
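The retention policy described above can be sketched as a set of prune cutoff dates. The table names and function are my own illustration, not the actual Summarization and Pruning agent logic:

```python
from datetime import date, timedelta

# Retention policy from the example above: 30 days of detail,
# 90 days of hourly summaries, one year of daily summaries.
RETENTION_DAYS = {"detail": 30, "hourly": 90, "daily": 365}

def prune_cutoffs(today):
    """Return, per table type, the oldest date the agent would keep.

    Rows older than the cutoff are candidates for pruning.
    """
    return {tbl: today - timedelta(days=n) for tbl, n in RETENTION_DAYS.items()}

print(prune_cutoffs(date(2012, 7, 27))["detail"])  # 2012-06-27
```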
Wednesday, July 25, 2012
Some things to be aware of when installing OMEGAMON DB2 V5.11
In addition to introducing support for the new enhanced 3270 user interface, there are some infrastructure considerations you should be aware of with OMEGAMON DB2 V5.11. OMEGAMON DB2 V5.11, in a similar fashion to OMEGAMON z/OS and CICS V5.1, introduces support for self-describing agents (SDA). SDA support eliminates the need for the application support media when defining the agent to the Tivoli monitoring infrastructure.
Another thing to be aware of is that the OMEGAMON DB2 agent (i.e. the TEMA) has been re-architected in V5.11. Because of this, there are some additional steps you may need to take when deploying OMEGAMON DB2 V5.11. It is important that you review the documentation in the V5.11 PSP bucket and look at the relevant readme information. Here's a link to the PSP information:
http://www-304.ibm.com/support/docview.wss?uid=isg1_5655W37_HKDB511
If you overlook this step, you may be missing some relevant information, in particular data sharing related information.
Thursday, July 19, 2012
OMEGAMON DB2 V5.11 features the new enhanced 3270 user interface
OMEGAMON DB2 V5.11 was recently released. There were several interesting enhancements and additions to the tool. One of the more interesting new features is support for the new enhanced 3270 user interface. With the addition of OMEGAMON DB2 to the fold, this means that three core OMEGAMONs (z/OS, CICS, DB2) now support the enhanced 3270 ui.
One of the nice aspects of the enhanced ui is the ability to do your monitoring using an integrated interface that enables you to monitor from a single point of view. Here is an example of the default KOBSTART panel, now showing DB2 information on the display.
From this central screen, as the example shows, you can easily navigate to z/OS, CICS or DB2.
From the main panel (KOBSTART), you can then drill down into DB2 specific detailed views, and drill in on commonly used displays, such as DB2 thread views. Here is an example of how you would drill in on DB2 threads.
From this view you can drill in on detail for each thread executing on the system, and analyze what the threads are doing and where they are spending their time.
Thursday, July 12, 2012
More TDW under the covers
Now that it appears data on z/OS Common Storage utilization is flowing to the TDW, be aware that there is information readily available to track on an ongoing basis the amount of data flowing to the TDW, and to show indications of potential issues with TDW history collection.
In the Tivoli Portal there are workspaces that show the status of the Warehouse Proxy, and also track history row processing counts and history collection error counts. The example below shows the Warehouse Proxy statistics workspace.
The rows statistics chart on the workspace shows the number of rows sent to the Warehouse Proxy and the number of rows inserted by the Warehouse Proxy into the TDW database. Note there is a marked difference in this example between the number of rows sent and the number of rows inserted. This indicates a potential issue where data is being sent to the Warehouse Proxy, but due to some issue the data is not making it into the TDW database. Note also in the Failures/Disconnections chart there is a count showing failures with the TDW.
Now that we see there is a potential issue with data making its way into the TDW database, how do you get more information on what the problem may be? First, you can look at the log for the Warehouse Proxy process. Also, you can look at the Warehouselog table in the TDW database (this is assuming you have it enabled - be aware of an earlier blog post I made on this not being on by default in ITM 6.23).
In the following example I show information selected from the Warehouselog table in the TDW. The Warehouselog table shows row statistics and error message information (if applicable) for each attempted insert to the TDW.
In the example we see that the Common Storage table has 16 rows received and 16 rows inserted. However, the Address Space CPU Utilization table seems to have an issue, with one row received, one row skipped, and an error message. Analysis of the error message would give some indication as to why the TDW is having an issue.
As you can see there is quite a bit of information available to track the status and activity of TDW, and it is worth checking this information on an ongoing basis to ensure history collection is proceeding normally.
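The received-versus-inserted check described above can be sketched as follows. The column names only approximate the real Warehouselog table, and the sample rows mirror the example in the post:

```python
# Illustrative Warehouselog rows: one clean table, one with a skip.
warehouselog = [
    {"table": "Common_Storage", "received": 16, "inserted": 16,
     "skipped": 0, "errmsg": ""},
    {"table": "Address_Space_CPU_Utilization", "received": 1, "inserted": 0,
     "skipped": 1, "errmsg": "insert failed"},
]

# Flag any table where rows received did not all make it into the TDW.
problems = [r for r in warehouselog if r["received"] != r["inserted"]]
for r in problems:
    print(f"{r['table']}: {r['skipped']} skipped - {r['errmsg']}")
```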
Tuesday, July 10, 2012
TDW under the covers - continued
The TDW database itself may reside on a number of platforms, including Linux, UNIX, Windows and z/OS. The other required infrastructure is the Tivoli Warehouse Proxy process and the Summarization and Pruning agent process. These processes may run on Linux, UNIX, or Windows. Note that these processes do not run on native z/OS (although they could run on Linux on z). That means if you run the TDW on DB2 on z/OS, you will also need to have the Warehouse Proxy and Summarization/Pruning agent running on a separate platform, such as Linux, UNIX, or Windows.
So we've looked at what happens within the agent (TEMA) task when historical collection is started. You should see messages indicating collection has begun, and be able to see if data is being gathered, and for which tables. The next question is how does the historical data get from the TEMA to the Tivoli Data Warehouse (TDW)? That is the job of the Warehouse Proxy process.
When you define history collection you also specify if the history data is to go to the TDW, and if so, how often that data should be sent to the TDW (anywhere from once per day to every 15 minutes). Sending data to the TDW on a regular interval is usually preferable to sending large quantities of data only once or twice per day. This would avoid large bursts of activity to the TDW.
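To see why the interval matters, here's a rough back-of-the-envelope sketch of the burst sizes involved. The agent counts and row rates are made-up numbers for illustration only.

```python
# Rough burst-size arithmetic behind the interval recommendation.
# Assumed numbers: 500 agents, 100 history rows per agent per hour
# being collected for warehousing.
agents, rows_per_agent_per_hour = 500, 100
rows_per_hour = agents * rows_per_agent_per_hour  # 50,000

def rows_per_upload(interval_hours):
    """Rows the Warehouse Proxy must insert in one burst."""
    return rows_per_hour * interval_hours

hourly = rows_per_upload(1)    # modest, steady bursts
daily = rows_per_upload(24)    # one very large nightly burst
print(hourly, daily)
```

Even with these modest assumptions, a once-per-day upload means a single burst of over a million inserts, which is exactly the kind of spike a shorter interval smooths out.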
It's the job of the Warehouse Proxy process to gather data from the various agents, as specified by the TDW interval, and send the data to the TDW database. The TDW database consists of tables that correlate to the respective TEMA tables being gathered. If the required table does not already exist in the database, it's the job of the Warehouse Proxy to create the table. Here we see an example of the messages from the Warehouse Proxy log that document table creation for our Common Storage (COMSTOR) table:
(Friday, July 6, 2012, 10:17:31 AM-{1684}khdxdbex.cpp,1716,"createTable") "Common_Storage" - Table Successfully Created in Target Database
(Friday, July 6, 2012, 10:17:31 AM-{1684}khdxdbex.cpp,1725,"createTable") "Common_Storage" - Access GRANTed to PUBLIC
Once the Warehouse Proxy process has the table defined, you should be able to display the tables in DB2 (using tools like the DB2 Control Center), and issue SQL selects to see the data in the tables, as shown below:
In the above example we see the Common Storage table in DB2, we see the cardinality column which indicates the number of rows in the table, and we can also run SQL Selects against the table to display the contents. By doing this we can verify that the data is now flowing to the TDW.
Friday, July 6, 2012
Understanding what is happening under the covers with TDW
When you are starting up historical data collection for TDW there are several things that happen under the covers that you may need to be aware of.
Here I show an example of starting history collection for z/OS Common Storage (CSA) utilization. In this example we start by specifying the agent (in this example z/OS) and the attribute group to collect (meaning the table of information to gather). You will then specify the collection interval, and how often the data is to be sent to the Tivoli data warehouse (TDW). You then click on the distribution tab and select one or more managed systems to collect data from. When you click Apply, collection should begin and the icon to the left of the collection definition should turn green.
Once the collection definition is distributed, collection should be active and you should see a message in the RKLVLOG of the TEMA (meaning the agent address space). You should see a reference to the starting of a UADVISOR for the table you are collecting (in this example UADVISOR_KM5_COMSTOR). At this point collection should begin, assuming all the other required infrastructure is in place and operational.
To validate that collection is occurring at the TEMA level, there are some useful commands. One command to try is /F taskname,KPDCMD QUERY CONNECT (where taskname is the address space where the TEMA is running). This command will show the status of the various tables that could be collected by the agent, and how much data has been collected. The information will appear in the RKPDLOG output for the TEMA task (see example below):
QUERY CONNECT
Appl Table Active Group
Name Name Records Name Active Dataset Name
-------- ---------- -------- -------- ---------------------------
KM5 ASCPUUTIL 31019 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASCSOWN 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASREALSTOR 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASRESRC2 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASSUMRY 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASVIRTSTOR 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 BPXPRM2 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 CHNPATHS 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 COMSTOR 28 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
Note that data is being collected for the COMSTOR table (as indicated by the active record count). The next step in the process is to understand what is happening at the TDW and DB2 level. We will look at examples of this in later posts on this topic.
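If you want to scan that RKPDLOG output programmatically rather than by eye, here's a rough Python sketch. The simple whitespace split is an assumption that happens to hold for this sample output; real log lines may need more careful handling.

```python
# Sketch: scan QUERY CONNECT output for tables that are actually
# accumulating records. Columns: Appl Name, Table Name, Active
# Records, Group Name, Active Dataset Name.
sample = """\
KM5      ASCPUUTIL      31019 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5      ASCSOWN            0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5      COMSTOR           28 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
"""

active = {}
for line in sample.splitlines():
    appl, table, records, group, dataset = line.split()
    if int(records) > 0:
        active[table] = int(records)

print(active)  # tables with a nonzero active record count
```

Anything with a zero record count either isn't being collected or hasn't hit its first collection interval yet.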
Thursday, July 5, 2012
A handy use for the TACMD command
TACMD is a handy tool with a variety of options for managing IBM Tivoli Monitoring (ITM) configuration, setup and infrastructure. TACMD may be seen as a Swiss army knife type of tool, with lots of options and uses. For example you can use TACMD to manage, edit, delete, start, stop, export and import situations. You can use TACMD to start/stop agents, manage managed system lists, and more.
You can also use TACMD to list out, export, and import workspaces defined to a Tivoli Portal. Let's say, for example, you need to list out all the workspaces defined to the Tivoli Portal. You may want to do that to determine what, if any, user defined workspaces have been added to the Tivoli Portal, and what they are called. The TACMD LISTWORKSPACES command allows you to list out all the workspaces defined, and provides options to filter the display by application type or userid.
Here's an example of how this is done, and what the output may look like. In the example I show how you specify what portal server to access, and how you pass the userid and password (the one you would use to logon to the TEP) as part of the listworkspaces command. There's one more trick I show in the example. By adding >workspaces.txt to the end of the command, the command will direct its output to a text file called workspaces.txt. This is convenient since you can then search and edit the output of the command.
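Once you have workspaces.txt, a few lines of scripting can pull out just the entries you care about. This is a hypothetical sketch; the two-column layout (application type, then workspace name) assumed here may not match your actual listworkspaces output exactly.

```python
# Hypothetical post-processing of a workspaces.txt file produced by
# redirecting "tacmd listworkspaces" output. Layout is assumed:
# application type in column one, workspace name in the rest.
sample = """\
KM5    Address Space Overview
KM5    My Custom CSA Workspace
KC5    CICS Region Overview
"""

# Pull out the workspace names for one application type.
km5_workspaces = [line.split(None, 1)[1]
                  for line in sample.splitlines()
                  if line.startswith("KM5")]
print(km5_workspaces)
```

Filtering the listing this way makes it easy to compare the user-defined workspaces against what shipped with the product.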
TACMD LISTWORKSPACES is a convenient tool that allows you to identify what workspaces have been added and then plan for what workspaces you may need to backup, or import/export.
For more on TACMD, here's a link:
http://pic.dhe.ibm.com/infocenter/tivihelp/v15r1/index.jsp?topic=%2Fcom.ibm.itm.doc%2Fitm_cmdref35.htm
Friday, June 29, 2012
Some interesting upcoming webcasts
There are a couple interesting performance and systems management related webcasts coming up in July.
"DB2 10 Performance and Scalability – Top Tips to reduce costs “Out of the Box! " discusses the performance benefits of going to DB2 10. Learn how you can save considerable CPU resources and enjoy numerous I/O improvements with DB2 10. Most of these improvements are enabled even in Conversion Mode, and available right “out of the box".
The speaker is Jeffery Berger, DB2 Performance, Senior Software Engineer, IBM Software Group
Broadcast Date: July 10, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/N3c13W
"Increase enterprise productivity with enhanced System Automation for z/OS" covers recent enhancements in System Automation for z/OS.
The speaker is Uwe Gramm, Tivoli System Automation Product Manager, IBM Software Group.
Broadcast Date: July 12, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/MnlUGC
Thursday, June 21, 2012
OMEGAMON DB2 and DB2 10
This is a question that has come up multiple times with customers of mine just in the past week or so. The question? If I'm running OMEGAMON XE for DB2 PM/PE V4.20 will it support DB2 10? The answer is no. To support DB2 10 you need to install OMEGAMON DB2 V5.1 or the newly released V5.1.1. V4.20, while still a supported release, will not be retro-fitted to support DB2 10.
Until recently this has not been much of an issue, but now customers are starting to make the move to DB2 10. So it's important to realize that if DB2 10 is in your plans, now is the time to start planning to upgrade your OMEGAMON DB2 to either the V5.1 or V5.1.1 level (either one will do).
Wednesday, June 20, 2012
More chances to take OMEGAMON V5.1 for a test drive!
If you haven't yet had a chance to get your hands on the new OMEGAMON enhanced 3270 user interface, here's another chance to try it out. We will be doing the OMEGAMON V5.1 test drive events in some more cities. It will be a chance for you to get hands-on usage of the tool in a live z/OS environment.
Here's the agenda for the events:
09:00 Registration & Breakfast
09:15 What’s New in OMEGAMON with Enhanced 3270 User Interface
10:00 Exploring OMEGAMON with hands-on Exercises on Live System z
Five Lab Test Drives at your choice:
e-3270 Intro, OM zOS v510, OM CICS v510, TEP, TEP-Advanced
11:45 Q&A, Feedback Summary
12:00 Lunch & Learn – Next Generation Performance Automation
13:00 Closing
The OMEGAMON test drives will happen in the following cities:
San Francisco - 425 Market St (20th flr) - June 26th
Sacramento - 2710 S Gateway Oaks Dr (2nd flr) - June 28th
Costa Mesa - 600 Anton Blvd (2nd flr) - July 10th
Phoenix - 2929 N Central Ave (5th flr) - July 12th
Seattle - 1200 Fifth Ave (9th flr) - July 19th
Whether you are a current OMEGAMON customer or just want to learn more about OMEGAMON, feel free to attend. The event is free. To attend, email my colleague Tony Anderson andersan@us.ibm.com or call (415)-545-2478.
Tuesday, June 19, 2012
IBM reclaims the #1 spot for supercomputing
IBM's Sequoia has taken the top spot on the list of the world's fastest supercomputers. The US reclaims the #1 spot after being beaten by China two years ago.
Here's a link to an article with some more information:
Friday, June 15, 2012
OMEGAMON DB2 PM/PE V5.1.1 now GA
As of today OMEGAMON DB2 PM/PE V5.1.1 is now generally available. In case of any confusion, OMEGAMON DB2 V5.1 has been available for a while. OMEGAMON DB2 V5.1 came out to coincide with the release of DB2 10 (if you are going to DB2 10 you need OMEGAMON DB2 V5.1 - prior releases do not support DB2 10).
What's in V5.1.1 that wasn't in V5.1? Well, several things including support for items that have been standard in the other OMEGAMON V5.1s, like self describing agents (SDA) and ITM 6.23 support. Probably one of the most interesting new features is support for the new enhanced 3270 user interface. Now you can run the new enhanced UI for z/OS, CICS, and DB2 (or the Big 3 as we sometimes call them).
For more information on OMEGAMON DB2 5.1.1 and other tools updates, here's a link:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS212-135
Thursday, June 14, 2012
Load considerations for summarization and pruning
Some shops have had issues with summarization and pruning of TDW data taking an inordinate amount of time. When you are summarizing or pruning millions of rows of data, this may take some time.
It's always useful to keep in mind that TDW is, fundamentally, a database. That means you may need the usual tuning and tweaks needed for any database. As I've mentioned in prior posts, TDW has the potential to gather quite a bit of data, and become quite large, so database tuning may become a factor.
Here's a link for some suggestions for DB2:
http://www-304.ibm.com/support/docview.wss?uid=swg21596149&myns=swgtiv&mynp=OCSSZ8F3&mync=R
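As a general illustration of one common database tuning idea (not an OMEGAMON-specific procedure), pruning in smaller batches rather than one massive DELETE can keep a big prune from monopolizing locks and log space. Here's a sketch using SQLite, with made-up table and column names:

```python
import sqlite3

# Illustrative only: prune old detail rows in small batches rather
# than one huge DELETE. Table and column names are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DETAIL (ID INTEGER PRIMARY KEY, WRITETIME INTEGER)")
con.executemany("INSERT INTO DETAIL (WRITETIME) VALUES (?)",
                [(t,) for t in range(1000)])

CUTOFF, BATCH = 800, 100   # prune rows older than the cutoff, 100 at a time
total = 0
while True:
    cur = con.execute(
        "DELETE FROM DETAIL WHERE ID IN "
        "(SELECT ID FROM DETAIL WHERE WRITETIME < ? LIMIT ?)",
        (CUTOFF, BATCH))
    con.commit()            # commit each batch so locks/logs stay small
    if cur.rowcount == 0:
        break
    total += cur.rowcount

print(total)  # rows pruned
```

The Summarization and Pruning agent does its own batching internally, but the same principle applies when you are cleaning up TDW tables by hand.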
Wednesday, June 13, 2012
Using OMEGAMON IMS Application Trace with DBCTL transactions
I've posted before about using OMEGAMON IMS Application Trace to trace and analyze IMS transaction details. The Application Trace will show detailed, call by call level information, including DL/I calls, SQL calls, MQ calls, along with timing information and drill down call level detail.
In the past, I've shown examples illustrating the tool with IMS transactions, but what about CICS-DBCTL transactions? The good news is it's easy to use the Application Trace with DBCTL workload and it provides very detailed information on transaction timings and DL/I call activity.
Here I show an example. To trace DBCTL workload you specify the PSB to trace (in this example DFHSAM05). You specify the PSB and other parameters such as trace time and duration, and then start the trace. Once the trace is started, information is captured and then you see a list of DBCTL transactions (note the overview shows transaction code and CICS region).
To see detail for a specific transaction, you position the cursor and press F11. The detail display will show timing information, including elapsed time, CPU time, and time in database. You can also drill in another level and see DL/I call detail (including SSA, key feedback information and more).
The Application Trace facility has been improved tremendously, and provides very powerful analysis and detail not just of IMS transactions, but also CICS-DBCTL workload, as well.
Friday, June 8, 2012
Tivoli Data Warehouse planning spreadsheet
When you are looking at enabling the Tivoli Data Warehouse (TDW), some planning is in order. One of the more valuable planning tools available is the TDW warehouse load projection spreadsheet. I've posted on this tool in prior blog posts, but it's worth mentioning once more the availability of this tool.
At first glance the spreadsheet may appear a bit daunting. There are numerous tabs in the spreadsheet (many for agents you probably do not have). It's worth the time to at least briefly review the documentation that comes with the spreadsheet to learn how to hide information and agent types you are not interested in to make the graphics and data more legible.
Essentially, what the load projection tool does is it gives you a means to enter assumptions on numbers of agents, frequency of collection, tables of data to collect, and summarization/pruning to come up with space and load projections for sizing the TDW. As I've implied before, if you don't do some planning you may end up collecting a lot more data than you bargained for, and a lot of useless data becomes a headache and an impediment to using the TDW effectively.
Here's a link to the download the TDW load projection worksheet:
https://www-304.ibm.com/software/brandcatalog/ismlibrary/details?catalog.label=1TW10TM1Y
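For a feel of the arithmetic the spreadsheet is doing, here's a simplified sketch. Every number in it is an assumption for illustration; use the real spreadsheet for actual sizing.

```python
# Back-of-the-envelope version of what the load projection
# spreadsheet computes for one table's detailed data.
def daily_tdw_bytes(agents, rows_per_interval, intervals_per_day, row_bytes):
    """Detailed-data bytes landing in the TDW per day for one table."""
    return agents * rows_per_interval * intervals_per_day * row_bytes

# Assumed: 50 z/OS agents, 1 row per 15-minute interval (96/day),
# roughly 500 bytes per row.
per_day = daily_tdw_bytes(agents=50, rows_per_interval=1,
                          intervals_per_day=96, row_bytes=500)
per_year_gb = per_day * 365 / 1e9
print(per_day, round(per_year_gb, 2))
```

Run this for every table you plan to collect and the totals add up quickly, which is exactly why the 'turn it all on' approach gets expensive.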
Some potential z/OS maintenance requirements if running TEMS on z/OS 1.13
If you are running OMEGAMON (and therefore running a TEMS) on z/OS 1.13 you may run into a scenario where you would need to apply some maintenance to DFSMS. The problem would be fairly pronounced (the TEMS would fail to start) and you would see KLVST027 error messages in the RKLVLOG for the task.
Here's a link to a technote that describes the issue in more detail:
http://www-304.ibm.com/support/docview.wss?uid=swg21596240&myns=swgtiv&mynp=OCSS2JNN&mync=R
Friday, June 1, 2012
Getting started with Tivoli Data Warehouse
So you login to the Tivoli Portal, click the history collection icon, and request that some history be collected. You specify the tables to collect, define the collection interval, specify the warehouse option, and enable summarization and pruning. And finally you click the distribution tab and select the managed systems you want to collect history for. Click 'Apply' and notice the icon to the left of your definition entry turns green. See the example of the typical steps to enable history collection.
Everything's great and all is now operational. Right? Well, maybe.
One area of confusion for users is that when you define history collection in the Tivoli Portal, it may look like everything is being collected and you should be getting data, but when you request the data in the Tivoli Portal what you get is no data at all or various SQL errors. You see the history icons and everything in the workspaces, but you don't get any data. The question then becomes, what do you do next?
The thing to keep in mind is that there is infrastructure at several levels that needs to be in place for the Tivoli Data Warehouse to function as advertised. To enable TDW you need the following infrastructure: history collection PDS's (persistent data stores) created for the TEMAs (i.e. the agents), warehouse and summarization/pruning processes installed and enabled with the proper drivers and definitions, and finally a target database to store the data (I usually use DB2). If any of these things is not in place, or not correctly configured, you will probably not get the history data you are looking for.
We will go through in subsequent posts things to look for when trying to debug TDW issues.
Wednesday, May 30, 2012
Adventures in TDW land
Tivoli Data Warehouse (TDW) is a very useful and powerful feature of the OMEGAMON suite and of Tivoli monitoring in general. Each Tivoli monitoring solution from Linux, UNIX, Windows to z/OS connects to the Tivoli infrastructure and may optionally send information to the Tivoli Data Warehouse.
When you enable TDW history collection you specify many options, including what tables of information to collect, how often to collect, what agent types and managed systems to collect from, and if summarization/pruning is required. While seemingly straightforward, each of these options has important considerations that may impact the usefulness of the resultant data being collected.
With many users there can be quite a few questions. Where to begin? What data should be collected? How should the data be retained and for how long? Where should the data be stored? How is the data to be used and by what audiences?
Planning and analysis are important to a successful implementation of the TDW. One approach that should be avoided is the 'turn it all on' strategy. The turn it all on approach will inevitably result in collecting more data than is needed, and that has multiple shortcomings. First, unnecessary data collection wastes space and resources. Second, it makes retrieving the information that is actually useful slower and more time-consuming.
As a general methodology, it is usually better to start small and work your way up when enabling TDW history collection. You can always dynamically enable more collection options, but weeding out large quantities of useless data can be a time-consuming exercise.
I will be doing a series of posts on TDW with the goal of documenting a best practices approach to enabling this portion of the tool.
Monday, May 21, 2012
Interested in IMS 12?
In June IBM will be performing a series of detailed technical webcasts with IMS 12 as the theme. The line-up is as follows:
June 26 - IMS 12
IMS 12 still offers the lowest cost per transaction and fastest DBMS performance, with significant enhancements to help modernize applications, enable interoperation and integration, streamline installation and management, and enable growth. These enhancements can also positively impact your bottom line.
June 27 - IMS Enterprise Suite (including IMS Explorer)
These integration solutions and tooling, part of the IMS SOA Integration Suite, support open integration technologies that enable new application development and extend access to IMS transactions and data. New features further simplify IMS application development tasks, and enable them to interoperate outside the IMS environment.
June 28 - IMS Tools Solutions Packs
With all the tools needed to support IMS databases now together in one package, many new features are available. You can reorganize IMS databases only when needed, improve IMS application performance and resource utilization — with faster end-to-end analysis of IMS transactions.
There will be a question-and-answer session at the end of each day's program.
These will be detailed, all-day technical sessions. If you are interested in attending, here's a link:
http://ibm.co/KsGilO
Friday, May 18, 2012
An interesting DB2 z/OS webcast
On June 12th there is an interesting webcast on what's new in DB2 z/OS and how it relates to using DB2 for operational analytics. "New DB2 for z/OS capabilities elevate operational analytics on IBM System z" covers the latest innovations with DB2 for z/OS that will help deliver on the operational analytic requirements of business. IBM experts will discuss new DB2 for z/OS capabilities that will help elevate analytic strategy on System z while reducing overhead.
It's a free webcast. The event is June 12, 2012 at 11:00 a.m. Eastern Daylight Time. If you are interested, here is a link:
http://ibm.co/JbKKWK
Thursday, May 17, 2012
Enabling the IMS Health Workspace in OMEGAMON IMS
OMEGAMON IMS V4.20 added quite a few new workspaces. You may have noticed a new one called the IMS Health workspace. It tracks a number of useful rate/workload counters on a single display, such as enqueue/dequeue rates, CPU rates, I/O rates, and transaction queue depth, making it a handy way to watch several key metrics at once.
But what if, when you go to the workspace, you don't get any information? There is probably a step you need to take to enable the workspace. Helpfully, the RKLVLOG for the OMEGAMON IMS TEMA task may show the following eyecatcher messages:
KIPDCI35W RKANPAR(KIPIFPRM) MEMBER NOT FOUND
KIPDCI38W IMS HEALTH COLLECTOR IS DISABLED; ADD IF1_HEALTH_COLLECTOR=ON IN RKANPAR(KIPIFPRM) TO ENABLE
If you see these messages, just follow the instructions: add the member with the parameter as shown in the eyecatcher, recycle the TEMA, and you are in business. Here's an example:
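The original post's screenshot of the member is no longer available, but going strictly by the message text above, a minimal RKANPAR member named KIPIFPRM needs just the single parameter:

```
IF1_HEALTH_COLLECTOR=ON
```

After adding the member, recycle the OMEGAMON IMS TEMA task so the parameter is picked up.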
Wednesday, May 16, 2012
Using SOAP to retrieve history
I've done some prior posts on using the SOAP interface. SOAP requests are usually used to retrieve real-time data from the monitoring infrastructure. SOAP can be used by a variety of Tivoli tools, and even technologies like System Automation may interface to Tivoli monitoring using SOAP. What you may not know is that SOAP can also be used to retrieve historical data.
To retrieve history data via SOAP you use the SOAP CT_GET request, and you need to include an additional tag that must be set to "Y". There are some subtleties to this process, but it is interesting to be aware that you have the capability.
For an example and more detail here is a link:
http://www-01.ibm.com/support/docview.wss?uid=swg21591533&myns=swgtiv&mynp=OCSSZ8F3&mync=R
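For illustration, here is a rough sketch of what such a request might look like, based on the general shape of ITM SOAP CT_Get requests. The `<history>` tag name, the attribute table, the target, and the userid shown here are my assumptions, so verify the exact syntax against the technote linked above:

```xml
<CT_Get>
  <userid>sysadmin</userid>            <!-- assumption: a TEPS/TEMS user with query authority -->
  <password>xxxxxxxx</password>
  <object>NT_Memory</object>           <!-- assumption: the attribute table you want to query -->
  <target>Primary:MYHOST:NT</target>   <!-- assumption: the managed system name -->
  <history>Y</history>                 <!-- requests historical rather than real-time rows -->
</CT_Get>
```

Without the history tag, the same CT_Get returns only the current real-time sample.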
Thursday, May 10, 2012
A couple recent questions
I've had a couple questions come up on the blog recently.
One question was about TACMD. For those of you who are not familiar with TACMD, it is a very powerful utility that you use to manage and configure your ITM monitoring infrastructure. You have commands that allow you to change settings, import or export information (like workspace panels), customize situations and much more. You can use TACMD to do things like create situations that you cannot create using the Tivoli Portal GUI interface alone. So TACMD is very powerful, but you need to know what you are doing when you use it.
The question was whether there are any issues I'm aware of with TACMD on Windows Server 2008. All I can say is that I have not run into any issues with TACMD on Win 2008, but I'm speaking only from my own experience.
Another question was related to NetView support in the Tivoli Portal. For those of you who do not know, NetView can connect to the Tivoli Portal and provides you with quite a bit of network information in the Portal.
The question was whether there is a charge for the NetView interface. The answer is no: it is provided as part of the tool. You just need to configure it.
Tuesday, May 1, 2012
Take the new OMEGAMON for a test drive!
If you haven't had a chance to get your hands on the new OMEGAMON enhanced 3270 user interface, now may be your chance. We will be doing a series of OMEGAMON V5.1 test drive events in the midwest this month. It will be a chance for you to get hands on usage of the tool in a live z/OS environment.
Here's the agenda for the events:
09:00 Registration & Breakfast
09:15 What’s New in OMEGAMON with Enhanced 3270 User Interface
10:00 Exploring OMEGAMON with hands-on Exercises on Live System z
Five Lab Test Drives at your choice:
e-3270 Intro, OM zOS v510, OM CICS v510, TEP, TEP-Advanced
11:45 Q&A, Feedback Summary
12:00 Lunch & Learn – Next Generation Performance Automation
13:00 Closing
The OMEGAMON test drives will happen in the following cities (with more likely to follow):
Chicago (at the IBM TEC center downtown) - May 8th
St Louis (at the IBM TEC center Hazelwood) - May 15th
Minneapolis (at the IBM office downtown) - May 23rd
Whether you are a current OMEGAMON customer or just want to learn more about OMEGAMON, feel free to attend. The event is free and we will be providing lunch. To attend, email my colleague Clifford Koch,
cjkoch@us.ibm.com or call (314)-409-2859.
Missing data from the WAREHOUSELOG table
The WAREHOUSELOG table is part of the Tivoli Data Warehouse (TDW) infrastructure. Its function is to provide an ongoing log of activity into the TDW. If data is being sent to the TDW, this log maintains a history of what data was sent, how much, and whether there were any errors along the way. It can be a useful source of information when debugging issues with data not making it into the TDW.
Well, with ITM 6.23 there is a change in how this table is created and used. As of ITM 6.23, there is a new variable, KHD_WHLOG_ENABLE, which enables or disables the creation of the warehouse log tables in order to save resources. The default is No (KHD_WHLOG_ENABLE=N).
So now the default as of ITM 6.23 is to not log warehouse activity. This is a change from how the tool used to work (by the way, I'm not a big fan of changes to defaults that alter how a tool has worked for years). So if you use TDW, and may on occasion need to look at the WAREHOUSELOG, you may want to double check this setting.
Here's a link to a technote on how this is set:
http://www-01.ibm.com/support/docview.wss?uid=swg21592074&myns=swgtiv&mynp=OCSSZ8F3&mync=R
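If you rely on the WAREHOUSELOG table, re-enabling logging is a one-line change in the Warehouse Proxy agent configuration. As a sketch, assuming the variable belongs in the Warehouse Proxy environment file (KHDENV on Windows, hd.ini on UNIX/Linux; check the technote above for your platform), you would set:

```
KHD_WHLOG_ENABLE=Y
```

and recycle the Warehouse Proxy agent for the change to take effect.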
Friday, April 27, 2012
Plot chart options in the Tivoli Portal
One of the nice features of the Tivoli Enterprise Portal is the ability to select specific monitoring data and plot that data over time. Some things are easy to plot in that they are single metric numbers (like LPAR CPU percent, for example). Other things may be a little trickier. For example what if you want to plot the CPU usage of multiple address spaces and do it within a single plot chart?
Here's an example of how you can do it. First, set up your plot chart like you usually would (click and drag the plot chart icon from the tool bar to the desired spot on the workspace). Next, use the filter options in properties to specify what you want to plot. In this example we are plotting all the tasks that start with DEMO in the name. Then click on the Style tab and click on the chart icon.
Now here's the magic. Notice to the right there is a button that says "Attributes across all rows". Click this button. This will enable the plot for each task named DEMO* in a single plot chart.
Here's an example of the options, and the finished plot chart on the lower left.
Wednesday, April 25, 2012
Independent blog analysis of OMEGAMON V5.1
I've posted quite a bit over the past two months about OMEGAMON z/OS and CICS V5.1. If you're interested in an analysis from an independent (meaning not IBM) source, check out the Ptak/Noel blog. Their post, "IBM OMEGAMON V5.1 = Good reasons for Customer Interest and Excitement", gives a good overview of the key features offered in V5.1.
If you want to check out their analysis, here's a link:
http://ptaknoel.com/ibm-omegamon-v5-1-good-reasons-for-customer-interest-and-excitement/
Thursday, April 19, 2012
Upcoming webcast on z/OS storage management
There will be a webcast next week entitled "Improve the management of your growing mainframe storage environments".
Topics covered will include using a common set of tools across many storage systems, tailoring alerts, automating tasks, preplanning corrective actions, and supporting resource management through an enterprise-wide view of storage infrastructure.
The event is April 26th at 11 AM ET. If you are interested in attending, here's a link:
http://ibm.co/xYrbIn
Tuesday, April 17, 2012
A couple informational events worth checking out
There are a couple informational events coming up that you may find interesting.
The first is a webcast, "What’s new with IBM workload automation?", presented by Flora Tramantano, Tivoli Workload Automation Product Manager. The presentation will discuss Tivoli Workload Scheduler for z/OS, how it uses business policies to aggregate and centrally manage cross-enterprise workloads, and how heterogeneous workloads may be managed to support business goals and service levels.
The event is May 17, 2012, 11 a.m., Eastern Daylight Time. To attend you can register at:
http://ibm.co/Hd7XaV
The second event is a DB2 Technical Update Briefing, which will include discussions on:
- Lessons learned in planning your migration to DB2 10 for immediate CPU savings
- Using DB2 10 to scale-up or scale-out simply, and with less system management
- New capabilities in protecting your data with encryption, obfuscation, audit, security, compliance and vulnerability assessment of data
- Major, new enhancements in DB2 Utilities, for DB2 10 and improvements to performance and availability
The event is April 18th in Nashville, TN. So if you are in the Nashville area, it's worth checking out. Here's a link to attend.
http://ibm.co/IfvxEs
Friday, April 13, 2012
Added links for all the videos
For convenience I added links for all the videos on the right side of the blog.
More Youtube videos on the new enhanced 3270 user interface
Here are some more YouTube videos that show the OMEGAMON V5.1 enhanced 3270 user interface in action. These videos go through several common usage scenarios and are worth a look. Here are the links:
Navigate between CICS regions within the same CICSplex
http://www.youtube.com/watch?v=WUlL0ZU4ZYE&feature=related
Locate and purge a task that is causing a CICS SOS condition
http://www.youtube.com/watch?v=5fKxtVdoayg&feature=relmfu
Detect and diagnose problems with CICS Web Services.
http://www.youtube.com/watch?v=a-KxKwVgLMg&feature=relmfu
Using the Filter and Find in CICS
http://www.youtube.com/watch?v=lib57ZfVupw&feature=relmfu
Edit Resource Limiting Rules for a CICS region.
http://www.youtube.com/watch?v=KM4eeOHhEcw&feature=relmfu