There is an upcoming webcast on the new release of OMEGAMON for DB2, V5.1.1. The webcast will cover the capabilities of the tool, including: advanced problem determination using focused scenarios designed by customers; combining OMEGAMON DB2 information with CICS and z/OS in the new enhanced 3270 workspaces; improved efficiency, with fewer screen interactions needed to find the root cause of a performance problem in real time; and additional end-to-end response time measurement capability.
The speaker is Steve Fafard, Product Manager, OMEGAMON for DB2. The event will be held August 9, 2012, at 11 a.m. EDT.
The price is right. The webcast is a free event. Here's a link to sign up:
http://ibm.co/Nt3HIq
Monday, July 30, 2012
Friday, July 27, 2012
Tivoli Data Warehouse - Summarization And Pruning considerations
We've covered quite a bit on the Tivoli Data Warehouse (TDW) and what happens when you initialize history collection for a given table.
Once history has been collecting for an interval of time, you may want to look at using summarization and pruning to control the quantity of data being retained, and also to take advantage of the ability to use summarized data for trending purposes.
Below is an example of the configuration options for summarization and pruning. As part of the earlier exercise we enabled TDW history collection for Common Storage. Now we are going to enable summarization and pruning for this data.
In the above example, we are enabling pruning on detailed data by specifying that we will keep 30 days of detail. Detail beyond that point will be removed by the Summarization and pruning agent process. Also, the detail data will be summarized, both on an hourly and on a daily basis. We will be keeping 90 days of hourly data, and one year of daily data.
By specifying summarization and pruning, we can control the quantity of data being retained within the TDW. This can be very important, especially for high volume tables of data or tables being collected for a large number of managed systems.
Another consideration to be aware of is that when you specify summarization, the summarization agent will create another set of tables. In the above example, the agent will have DB2 create a table for hourly data and another table to hold daily data.
This is an area of confusion for some users. I have encountered shops that thought they were going to save space right away by turning on summarization. However, when they enabled the function, the TDW database ended up requiring more space. Why? Because the summarization agent had to create the additional tables to store the summarized data.
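If you want to verify what summarization is actually keeping, a quick row count across the detail and summary tables tells the story. The sketch below is illustrative only: it assumes a DB2 warehouse using the common ITMUSER schema and the usual naming convention, where the hourly and daily summary tables carry _H and _D suffixes; adjust the names to match your own TDW.
-- Illustrative sketch: schema and table names are assumptions based on
-- common defaults (ITMUSER schema, _H and _D suffixes for summary tables).
SELECT 'Detail' AS granularity, COUNT(*) AS row_count
  FROM ITMUSER."Common_Storage"
UNION ALL
SELECT 'Hourly', COUNT(*)
  FROM ITMUSER."Common_Storage_H"
UNION ALL
SELECT 'Daily', COUNT(*)
  FROM ITMUSER."Common_Storage_D";
Comparing the three counts over time also shows the pruning settings taking effect, as the detail count levels off while the summary tables keep growing.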
Wednesday, July 25, 2012
Some things to be aware of when installing OMEGAMON DB2 V5.1.1
In addition to introducing support for the new enhanced 3270 user interface, there are some infrastructure considerations you should be aware of with OMEGAMON DB2 V5.1.1. In a similar fashion to OMEGAMON z/OS and CICS V5.1, OMEGAMON DB2 V5.1.1 introduces support for self-describing agents (SDA). SDA support eliminates the need for the application support media when defining the agent to the Tivoli monitoring infrastructure.
Another thing to be aware of is that the OMEGAMON DB2 agent (i.e., the TEMA) has been re-architected in V5.1.1. Because of this, there are some additional steps you may need to be aware of when deploying OMEGAMON DB2 V5.1.1. It is important that you review the documentation in the V5.1.1 PSP bucket and look at the relevant readme information. Here's a link to the PSP information:
http://www-304.ibm.com/support/docview.wss?uid=isg1_5655W37_HKDB511
If you overlook this step, you may be missing some relevant information, in particular data sharing related information.
Thursday, July 19, 2012
OMEGAMON DB2 V5.1.1 features the new enhanced 3270 user interface
OMEGAMON DB2 V5.1.1 was recently released. There were several interesting enhancements and additions to the tool. One of the more interesting new features is support for the new enhanced 3270 user interface. With the addition of OMEGAMON DB2 to the fold, three core OMEGAMONs (z/OS, CICS, DB2) now support the enhanced 3270 UI.
One of the nice aspects of the enhanced UI is the ability to do your monitoring using an integrated interface that lets you monitor from a single point of view. Here is an example of the default KOBSTART panel, now showing DB2 information on the display.
From this central screen, as the example shows, you can easily navigate to z/OS, CICS or DB2.
From the main panel (KOBSTART), you can then drill down into DB2 specific detailed views, and drill in on commonly used displays, such as DB2 thread views. Here is an example of how you would drill in on DB2 threads.
From there you can drill in on detail for each thread executing on the system and analyze what the threads are doing and where they are spending their time.
Thursday, July 12, 2012
More TDW under the covers
Now that data on z/OS Common Storage utilization appears to be flowing to the TDW, be aware that there is information readily available to track, on an ongoing basis, the amount of data flowing to the TDW and to show indications of potential issues with TDW history collection.
In the Tivoli Portal there are workspaces that show the status of the Warehouse Proxy, and also track history row processing counts and history collection error counts. The example below shows the Warehouse Proxy statistics workspace.
The rows statistics chart on the workspace shows the number of rows sent to the Warehouse Proxy and the number of rows inserted by the Warehouse Proxy into the TDW database. Note there is a marked difference in this example between the number of rows sent and the number of rows inserted. This indicates a potential issue where data is being sent to the Warehouse Proxy, but for some reason the data is not making it into the TDW database. Note also in the Failures/Disconnections chart there is a count showing failures with the TDW.
Now that we see there is a potential issue with data making its way into the TDW database, how do you get more information on what the problem may be? First, you can look at the log for the Warehouse Proxy process. Also, you can look at the Warehouselog table in the TDW database (assuming you have it enabled - see my earlier blog post noting that this is not on by default in ITM 6.23).
In the following example, I show information selected from the Warehouselog table in the TDW. The Warehouselog table shows row statistics and error message information (if applicable) for each attempted insert to the TDW.
In the example we see that the Common Storage table has 16 rows received and 16 rows inserted. However, the Address Space CPU Utilization table seems to have an issue, with one row received and one row skipped, and also an error message. Analysis of the error message would give some indication as to why the TDW is having an issue.
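For shops that prefer to check this outside the portal, here is a minimal SQL sketch against the Warehouselog table. The column names (ROWSRECEIVED, ROWSINSERTED, ROWSSKIPPED, ERRORMSG) and the ITMUSER schema are assumptions based on a typical installation, so verify them against your own WAREHOUSELOG layout before relying on it.
-- Illustrative only: column and schema names are assumed from a typical
-- install; confirm them in your own TDW before relying on this query.
SELECT ORIGINNODE, OBJECT, EXPORTTIME,
       ROWSRECEIVED, ROWSINSERTED, ROWSSKIPPED, ERRORMSG
  FROM ITMUSER.WAREHOUSELOG
 WHERE ROWSSKIPPED > 0
 ORDER BY EXPORTTIME DESC
 FETCH FIRST 20 ROWS ONLY;
Any row returned points at a table where inserts are being skipped, and the error message column usually narrows down why.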
As you can see there is quite a bit of information available to track the status and activity of TDW, and it is worth checking this information on an ongoing basis to ensure history collection is proceeding normally.
Tuesday, July 10, 2012
TDW under the covers - continued
The TDW database itself may reside on a number of platforms, including Linux, UNIX, Windows and z/OS. The other required infrastructure is the Tivoli Warehouse Proxy process and the Summarization and Pruning agent process. These processes may run on Linux, UNIX, or Windows. Note that these processes do not run on native z/OS (although they could run on Linux on z). That means if you run the TDW on DB2 on z/OS, you will also need to have the Warehouse Proxy and Summarization/Pruning agent running on a separate platform, such as Linux, UNIX, or Windows.
So we've looked at what happens within the agent (TEMA) task when historical collection is started. You should see messages indicating collection has begun, and be able to see if data is being gathered, and for which tables. The next question is how does the historical data get from the TEMA to the Tivoli Data Warehouse (TDW)? That is the job of the Warehouse Proxy process.
When you define history collection you also specify if the history data is to go to the TDW, and if so, how often that data should be sent to the TDW (anywhere from once per day to every 15 minutes). Sending data to the TDW on a regular interval is usually preferable to sending large quantities of data only once or twice per day. This would avoid large bursts of activity to the TDW.
It's the job of the Warehouse Proxy process to gather data from the various agents, as specified by the TDW interval, and send the data to the TDW database. The TDW database consists of tables that correlate to the respective TEMA tables being gathered. If the required table does not already exist in the database, it's the job of the Warehouse Proxy to create the table. Here we see an example of the messages from the Warehouse Proxy log that document table creation for our Common Storage (COMSTOR) table:
(Friday, July 6, 2012, 10:17:31 AM-{1684}khdxdbex.cpp,1716,"createTable") "Common_Storage" - Table Successfully Created in Target Database
(Friday, July 6, 2012, 10:17:31 AM-{1684}khdxdbex.cpp,1725,"createTable") "Common_Storage" - Access GRANTed to PUBLIC
Once the Warehouse Proxy process has the table defined, you should be able to display the tables in DB2 (using tools like the DB2 Control Center), and issue SQL selects to see the data in the tables, as shown below:
In the above example we see the Common Storage table in DB2, we see the cardinality column which indicates the number of rows in the table, and we can also run SQL Selects against the table to display the contents. By doing this we can verify that the data is now flowing to the TDW.
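If you don't have the Control Center handy, the same verification can be done with a couple of SQL statements. This is a sketch that assumes a DB2 LUW warehouse and the ITMUSER schema; also note that the cardinality figure in the catalog is only as current as the last RUNSTATS.
-- Check that the table exists and roughly how many rows the catalog thinks it has
SELECT TABNAME, CARD
  FROM SYSCAT.TABLES
 WHERE TABSCHEMA = 'ITMUSER'
   AND TABNAME   = 'Common_Storage';
-- Look at the most recently warehoused rows (WRITETIME is the TDW timestamp column)
SELECT *
  FROM ITMUSER."Common_Storage"
 ORDER BY WRITETIME DESC
 FETCH FIRST 10 ROWS ONLY;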
Friday, July 6, 2012
Understanding what is happening under the covers with TDW
When you are starting up historical data collection for TDW there are several things that happen under the covers that you may need to be aware of.
Here I show an example of starting history collection for z/OS Common Storage (CSA) utilization. In this example we start by specifying the agent (in this example z/OS) and the attribute group to collect (meaning the table of information to gather). You will then specify the collection interval, and how often the data is to be sent to the Tivoli data warehouse (TDW). You then click on the distribution tab and select one or more managed systems to collect data from. When you click Apply, collection should begin and the icon to the left of the collection definition should turn green.
Once the collection definition is distributed, collection should be active and you should see a message in the RKLVLOG of the TEMA (meaning the agent address space). You should see a reference to the starting of a UADVISOR for the table you are collecting (in this example UADVISOR_KM5_COMSTOR). At this point collection should begin, assuming all the other required infrastructure is in place and operational.
To validate that collection is occurring at the TEMA level, there are some useful commands. One command to try is /F taskname,KPDCMD QUERY CONNECT (the taskname is the address space where the TEMA is running). This command shows the status of the various tables that could be collected by the agent and how much data has been collected. The information will appear in the RKPDLOG output for the TEMA task (see the example below):
QUERY CONNECT
Appl Table Active Group
Name Name Records Name Active Dataset Name
-------- ---------- -------- -------- ---------------------------
KM5 ASCPUUTIL 31019 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASCSOWN 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASREALSTOR 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASRESRC2 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASSUMRY 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 ASVIRTSTOR 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 BPXPRM2 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 CHNPATHS 0 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
KM5 COMSTOR 28 LPARDATA OMEG.OMXE510.ADCD.RKM5LPR2
Note that data is being collected for the COMSTOR table (as indicated by the active record count). The next step in the process is to understand what is happening at the TDW and DB2 level. We will look at examples of this in later posts on this topic.
Thursday, July 5, 2012
A handy use for the TACMD command
TACMD is a handy tool with a variety of options for managing IBM Tivoli Monitoring (ITM) configuration, setup, and infrastructure. TACMD may be seen as a Swiss Army knife type of tool, with lots of options and uses. For example, you can use TACMD to manage, edit, delete, start, stop, export, and import situations. You can use TACMD to start and stop agents, manage managed system lists, and more.
You can also use TACMD to list, export, and import workspaces defined to a Tivoli Portal. Let's say, for example, you need to list all the workspaces defined to the Tivoli Portal. You may want to do that to determine what user-defined workspaces, if any, have been added to the Tivoli Portal, and what they are called. The TACMD LISTWORKSPACES command allows you to list all the workspaces defined, and provides options to filter the display by application type or userid.
Here's an example of how this is done, and what the output may look like. In the example I show how you specify which portal server to access, and how you pass the userid and password (the same ones you would use to log on to the TEP) as part of the listworkspaces command. There's one more trick I show in the example. By adding >workspaces.txt to the end of the command, the command will direct its output to a text file called workspaces.txt. This is convenient since you can then search and edit the output of the command.
TACMD LISTWORKSPACES is a convenient tool that allows you to identify what workspaces have been added and then plan for what workspaces you may need to backup, or import/export.

For more on TACMD, here's a link:
http://pic.dhe.ibm.com/infocenter/tivihelp/v15r1/index.jsp?topic=%2Fcom.ibm.itm.doc%2Fitm_cmdref35.htm
Friday, June 29, 2012
Some interesting upcoming webcasts
There are a couple of interesting performance and systems management related webcasts coming up in July.
"DB2 10 Performance and Scalability – Top Tips to reduce costs “Out of the Box! " discusses the performance benefits of going to DB2 10. Learn how you can save considerable CPU resources and enjoy numerous I/O improvements with DB2 10. Most of these improvements are enabled even in Conversion Mode, and available right “out of the box".
The speaker is Jeffery Berger, DB2 Performance, Senior Software Engineer, IBM Software Group.
Broadcast Date: July 10, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/N3c13W
"Increase enterprise productivity with enhanced System Automation for z/OS" covers recent enhancements in System Automation for z/OS.
The speaker is Uwe Gramm, Tivoli System Automation Product Manager, IBM Software Group.
Broadcast Date: July 12, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/MnlUGC
"DB2 10 Performance and Scalability – Top Tips to reduce costs “Out of the Box! " discusses the performance benefits of going to DB2 10. Learn how you can save considerable CPU resources and enjoy numerous I/O improvements with DB2 10. Most of these improvements are enabled even in Conversion Mode, and available right “out of the box".
The speaker is Jeffery Berger, DB2 Performance, Senior Software Engineer, IBM Software Group
Broadcast Date: July 10, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/N3c13W
"Increase enterprise productivity with enhanced System Automation for z/OS" covers recent enhancements in System Automation for z/OS.
The speaker is Uwe Gramm, Tivoli System Automation Product Manager, IBM Software Group.
Broadcast Date: July 12, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/MnlUGC
Thursday, June 21, 2012
OMEGAMON DB2 and DB2 10
This is a question that has come up multiple times with customers of mine just in the past week or so. The question? If I'm running OMEGAMON XE for DB2 PM/PE V4.2.0, will it support DB2 10? The answer is no. To support DB2 10 you need to install OMEGAMON DB2 V5.1 or the newly released V5.1.1. V4.2.0, while still a supported release, will not be retrofitted to support DB2 10.
Until recently this has not been much of an issue, but now customers are starting to make the move to DB2 10. So it's important to realize that if DB2 10 is in your plans, now is the time to start planning to upgrade your OMEGAMON DB2 to either the V5.1 or V5.1.1 level (either one will do).
Wednesday, June 20, 2012
More chances to take OMEGAMON V5.1 for a test drive!
If you haven't yet had a chance to get your hands on the new OMEGAMON enhanced 3270 user interface, here's another chance to try it out. We will be doing the OMEGAMON V5.1 test drive events in some more cities. It will be a chance for you to get hands-on usage of the tool in a live z/OS environment.
Here's the agenda for the events:
09:00 Registration & Breakfast
09:15 What’s New in OMEGAMON with Enhanced 3270 User Interface
10:00 Exploring OMEGAMON with hands-on Exercises on Live System z
Five Lab Test Drives at your choice:
e-3270 Intro, OM zOS v510, OM CICS v510, TEP, TEP-Advanced
11:45 Q&A, Feedback Summary
12:00 Lunch & Learn – Next Generation Performance Automation
13:00 Closing
The OMEGAMON test drives will happen in the following cities:
San Francisco - 425 Market St (20th flr) - June 26th
Sacramento - 2710 S Gateway Oaks Dr (2nd flr) - June 28th
Costa Mesa - 600 Anton Blvd (2nd flr) - July 10th
Phoenix - 2929 N Central Ave (5th flr) - July 12th
Seattle - 1200 Fifth Ave (9th flr) - July 19th
Whether you are a current OMEGAMON customer or just want to learn more about OMEGAMON, feel free to attend. The event is free. To attend, email my colleague Tony Anderson andersan@us.ibm.com or call (415)-545-2478.
Tuesday, June 19, 2012
IBM reclaims the #1 spot for supercomputing
IBM's Sequoia has taken the top spot on the list of the world's fastest supercomputers. The US reclaims the #1 spot after being beaten by China two years ago.
Here's a link to an article with some more information:
Friday, June 15, 2012
OMEGAMON DB2 PM/PE V5.1.1 now GA
As of today OMEGAMON DB2 PM/PE V5.1.1 is now generally available. In case of any confusion, OMEGAMON DB2 V5.1 has been available for a while. OMEGAMON DB2 V5.1 came out to coincide with the release of DB2 10 (if you are going to DB2 10 you need OMEGAMON DB2 V5.1 - prior releases do not support DB2 10).
What's in V5.1.1 that wasn't in V5.1? Well, several things including support for items that have been standard in the other OMEGAMON V5.1s, like self describing agents (SDA) and ITM 6.23 support. Probably one of the most interesting new features is support for the new enhanced 3270 user interface. Now you can run the new enhanced UI for z/OS, CICS, and DB2 (or the Big 3 as we sometimes call them).
For more information on OMEGAMON DB2 5.1.1 and other tools updates, here's a link:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS212-135
Thursday, June 14, 2012
Load considerations for summarization and pruning
Some shops have had issues with summarization and pruning of TDW data taking an inordinate amount of time. When you are summarizing or pruning millions of rows of data, it may take a while.
It's always useful to keep in mind that TDW is, fundamentally, a database. That means you may need the usual tuning and tweaks needed for any database. As I've mentioned in prior posts, TDW has the potential to gather quite a bit of data, and become quite large, so database tuning may become a factor.
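As a purely illustrative example of the kind of routine database maintenance that helps here, the statements below gather fresh statistics and reorganize one of the larger summary tables from a DB2 command line. The schema and table names are assumptions based on common defaults; the technote linked below has IBM's specific recommendations.
-- Illustrative sketch (DB2 command line); names assume the ITMUSER schema
-- and the _H suffix convention for hourly summary tables.
RUNSTATS ON TABLE ITMUSER."Common_Storage_H"
  WITH DISTRIBUTION AND DETAILED INDEXES ALL;
REORG TABLE ITMUSER."Common_Storage_H";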
Here's a link for some suggestions for DB2:
http://www-304.ibm.com/support/docview.wss?uid=swg21596149&myns=swgtiv&mynp=OCSSZ8F3&mync=R
Wednesday, June 13, 2012
Using OMEGAMON IMS Application Trace with DBCTL transactions
I've posted before about using OMEGAMON IMS Application Trace to trace and analyze IMS transaction details. The Application Trace will show detailed, call by call level information, including DL/I calls, SQL calls, MQ calls, along with timing information and drill down call level detail.
In the past, I've shown examples illustrating the tool with IMS transactions, but what about CICS-DBCTL transactions? The good news is it's easy to use the Application Trace with DBCTL workload and it provides very detailed information on transaction timings and DL/I call activity.
Here I show an example. To trace DBCTL workload you specify the PSB to trace (in this example DFHSAM05). You specify the PSB and other parameters such as trace time and duration, and then start the trace. Once the trace is started, information is captured and then you see a list of DBCTL transactions (note the overview shows transaction code and CICS region).
To see detail for a specific transaction, you position the cursor and press F11. The detail display will show timing information, including elapsed time, CPU time, and time in database. You can also drill in another level and see DL/I call detail (including SSA, key feedback information and more).
The Application Trace facility has been improved tremendously, and provides very powerful analysis and detail not just of IMS transactions, but also CICS-DBCTL workload, as well.
Friday, June 8, 2012
Tivoli Data Warehouse planning spreadsheet
When you are looking at enabling the Tivoli Data Warehouse (TDW), some planning is in order. One of the more valuable planning tools available is the TDW warehouse load projection spreadsheet. I've posted about it in prior blog posts, but its availability is worth mentioning once more.
At first glance the spreadsheet may appear a bit daunting. There are numerous tabs in the spreadsheet (many for agents you probably do not have). It's worth the time to at least briefly review the documentation that comes with the spreadsheet to learn how to hide information and agent types you are not interested in to make the graphics and data more legible.
Essentially, the load projection tool gives you a means to enter assumptions about the number of agents, frequency of collection, tables of data to collect, and summarization/pruning settings, and it produces space and load projections for sizing the TDW. As I've implied before, if you don't do some planning you may end up collecting a lot more data than you bargained for, and a lot of useless data becomes a headache and an impediment to using the TDW effectively.
Here's a link to download the TDW load projection worksheet:
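To make the arithmetic concrete, consider a purely illustrative example (the numbers are assumptions, not measurements): a table collected every 15 minutes generates 96 samples per day per managed system; if each sample returns 50 rows of roughly 500 bytes, that is about 2.4 MB of detail data per managed system per day, close to 50 MB per day across 20 managed systems, and roughly 1.5 GB per month before summarization and pruning. The spreadsheet runs exactly this kind of calculation for every agent type and table you select.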
https://www-304.ibm.com/software/brandcatalog/ismlibrary/details?catalog.label=1TW10TM1Y
Some potential z/OS maintenance requirements if running TEMS on z/OS 1.13
If you are running OMEGAMON (and therefore running a TEMS) on z/OS 1.13 you may run into a scenario where you would need to apply some maintenance to DFSMS. The problem would be fairly pronounced (the TEMS would fail to start) and you would see KLVST027 error messages in the RKLVLOG for the task.
Here's a link to a technote that describes the issue in more detail:
http://www-304.ibm.com/support/docview.wss?uid=swg21596240&myns=swgtiv&mynp=OCSS2JNN&mync=R
Friday, June 1, 2012
Getting started with Tivoli Data Warehouse
So you log in to the Tivoli Portal, click the history collection icon, and request that some history be collected. You specify the tables to collect, define the collection interval, specify the warehouse option, and enable summarization and pruning. Finally, you click the distribution tab and select the managed systems you want to collect history for. Click 'Apply' and notice the icon to the left of your definition entry turns green. See the example of the typical steps to enable history collection.
Everything's great and all is now operational. Right? Well, maybe.
One area of confusion for users is that when you define history collection in the Tivoli Portal, it may look like everything is being collected and you should be getting data, but when you request the data in the Tivoli Portal what you get is no data at all or various SQL errors. You see the history icons and everything in the workspaces, but you don't get any data. The question then becomes, what do you do next?
The thing to keep in mind is that there is infrastructure at several levels that needs to be in place for the Tivoli Data Warehouse to function as advertised. To enable TDW you need the following infrastructure: history collection PDS's (persistent data stores) created for the TEMAs (i.e. the agents), warehouse and summarization/pruning processes installed and enabled with the proper drivers and definitions, and finally a target database to store the data (I usually use DB2). If any of these things is not in place, or not correctly configured, you will probably not get the history data you are looking for.
We will go through in subsequent posts things to look for when trying to debug TDW issues.
Wednesday, May 30, 2012
Adventures in TDW land
Tivoli Data Warehouse (TDW) is a very useful and powerful feature of the OMEGAMON suite and of Tivoli monitoring in general. Each Tivoli monitoring solution from Linux, UNIX, Windows to z/OS connects to the Tivoli infrastructure and may optionally send information to the Tivoli Data Warehouse.
When you enable TDW history collection you specify many options, including what tables of information to collect, how often to collect, what agent types and managed systems to collect from, and if summarization/pruning is required. While seemingly straightforward, each of these options has important considerations that may impact the usefulness of the resultant data being collected.
With many users there can be quite a few questions. Where to begin? What data should be collected? How should the data be retained and for how long? Where should the data be stored? How is the data to be used and by what audiences?
Planning and analysis is important to a successful implementation of the TDW. One approach that should be avoided is the 'turn it all on' strategy. The turn it all on approach will inevitably result in the user collecting more data than is needed and this has multiple shortcomings. First, unnecessary data collection wastes space and resources. Second, unnecessary data collection makes it slower and more time consuming to retrieve information that is useful.
As a general methodology, it is usually better to employ a start small, then work your way up approach to enabling TDW history collection. You can always dynamically enable more collection options, but weeding out large quantities of useless data may be a time consuming exercise.
I will be doing a series of posts on TDW with the goal of documenting a best practices approach to enabling this portion of the tool.
Monday, May 21, 2012
Interested in IMS 12?
In June IBM will be performing a series of detailed technical webcasts with IMS 12 as the theme. The line-up is as follows:
June 26 - IMS 12
IMS 12 still offers the lowest cost per transaction and the fastest DBMS, with significant enhancements to help modernize applications, enable interoperation and integration, streamline installation and management, and enable growth. These enhancements in IMS 12 can also positively impact your bottom line.
June 27 - IMS Enterprise Suite (including IMS Explorer)
These integration solutions and tooling, part of the IMS SOA Integration Suite, support open integration technologies that enable new application development and extend access to IMS transactions and data. New features further simplify IMS application development tasks, and enable them to interoperate outside the IMS environment.
June 28 - IMS Tools Solutions Packs
With all the tools needed to support IMS databases now together in one package, many new features are available. You can reorganize IMS databases only when needed, and improve IMS application performance and resource utilization with faster end-to-end analysis of IMS transactions.
There will be a question-and-answer session at the end of each day's program.
These will be detailed, all-day technical sessions. If you are interested in attending, here's a link:
http://ibm.co/KsGilO