Monday, July 30, 2012

Upcoming webcast on the new OMEGAMON DB2 V5.11

There is an upcoming webcast event on the new release of OMEGAMON DB2, OMEGAMON DB2 V5.11.  This webcast will cover the capabilities of the tool, including:  advanced problem determination using focused scenarios designed by customers, combining OMEGAMON DB2 information with CICS and z/OS in the new enhanced 3270 workspace, improved efficiency with fewer screen interactions to find the root cause of performance impact in real time, and additional end-to-end response time measurement capability.

The speaker is Steve Fafard, Product Manager, OMEGAMON for DB2.  The event will be August 9th, 2012, at 11 a.m. EDT. 

The price is right:  the webcast is a free event.  Here's a link to sign up:

http://ibm.co/Nt3HIq

Friday, July 27, 2012

Tivoli Data Warehouse - Summarization And Pruning considerations

We've covered quite a bit on the Tivoli Data Warehouse (TDW) and what happens when you initialize history collection for a given table. 

Once history has been collecting for an interval of time, you may want to look at using summarization and pruning to control the quantity of data being retained, and also to take advantage of summarized data for trending purposes. 

Below is an example of the configuration options for summarization and pruning.  As part of the earlier exercise we enabled TDW history collection for Common Storage.  Now we are going to enable summarization and pruning for this data. 



In the above example, we are enabling pruning on detailed data by specifying that we will keep 30 days of detail.  Detail beyond that point will be removed by the Summarization and Pruning agent process.  The detail data will also be summarized, both on an hourly and on a daily basis.  We will keep 90 days of hourly data and one year of daily data. 
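The Summarization and Pruning agent applies these retention windows internally, but to make the arithmetic concrete, here is a small Python sketch of the pruning decision.  The function name and the policy table are hypothetical; the retention values (30 days of detail, 90 days of hourly data, one year of daily data) are the ones from the example above.

```python
from datetime import datetime, timedelta

# Retention windows from the example configuration above (in days).
RETENTION_DAYS = {"detail": 30, "hourly": 90, "daily": 365}

def should_prune(row_timestamp, granularity, now=None):
    """Return True if a history row is older than its retention window."""
    now = now or datetime.utcnow()
    return (now - row_timestamp) > timedelta(days=RETENTION_DAYS[granularity])

# A detail row 31 days old is pruned; one 29 days old is kept.
now = datetime(2012, 7, 27)
print(should_prune(datetime(2012, 6, 26), "detail", now))  # True
print(should_prune(datetime(2012, 6, 28), "detail", now))  # False
```

The same check, with the larger windows, decides when hourly and daily summary rows are removed.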

By specifying summarization and pruning, we can control the quantity of data being retained within the TDW.  This can be very important, especially for high volume tables of data or tables being collected for a large number of managed systems. 

Another consideration to be aware of is that when you specify summarization, the Summarization and Pruning agent will create another set of tables.  In the above example, the agent will have DB2 create a table for hourly data and another table to hold daily data. 

This is an area of confusion for some users.  I have encountered shops that thought they would save space right away by turning on summarization.  However, when they enabled the function, the TDW database ended up requiring more space.  Why?  Because the summarization agent had to create the additional tables to store the summarized data.

Wednesday, July 25, 2012

Some things to be aware of when installing OMEGAMON DB2 V5.11

In addition to introducing support for the new enhanced 3270 user interface, there are some infrastructure considerations you should be aware of with OMEGAMON DB2 V5.11.  OMEGAMON DB2 V5.11, in a similar fashion to OMEGAMON z/OS and CICS V5.1, introduces support for self-describing agents (SDA).  SDA support eliminates the need for the application support media when defining the agent to the Tivoli monitoring infrastructure. 

Another thing to be aware of is that the OMEGAMON DB2 agent (i.e. the TEMA) has been re-architected in V5.11.  Because of this, there are some additional steps you may need to be aware of when deploying OMEGAMON DB2 V5.11.  It is important that you review the documentation in the V5.11 PSP bucket and look at the relevant readme information.  Here's a link to the PSP information:

http://www-304.ibm.com/support/docview.wss?uid=isg1_5655W37_HKDB511

If you overlook this step, you may be missing some relevant information, in particular data sharing related information.

Thursday, July 19, 2012

OMEGAMON DB2 V5.11 features the new enhanced 3270 user interface

OMEGAMON DB2 V5.11 was recently released.  There were several interesting enhancements and additions to the tool.  One of the more interesting new features is support for the new enhanced 3270 user interface.  With the addition of OMEGAMON DB2 to the fold, this means that three core OMEGAMONs (z/OS, CICS, DB2) now support the enhanced 3270 UI. 

One of the nice aspects of the enhanced UI is the ability to monitor from a single, integrated point of view.  Here is an example of the default KOBSTART panel, now showing DB2 information on the display.

From this central screen, as the example shows, you can easily navigate to z/OS, CICS or DB2. 

From the main panel (KOBSTART), you can drill down into DB2-specific detailed views and into commonly used displays, such as DB2 thread views.  Here is an example of how you would drill in on DB2 threads. 

From this view you can drill in on detail for each thread executing on the system, and analyze what the threads are doing and where they are spending their time.

Thursday, July 12, 2012

More TDW under the covers

Now that data on z/OS Common Storage utilization appears to be flowing to the TDW, be aware that information is readily available to track, on an ongoing basis, the amount of data flowing to the TDW, and to show indications of potential issues with TDW history collection.

In the Tivoli Portal there are workspaces that show the status of the Warehouse Proxy, and also track history row processing counts and history collection error counts.  The example below shows the Warehouse Proxy statistics workspace. 

The rows statistics chart on the workspace shows the number of rows sent to the Warehouse Proxy and the number of rows inserted by the Warehouse Proxy into the TDW database.  Note there is a marked difference in this example between the number of rows sent and the number of rows inserted.  This indicates a potential issue where data is being sent to the Warehouse Proxy, but due to some issue the data is not making it into the TDW database.  Note also in the Failures/Disconnections chart there is a count showing failures with the TDW.

Now that we see there is a potential issue with data making its way into the TDW database, how do you get more information on what the problem may be?  First, you can look at the log for the Warehouse Proxy process.  Also, you can look at the Warehouselog table in the TDW database (this assumes you have it enabled - see an earlier blog post I made on this not being on by default in ITM 6.23).

In the following example I show information selected from the Warehouselog table in the TDW.  The Warehouselog table shows row statistics and error message information (if applicable) for each attempted insert into the TDW. 

In the example we see that the Common Storage table has 16 rows received and 16 rows inserted.  However, the Address Space CPU Utilization table seems to have an issue, with one row received and one row skipped, along with an error message.  Analysis of the error message would give some indication as to why the TDW is having an issue.
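A query along the following lines will pull the same row statistics from the Warehouselog table.  This is a hedged sketch: the schema owner (ITMUSER here) is hypothetical, and the exact column names can vary by ITM release, so verify them against your own Warehouselog layout before running.

```sql
-- Sketch only: schema owner and column names are assumptions;
-- check your Warehouselog table definition first.
SELECT "OBJECT", "ROWSRECEIVED", "ROWSINSERTED",
       "ROWSSKIPPED", "ERRORMSG"
  FROM ITMUSER."WAREHOUSELOG"
 WHERE "ROWSSKIPPED" > 0;
```

Filtering on skipped rows, as above, is a quick way to surface only the inserts that had problems.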

As you can see there is quite a bit of information available to track the status and activity of TDW, and it is worth checking this information on an ongoing basis to ensure history collection is proceeding normally.

Tuesday, July 10, 2012

TDW under the covers - continued

The TDW database itself may reside on a number of platforms, including Linux, UNIX, Windows, and z/OS.  The other required infrastructure is the Tivoli Warehouse Proxy process and the Summarization and Pruning agent process.  These processes may run on Linux, UNIX, or Windows.  Note that these processes do not run on native z/OS (although they could run on Linux on z).  That means if you run the TDW on DB2 on z/OS, you will also need to have the Warehouse Proxy and Summarization/Pruning agent running on a separate platform, such as Linux, UNIX, or Windows. 

So we've looked at what happens within the agent (TEMA) task when historical collection is started. You should see messages indicating collection has begun, and be able to see if data is being gathered, and for which tables. The next question is how does the historical data get from the TEMA to the Tivoli Data Warehouse (TDW)? That is the job of the Warehouse Proxy process.

When you define history collection you also specify whether the history data is to go to the TDW, and if so, how often that data should be sent (anywhere from every 15 minutes to once per day).  Sending data to the TDW on a regular interval is usually preferable to sending large quantities of data only once or twice per day, since it avoids large bursts of activity against the TDW. 

It's the job of the Warehouse Proxy process to gather data from the various agents, as specified by the TDW interval, and send the data to the TDW database.  The TDW database consists of tables that correlate to the respective TEMA tables being gathered.  If the required table does not already exist in the database, it's the job of the Warehouse Proxy to create the table.  Here we see an example of the messages from the Warehouse Proxy log that document table creation for our Common Storage (COMSTOR) table:

(Friday, July 6, 2012, 10:17:31 AM-{1684}khdxdbex.cpp,1716,"createTable") "Common_Storage" - Table Successfully Created in Target Database

(Friday, July 6, 2012, 10:17:31 AM-{1684}khdxdbex.cpp,1725,"createTable") "Common_Storage" - Access GRANTed to PUBLIC

Once the Warehouse Proxy process has the table defined, you should be able to display the tables in DB2 (using tools like the DB2 Control Center), and issue SQL selects to see the data in the tables, as shown below:


In the above example we see the Common Storage table in DB2, we see the cardinality column which indicates the number of rows in the table, and we can also run SQL Selects against the table to display the contents.  By doing this we can verify that the data is now flowing to the TDW.
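For those without a GUI tool like the DB2 Control Center, the same verification can be done with a couple of SQL statements.  This is a sketch: the schema owner (ITMUSER) is an assumption, and the exact table name quoting depends on how the Warehouse Proxy created the table in your environment.

```sql
-- Sketch only: schema owner is an assumption; TDW table and
-- column names are mixed case, so keep the double quotes.
SELECT COUNT(*) FROM ITMUSER."Common_Storage";

SELECT * FROM ITMUSER."Common_Storage"
 FETCH FIRST 10 ROWS ONLY;
```

A growing row count across collection intervals is a simple confirmation that data is flowing into the TDW.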

Friday, July 6, 2012

Understanding what is happening under the covers with TDW

When you are starting up historical data collection for TDW there are several things that happen under the covers that you may need to be aware of. 

Here I show an example of starting history collection for z/OS Common Storage (CSA) utilization.  In this example we start by specifying the agent (in this example z/OS) and the attribute group to collect (meaning the table of information to gather).  You then specify the collection interval and how often the data is to be sent to the Tivoli Data Warehouse (TDW).  You then click on the Distribution tab and select one or more managed systems to collect data from.  When you click Apply, collection should begin and the icon to the left of the collection definition should turn green. 

Once the collection definition is distributed, collection should be active and you should see a message in the RKLVLOG of the TEMA (meaning the agent address space).  You should see a reference to the starting of a UADVISOR for the table you are collecting (in this example UADVISOR_KM5_COMSTOR).   At this point collection should begin, assuming all the other required infrastructure is in place and operational. 



To validate that collection is occurring at the TEMA level, there are some useful commands.  One command to try is /F taskname,KPDCMD QUERY CONNECT (where taskname is the address space where the TEMA is running).  This command shows the status of the various tables that could be collected by the agent and how much data has been collected.  The information appears in the RKPDLOG output for the TEMA task (see example below):

QUERY CONNECT



Appl  Table       Active   Group
Name  Name        Records  Name      Active Dataset Name
----  ----------  -------  --------  ---------------------------
KM5   ASCPUUTIL     31019  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   ASCSOWN           0  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   ASREALSTOR        0  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   ASRESRC2          0  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   ASSUMRY           0  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   ASVIRTSTOR        0  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   BPXPRM2           0  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   CHNPATHS          0  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2
KM5   COMSTOR          28  LPARDATA  OMEG.OMXE510.ADCD.RKM5LPR2

Note that data is being collected for the COMSTOR table (as indicated by the active record count).  The next step in the process is to understand what is happening at the TDW and DB2 level.  We will look at examples of this in later posts on this topic.

Thursday, July 5, 2012

A handy use for the TACMD command

TACMD is a handy tool with a variety of options for managing IBM Tivoli Monitoring (ITM) configuration, setup, and infrastructure.  TACMD may be seen as a Swiss army knife type of tool, with lots of options and uses.  For example, you can use TACMD to manage, edit, delete, start, stop, export, and import situations.  You can use TACMD to start/stop agents, manage managed system lists, and more. 

You can also use TACMD to list out, export, and import workspaces defined to a Tivoli Portal.  Let's say, for example, you need to list out all the workspaces defined to the Tivoli Portal.  You may want to do that to determine what, if any, user-defined workspaces have been added to the Tivoli Portal, and what they are called.  The TACMD LISTWORKSPACES command allows you to list out all the workspaces defined, and provides options to filter the display by application type or userid. 

Here's an example of how this is done, and what the output may look like.  In the example I show how you specify what portal server to access, and how you pass the userid and password (the one you would use to logon to the TEP) as part of the listworkspaces command.  There's one more trick I show in the example.  By adding >workspaces.txt to the end of the command, the command will direct its output to a text file called workspaces.txt.  This is convenient since you can then search and edit the output of the command. 
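The general shape of the command is sketched below.  Treat the option letters and host name as assumptions: they can differ by ITM release, so confirm the exact syntax with tacmd help listworkspaces in your environment before relying on it.

```
# Sketch only: option letters and host name are assumptions;
# run "tacmd help listworkspaces" to confirm the syntax for your release.
tacmd listworkspaces -s teps-host.example.com -u sysadmin -p password >workspaces.txt
```

The redirect at the end captures the (potentially long) workspace list in workspaces.txt, where it can be searched and edited.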

TACMD LISTWORKSPACES is a convenient tool that allows you to identify what workspaces have been added, and then plan for what workspaces you may need to back up, import, or export.

For more on TACMD, here's a link:
http://pic.dhe.ibm.com/infocenter/tivihelp/v15r1/index.jsp?topic=%2Fcom.ibm.itm.doc%2Fitm_cmdref35.htm