There are a couple of interesting performance and systems management webcasts coming up in July.
"DB2 10 Performance and Scalability – Top Tips to reduce costs “Out of the Box! " discusses the performance benefits of going to DB2 10. Learn how you can save considerable CPU resources and enjoy numerous I/O improvements with DB2 10. Most of these improvements are enabled even in Conversion Mode, and available right “out of the box".
The speaker is Jeffery Berger, Senior Software Engineer, DB2 Performance, IBM Software Group.
Broadcast Date: July 10, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/N3c13W
"Increase enterprise productivity with enhanced System Automation for z/OS" covers recent enhancements in System Automation for z/OS.
The speaker is Uwe Gramm, Tivoli System Automation Product Manager, IBM Software Group.
Broadcast Date: July 12, 2012 at 11:00 a.m. Eastern Daylight Time
Here's a link to sign up: http://ibm.co/MnlUGC
Friday, June 29, 2012
Thursday, June 21, 2012
OMEGAMON DB2 and DB2 10
This is a question that has come up multiple times with customers of mine just in the past week or so. The question? If I'm running OMEGAMON XE for DB2 PM/PE V4.20, will it support DB2 10? The answer is no. To support DB2 10 you need to install OMEGAMON DB2 V5.1 or the newly released V5.1.1. V4.20, while still a supported release, will not be retrofitted to support DB2 10.
Until recently this has not been much of an issue, but now customers are starting to make the move to DB2 10. So it's important to realize that if DB2 10 is in your plans, now is the time to start planning to upgrade your OMEGAMON DB2 to either the V5.1 or V5.1.1 level (either one will do).
Wednesday, June 20, 2012
More chances to take OMEGAMON V5.1 for a test drive!
If you haven't yet had a chance to get your hands on the new OMEGAMON enhanced 3270 user interface, here's another chance to try it out. We will be holding OMEGAMON V5.1 test drive events in several more cities. It will be a chance for you to get hands-on usage of the tool in a live z/OS environment.
Here's the agenda for the events:
09:00 Registration & Breakfast
09:15 What’s New in OMEGAMON with Enhanced 3270 User Interface
10:00 Exploring OMEGAMON with hands-on Exercises on Live System z
Your choice of five lab test drives:
e-3270 Intro, OM zOS v510, OM CICS v510, TEP, TEP-Advanced
11:45 Q&A, Feedback Summary
12:00 Lunch & Learn – Next Generation Performance Automation
13:00 Closing
The OMEGAMON test drives will happen in the following cities:
San Francisco - 425 Market St (20th flr) - June 26th
Sacramento - 2710 S Gateway Oaks Dr (2nd flr) - June 28th
Costa Mesa - 600 Anton Blvd (2nd flr) - July 10th
Phoenix - 2929 N Central Ave (5th flr) - July 12th
Seattle - 1200 Fifth Ave (9th flr) - July 19th
Whether you are a current OMEGAMON customer or just want to learn more about OMEGAMON, feel free to attend. The event is free. To attend, email my colleague Tony Anderson at andersan@us.ibm.com or call (415) 545-2478.
Tuesday, June 19, 2012
IBM reclaims the #1 spot in supercomputing
IBM's Sequoia has taken the top spot on the list of the world's fastest supercomputers, letting the US reclaim the #1 position after being beaten by China two years ago.
Here's a link to an article with some more information:
Friday, June 15, 2012
OMEGAMON DB2 PM/PE V5.1.1 now GA
As of today, OMEGAMON DB2 PM/PE V5.1.1 is generally available. To avoid any confusion: OMEGAMON DB2 V5.1 has been available for a while, and came out to coincide with the release of DB2 10 (if you are going to DB2 10 you need OMEGAMON DB2 V5.1 or later; prior releases do not support DB2 10).
What's in V5.1.1 that wasn't in V5.1? Several things, including support for items that have been standard in the other OMEGAMON V5.1 products, like self-describing agents (SDA) and ITM 6.23 support. Probably the most interesting new feature is support for the new enhanced 3270 user interface. Now you can run the new enhanced UI for z/OS, CICS, and DB2 (or the Big 3, as we sometimes call them).
For more information on OMEGAMON DB2 5.1.1 and other tools updates, here's a link:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS212-135
Thursday, June 14, 2012
Load considerations for summarization and pruning
Some shops have had issues with summarization and pruning of TDW data taking an inordinate amount of time. When you are summarizing or pruning millions of rows of data, it can take a while.
It's always useful to keep in mind that TDW is, fundamentally, a database. That means it may need the usual tuning and tweaks that any database does. As I've mentioned in prior posts, TDW has the potential to gather quite a bit of data and become quite large, so database tuning may become a factor.
Here's a link for some suggestions for DB2:
http://www-304.ibm.com/support/docview.wss?uid=swg21596149&myns=swgtiv&mynp=OCSSZ8F3&mync=R
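Beyond those suggestions, if you want a quick first look at which warehouse tables are the biggest (and therefore the likeliest drivers of long summarization and pruning runs), a simple catalog query can help. Below is a minimal sketch, assuming a DB2 LUW warehouse database and the ibm_db Python driver; the connection string, the WAREHOUS database name, and the ITMUSER schema are placeholders you would adjust for your site.

```python
# List the largest Tivoli Data Warehouse tables by row count.
# Assumptions: DB2 LUW warehouse (SYSCAT.TABLES catalog view) and the
# ibm_db driver; database, host, credentials, and schema are placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=WAREHOUS;HOSTNAME=tdwhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=itmuser;PWD=secret;", "", "")

sql = """
SELECT TABNAME, CARD, NPAGES
FROM   SYSCAT.TABLES
WHERE  TABSCHEMA = 'ITMUSER'   -- warehouse schema, site-specific
ORDER  BY CARD DESC
FETCH  FIRST 20 ROWS ONLY
"""

stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_assoc(stmt)
while row:
    # CARD is only as current as the last RUNSTATS, so treat it as a rough guide
    print(f"{row['TABNAME']:<50} rows={row['CARD']} pages={row['NPAGES']}")
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```

The tables at the top of that list are usually the ones worth a closer look when summarization and pruning run long.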
Wednesday, June 13, 2012
Using OMEGAMON IMS Application Trace with DBCTL transactions
I've posted before about using the OMEGAMON IMS Application Trace to trace and analyze IMS transaction details. The Application Trace shows detailed, call-by-call information, including DL/I calls, SQL calls, and MQ calls, along with timing information and drill-down call-level detail.
In the past, I've shown examples illustrating the tool with IMS transactions, but what about CICS-DBCTL transactions? The good news is it's easy to use the Application Trace with DBCTL workload and it provides very detailed information on transaction timings and DL/I call activity.
Here I show an example. To trace DBCTL workload you specify the PSB to trace (in this example DFHSAM05). You specify the PSB and other parameters such as trace time and duration, and then start the trace. Once the trace is started, information is captured and then you see a list of DBCTL transactions (note the overview shows transaction code and CICS region).
To see detail for a specific transaction, you position the cursor and press F11. The detail display will show timing information, including elapsed time, CPU time, and time in database. You can also drill in another level and see DL/I call detail (including SSA, key feedback information and more).
The Application Trace facility has been improved tremendously, and provides very powerful analysis and detail not just for IMS transactions, but for CICS-DBCTL workload as well.
Friday, June 8, 2012
Tivoli Data Warehouse planning spreadsheet
When you are looking at enabling the Tivoli Data Warehouse (TDW), some planning is in order. One of the more valuable planning tools available is the TDW warehouse load projection spreadsheet. I've posted on this tool in prior blog posts, but it's worth mentioning once more the availability of this tool.
At first glance the spreadsheet may appear a bit daunting. There are numerous tabs (many for agents you probably do not have). It's worth taking the time to at least briefly review the documentation that comes with the spreadsheet and learn how to hide the agent types you are not interested in; that makes the graphics and data much more legible.
Essentially, the load projection tool gives you a means to enter assumptions about the number of agents, the frequency of collection, the tables of data to collect, and the summarization/pruning settings, and it produces space and load projections for sizing the TDW. As I've implied before, if you don't do some planning you may end up collecting a lot more data than you bargained for, and a lot of useless data becomes a headache and an impediment to using the TDW effectively.
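As a back-of-the-envelope illustration of the kind of arithmetic involved (this is not the spreadsheet's exact formula, and every number below is a made-up assumption), a rough projection for a single attribute group might look like this:

```python
# Rough TDW sizing arithmetic for one attribute group -- illustrative only;
# the load projection spreadsheet uses per-attribute-group row sizes and
# far more detail. All values below are assumptions.
agents            = 50    # managed systems reporting this attribute group
rows_per_interval = 1     # rows each agent writes per collection interval
interval_minutes  = 15    # history collection interval
row_size_bytes    = 500   # assumed average row size
detail_days       = 7     # detailed data kept before pruning

intervals_per_day = 24 * 60 // interval_minutes
rows_per_day      = agents * rows_per_interval * intervals_per_day
detail_bytes      = rows_per_day * detail_days * row_size_bytes

print(f"rows inserted per day : {rows_per_day:,}")
print(f"detail data retained  : {detail_bytes / 1024**2:,.1f} MB "
      f"(before summarization tables and database overhead)")
```

Multiply that across dozens of attribute groups and hundreds of agents and you can see how quickly the warehouse grows, which is exactly what the spreadsheet helps you see up front.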
Here's a link to download the TDW load projection worksheet:
https://www-304.ibm.com/software/brandcatalog/ismlibrary/details?catalog.label=1TW10TM1Y
Some potential z/OS maintenance requirements if running TEMS on z/OS 1.13
If you are running OMEGAMON (and therefore running a TEMS) on z/OS 1.13 you may run into a scenario where you would need to apply some maintenance to DFSMS. The problem would be fairly pronounced (the TEMS would fail to start) and you would see KLVST027 error messages in the RKLVLOG for the task.
Here's a link to a technote that describes the issue in more detail:
http://www-304.ibm.com/support/docview.wss?uid=swg21596240&myns=swgtiv&mynp=OCSS2JNN&mync=R
Friday, June 1, 2012
Getting started with Tivoli Data Warehouse
So you log in to the Tivoli Portal, click the history collection icon, and request that some history be collected. You specify the tables to collect, define the collection interval, specify the warehouse option, and enable summarization and pruning. Finally, you click the distribution tab and select the managed systems you want to collect history for. Click 'Apply' and notice that the icon to the left of your definition entry turns green. See the example of the typical steps to enable history collection.
Everything's great and all is now operational. Right? Well, maybe.
One area of confusion for users is that when you define history collection in the Tivoli Portal, it may look like everything is being collected and you should be getting data, but when you request the data in the Tivoli Portal what you get is no data at all or various SQL errors. You see the history icons and everything in the workspaces, but you don't get any data. The question then becomes, what do you do next?
The thing to keep in mind is that there is infrastructure at several levels that needs to be in place for the Tivoli Data Warehouse to function as advertised. To enable TDW you need the following infrastructure: history collection PDS's (persistent data stores) created for the TEMAs (i.e. the agents), warehouse and summarization/pruning processes installed and enabled with the proper drivers and definitions, and finally a target database to store the data (I usually use DB2). If any of these things is not in place, or not correctly configured, you will probably not get the history data you are looking for.
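Once you think all of that is wired up, one quick sanity check is to look in the warehouse database itself and see whether rows are actually arriving for the attribute group you enabled. Here's a minimal sketch, assuming a DB2 warehouse and the ibm_db Python driver; the schema, credentials, and the example table name ("Address_Space_CPU_Utilization") are placeholders, since TDW table names vary by agent and attribute group and are case-sensitive.

```python
# Sanity check: is history data landing in the warehouse at all?
# Assumptions: DB2 warehouse reachable via ibm_db; schema, credentials,
# and the example table name are placeholders for your environment.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=WAREHOUS;HOSTNAME=tdwhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=itmuser;PWD=secret;", "", "")

sql = ('SELECT COUNT(*) AS NROWS, MAX("WRITETIME") AS LATEST '
       'FROM ITMUSER."Address_Space_CPU_Utilization"')

stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_assoc(stmt)
# WRITETIME is ITM's internal timestamp format, so MAX() shows the newest row
print("rows:", row["NROWS"], "latest WRITETIME:", row["LATEST"])
ibm_db.close(conn)
```

If the count is zero, or the table doesn't exist at all, work backwards through the infrastructure listed above: the agent's persistent data store, the warehouse process, and summarization/pruning.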
We will go through in subsequent posts things to look for when trying to debug TDW issues.