Sep 28

Collecting Data for Tivoli Storage Manager: Server Database Reorganization

DB2, TSM Server 6
Gathering General Information

For supported levels of IBM Tivoli Storage Manager, you can use IBM Support Assistant (ISA) to capture general information, or you can collect it manually.
Entering the general information into an electronically opened PMR (ESR) eliminates waiting on the phone to provide it to Level 1 support.

Manually Gathering General Information

From a Tivoli Storage Manager Administrative command line client, enter the following commands:

  • QUERY SYSTEM > querysys.txt
  • QUERY ACTLOG begind=<mm/dd/yyyy> begint=<hh:mm> endd=<mm/dd/yyyy> endt=<hh:mm> > actlog.txt

– begind and begint are the beginning date and time for the actlog entries being collected
– endd and endt are the ending date and time for the actlog entries being collected
– the actlog gather should cover the full time frame of the issue/problem/scenario being diagnosed
The redirections in the commands above write the output to files named querysys.txt and actlog.txt in the Tivoli Storage Manager server's working directory. The names of these files can be changed, and a full path can be specified to place the output in any desired directory under any desired name.
These files along with the following files/info should be included as general information:

  • dsmserv.opt
  • dsmserv.err
  • details of operating system levels
  • Tivoli Storage Manager Server specific version (ex: 6.2.3.0)
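The collection above can also be scripted. The following is a minimal sketch, assuming a configured dsmadmc administrative client; the admin ID, password, and the example date window are placeholders, not values from this document:

```shell
#!/bin/sh
# Sketch: build the general-information queries for a given problem window.
# The dates/times below are illustrative; substitute your own incident window.
BEGIN_DATE="05/01/2012"; BEGIN_TIME="08:00"
END_DATE="05/02/2012";   END_TIME="17:00"

ACTLOG_CMD="QUERY ACTLOG begind=$BEGIN_DATE begint=$BEGIN_TIME endd=$END_DATE endt=$END_TIME"
echo "$ACTLOG_CMD"

# From an administrative client you would then run, for example:
#   dsmadmc -id=admin -password=xxxxx "QUERY SYSTEM" > querysys.txt
#   dsmadmc -id=admin -password=xxxxx "$ACTLOG_CMD" > actlog.txt
```

Bundle the resulting files with dsmserv.opt, dsmserv.err, and the version details listed above.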
Manually Gathering Server Database Reorganization Information

If you are experiencing difficulties with server-initiated reorganization, follow the instructions in this section to gather the information that will be required by IBM Software Support:
1. Verify that you are running V6.1.5.10, 6.2.4, or 6.3.1 or later versions of the Tivoli Storage Manager server.
2. Indicate whether you are running data deduplication.
3. From a DB2 CLP window, run the following commands (for steps 3 through 8) as the instance user while the Tivoli Storage Manager server is running:

db2 connect to tsmdb1
db2 set schema tsmdb1
db2pd -d tsmdb1 -reorg index > db2pd-reorg-index.txt
db2pd -d tsmdb1 -runstats > db2pd-runstats.txt
4. Determine whether the database was created under Tivoli Storage Manager V6.1 or later versions.
Run the following select:

db2 "select reclaimable_space_enabled from table(mon_get_tablespace('',-1)) as T1 where tbsp_id in (2,4,5,6)" > reclaimable_space.txt

The reclaimable_space_enabled column will be zero for databases created under server V6.1, even if the system was later upgraded to server V6.2 or later. If the database was created under Tivoli Storage Manager V6.2 or later, the column will be 1. Note that the mon_get_tablespace table function does not exist on V6.1 servers.

5. If you are experiencing unexplained issues with database growth, collect the following information:
db2 reorgchk current statistics on table all > db2reorgchk.txt
Important note: If you do not specify “current statistics,” the default is “update statistics,” which will run RUNSTATS commands on all tables in the database. This will likely have a huge performance impact and will take many days to complete.
db2pd -d tsmdb1 -tablespace > db2pd-tablespace.txt
After reorganization is run on all the tables, the output from
db2 "select count(*) as \"TableCount\" from global_attributes where owner='RDB' and name like 'REORG_TB_%'" > table_count.txt
will be at least 130.
After reorganization is run on all the indices on all the tables, the output from
db2 "select count(*) as \"Indices for TableCount\" from global_attributes where owner='RDB' and name like 'REORG_IX_%'" > index_count.txt
will be at least 130.
The following selects can be used to get the timestamps for table reorganizations and the tables for which indices have been reorganized:

db2 "select cast( substr(name,10,min(30,length(name)-9)) as char(30)) as \"Tablename\", substr(char(datetime),1,10) as \"Last Reorg\" from global_attributes where owner='RDB' and name like 'REORG_TB_%' and datetime is not NULL order by datetime desc" > table_last_reorg.txt

db2 "select cast( substr(name,10,min(30,length(name)-9)) as char(30)) as \"Indices for Tablename\", substr(char(datetime),1,10) as \"Last Reorg\" from global_attributes where owner='RDB' and name like 'REORG_IX_%' and datetime is not NULL order by datetime desc" > index_last_reorg.txt

Items that will help detect lock-wait conditions:

db2 get snapshot for all applications > application.txt

db2 "select application_handle, elapsed_time_sec, substr( stmt_text, 1, 512) as stmt_text from sysibmadm.mon_current_sql where elapsed_time_sec > 600" > application_handle.txt

db2pd -d tsmdb1 -wlocks > wlocks.out

db2 "SELECT agent_id FROM sysibmadm.applications WHERE appl_name='db2reorg' AND appl_status='LOCKWAIT'" > agent_id.txt

6. From a dsmadmc client, obtain the last 30 days of reorganization activity:
q actlog begindate=today-30 enddate=today search=anr029 > anr029.txt
q actlog begindate=today-30 enddate=today search=anr031 > anr031.txt
q actlog begindate=today-30 enddate=today search=anr033 > anr033.txt
Those queries are used to get the Tivoli Storage Manager view of table and index reorganization activity, and RUNSTATS activity.
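The three queries can also be generated in a loop from a system shell. This is a convenience sketch; the commented dsmadmc invocation is an assumption about how your administrative client is configured, with placeholder credentials:

```shell
# Sketch: generate the three 30-day activity-log queries.
# Only the command strings are printed here; the commented dsmadmc call
# shows how they might be issued (admin ID/password are placeholders).
for msg in anr029 anr031 anr033; do
  cmd="q actlog begindate=today-30 enddate=today search=$msg"
  echo "$cmd"
  # dsmadmc -id=admin -password=xxxxx "$cmd" > "$msg.txt"
done
```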
7. Collect a trace while the reorganization window (REORGBEGINTIME + REORGDURATION hours) is active, and ensure that a database backup is not running, because reorganizations cannot run while a database backup is running. From a dsmadmc client, issue the following commands:
trace dis *
trace ena TBREORG
trace begin <valid_path_and_filename>

Collect the trace for at least 1 hour, then issue:

trace flush
trace end
trace dis *
8. As the instance user, collect the output of the

    db2support -d tsmdb1 -c -s -g

command from a system shell.

Submitting Information to IBM Support

After a PMR is opened, you can submit diagnostic troubleshooting data to IBM.
If you are using ESR, update the PMR to indicate that data has been sent.

Online Self-Help Resources
  • Review up-to-date product information at the Tivoli Storage Manager Product Support page.
  • Use the IBM Electronic Service Request tool to contact the Tivoli Storage Manager Support team when you require assistance from IBM.
  • Use the IBM Support Assistant (ISA). This free cross-product tool helps you increase your capacity for self-help; the Tivoli Storage Manager server has a plug-in for the ISA tool.
  • Install and use the IBM Support Toolbar. This stand-alone application lets you easily search IBM.com for all types of software support content, and it organizes both the major areas of software support and the individual brand support sites into a concise application.
Related Information

Database Reorg Technote

written by Bosse

Sep 28

INTRA_PARALLEL

DB2

The exploitation of parallelism within a database, and within an application accessing a database, can significantly benefit overall database performance as well as normal administrative tasks. Two types of query parallelism are available with DB2 UDB: inter-query parallelism and intra-query parallelism.

Inter-query parallelism refers to the ability of multiple applications to query a database at the same time. Each query will execute independently of the others, but DB2 UDB will execute them at the same time.

Intra-query parallelism refers to the ability to break a single query into a number of pieces and execute those pieces at the same time, using either intra-partition parallelism or inter-partition parallelism, or both.

 

Intra-Partition Parallelism

Intra-partition parallelism refers to the ability to break up a query into multiple parts within a single database partition and execute these parts at the same time. This type of parallelism subdivides what is usually considered a single database operation, such as index creation, database load, or SQL queries into multiple parts, many or all of which can be executed in parallel within a single database partition. Intra-partition parallelism can be used to take advantage of multiple processors of a symmetric multiprocessor (SMP) server.

Intra-partition parallelism can take advantage of either data parallelism or pipeline parallelism. Data parallelism is normally used when scanning large indexes or tables. When data parallelism is used as part of the access plan for an SQL statement, the index or data will be dynamically partitioned, and each of the executing parts of the query (known as package parts) is assigned a range of data to act on. For an index scan, the data will be partitioned based on the key values, whereas for a table scan, the data will be partitioned based on the actual data pages.

Pipeline parallelism is normally used when distinct operations on the data can be executed in parallel. For example, a table is being scanned and the scan is immediately feeding into a sort operation that is executing in parallel to sort the data as it is being scanned.

Figure 2.2 shows a query that is broken into four pieces that can be executed in parallel, each working with a subset of the data. When this happens, the results can be returned more quickly than if the query was run serially. To utilize intra-partition parallelism, the database must be configured appropriately.

 

Figure 2.2. Intra-partition parallelism.
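The idea in Figure 2.2 can be illustrated outside DB2 with a toy shell script: four workers each process a disjoint subset of the data in parallel, and the partial results are combined. This is only an analogy for data parallelism, not DB2 code; the data set (the numbers 1 to 100) and the worker count are arbitrary choices:

```shell
# Illustration of data parallelism: four "package parts" each sum a
# disjoint range of 1..100 concurrently, like four parallel table-scan
# parts each assigned a range of data pages.
tmpdir=$(mktemp -d)
for part in 1 2 3 4; do
  (
    lo=$(( (part - 1) * 25 + 1 ))   # start of this worker's range
    hi=$(( part * 25 ))             # end of this worker's range
    sum=0
    i=$lo
    while [ "$i" -le "$hi" ]; do
      sum=$(( sum + i ))
      i=$(( i + 1 ))
    done
    echo "$sum" > "$tmpdir/part.$part"
  ) &                               # run each part in the background
done
wait                                # all parts finish before combining
total=0
for f in "$tmpdir"/part.*; do
  total=$(( total + $(cat "$f") ))
done
echo "total=$total"                 # 1+2+...+100 = 5050
rm -rf "$tmpdir"
```

The combining step at the end plays the role of the coordinating agent that merges the results of the parallel parts.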

Intra-partition parallelism must be enabled for the DB2 instance before the queries can be executed in parallel. Once intra-partition parallelism is enabled, the degree of parallelism, or number of pieces of the query that can execute in parallel, can be controlled using database configuration parameters.

 

Configuring Intra-Partition Parallelism

Intra-partition parallelism in DB2 UDB is enabled or disabled using the database manager configuration parameter INTRA_PARALLEL. To enable intra-partition parallelism in DB2 UDB, the INTRA_PARALLEL configuration must be set to YES. This can be done using the following command:

UPDATE DBM CFG USING INTRA_PARALLEL YES

The degree of parallelism can then be controlled at the instance level, the database level, the application level, or the statement level. The degree of parallelism can be set to a specific value or to ANY. If the degree of parallelism is set to ANY, the optimizer will determine the degree of parallelism for each individual SQL query that is submitted, based on the query itself and the number of CPUs available to the database or database partition.

Table 2.2 gives an overview of the parameters and options that are related to intra-partition parallelism in DB2 UDB.

Table 2.2. Controlling Intra-Partition Parallelism in DB2 UDB

  • INTRA_PARALLEL (YES/NO): Defaults to NO on a uniprocessor machine and to YES on an SMP machine. If changed, packages already bound will automatically be rebound at the next execution.
  • MAX_QUERYDEGREE (1-32767, ANY): Defaults to ANY, which allows the optimizer to choose the degree of parallelism based on cost. No SQL executed on a database in this instance can use a degree of parallelism higher than this value.
  • DFT_DEGREE (1-32767, ANY): Defaults to 1 (no parallelism). Provides the default value for the CURRENT DEGREE special register and the DEGREE bind option, and is the maximum for any SQL in this database.
  • CURRENT DEGREE (1-32767, ANY): Sets the degree of parallelism for dynamic SQL. Defaults to DFT_DEGREE.
  • DEGREE (1-32767, ANY): Sets the degree of parallelism for static SQL. Defaults to DFT_DEGREE. To change: PREP STATIC.SQL DEGREE n
  • RUNTIME DEGREE, set with the SET RUNTIME DEGREE command (1-32767, ANY): Sets the degree of parallelism for running applications. To change: SET RUNTIME DEGREE FOR (appid) TO n. Affects only queries issued after SET RUNTIME DEGREE is executed.
  • DB2DEGREE, set in the CLI configuration file (1-32767, ANY): Default is 1. Sets the degree of parallelism for CLI applications; the CLI driver issues a SET CURRENT DEGREE statement after the database connection is made.

 

The maximum degree of parallelism for an active application can be specified using the SET RUNTIME DEGREE command. The application can set its own run time degree of parallelism by using the SET CURRENT DEGREE statement. The actual run time degree used is the lowest of:

  • MAX_QUERYDEGREE instance configuration parameter
  • Application run time degree
  • SQL statement compilation degree
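In other words, the effective run time degree is the minimum of the three values, with ANY imposing no limit. The following helper is purely illustrative (the function name and the treatment of ANY as unbounded are my assumptions, not a DB2 utility):

```shell
# effective_degree: return the lowest of three degree settings,
# treating "ANY" as "no limit" (illustrative helper, not a DB2 tool).
effective_degree() {
  min=32767   # top of the 1-32767 range, i.e. effectively unbounded
  for d in "$1" "$2" "$3"; do
    [ "$d" = "ANY" ] && continue        # ANY does not constrain the result
    [ "$d" -lt "$min" ] && min=$d
  done
  echo "$min"
}

# MAX_QUERYDEGREE=4, application run time degree=ANY, compilation degree=2:
effective_degree 4 ANY 2    # prints 2
```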

More information on parallelism support in DB2 Universal Database can be found in the DB2 UDB Administration Guide: Performance.

For a multi-partitioned database on a large SMP server, the maximum degree of parallelism for each partition should be limited so that each partition does not attempt to use all of the CPUs on the server. This can be done using the MAX_QUERYDEGREE instance configuration parameter. For a 32-way SMP server with eight database partitions, the maximum degree of parallelism for each partition could be limited to four, as follows:

UPDATE DBM CFG USING MAX_QUERYDEGREE 4
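The arithmetic behind that value is simply the CPU count divided by the number of partitions. A one-line sketch, using the example's counts rather than values probed from a real system:

```shell
# Per-partition degree for a 32-way SMP server with 8 database partitions.
cpus=32
partitions=8
max_querydegree=$(( cpus / partitions ))
echo "$max_querydegree"   # 4
# The value would then be applied with:
#   db2 update dbm cfg using MAX_QUERYDEGREE $max_querydegree
```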

For an SMP server with 16 CPUs, running two separate DB2 instances, the maximum degree of parallelism for each partition could be limited to eight, as follows:

UPDATE DBM CFG USING MAX_QUERYDEGREE 8

Consider a DB2 instance, with intra-partition parallelism enabled, that has two databases. The benefit of intra-partition parallelism is very different if one database is a data mart or data warehouse with large, complex queries scanning a large amount of data, while the other database is used as a journal and is accessed using only INSERT statements. In this case, the default degree of parallelism can be set to a different value for each database. If the databases are named DMDB and JRNLDB, this can be done as follows:

UPDATE DB CFG for DMDB USING DFT_DEGREE 8
UPDATE DB CFG for JRNLDB USING DFT_DEGREE 1

For CLI, ODBC, and JDBC applications, the degree of parallelism is controlled using the db2cli.ini file. The following is an example db2cli.ini file where any application that connects to the SAMPLE database will use a degree of parallelism of four.

[common]
TRACE=1                               Turn the trace on
TRACECOMM=1                           Trace communications costs as well
TRACEFLUSH=1                          Flush the trace as it happens
TRACEPATHNAME=d:\trace                Directory for the trace.

; Comment lines start with a semi-colon.

[sample]
DBALIAS=MYSAMP
DB2DEGREE=4
autocommit=0

To change the degree of parallelism for a currently executing SQL statement, the SYSADM can change the run time degree for an application. For an application with an application ID of 130, to change the degree of parallelism to 2, the following command can be used:

SET RUNTIME DEGREE FOR (130) to 2

NOTE

This change does not affect the currently executing SQL statement, but it will be effective for all subsequent SQL statements.

 

 

written by Bosse

Sep 28

Automated System Recovery backup fails with ANS1468E

Windows VSS

ASR backups fail stating no files will be backed up.

Symptom

The following is reported in the error log.

ANS1468E Backing up Automated System Recovery (ASR) files failed. No files will be backed up.

Resolving the problem

The following Microsoft hotfixes should be applied to resolve this. Please contact Microsoft for additional information on these fixes.

http://support.microsoft.com/kb/934016
http://support.microsoft.com/kb/951568

written by Bosse

Sep 27

Configure switches successfully for IBM TotalStorage Productivity Center for Fabric

Fabric and Switches

Introduction

IBM TotalStorage Productivity Center for Fabric version 2 is a storage area network (SAN) management application that discovers devices in the SAN and displays a topology of the SAN environment. It is designed to operate using industry-based standards for communication with fibre channel switches and other SAN devices. This can be done using the simple network management protocol (SNMP) interface of out-of-band agents, the FC-GS-3 interface of in-band agents, or a combination of both. FC-GS-3 refers to the Fibre Channel Generic Services 3 standard. In order for the information to be gathered and displayed as expected, the switches must be configured correctly. The configuration varies between vendors and depends on whether in-band or out-of-band agents are used.

One of the common sources of customer problems is incorrect configuration of the switches being managed. This leads to missing information and the misconception that the IBM TotalStorage Productivity Center for Fabric product does not work with certain switches. This document addresses the basic configuration requirements of the fibre channel switches supported by IBM TotalStorage Productivity Center for Fabric. It is intended to help you configure the switches for a high chance of success. The switch vendors covered by this document are Brocade, Cisco, CNT, McDATA and QLogic. Other vendors, such as IBM, often sell these switches under their own labels.

Configuration overview

With IBM TotalStorage Productivity Center for Fabric in-band discovery, the agent software is installed on SAN-attached hosts. The agents collect information about the fabric across the fibre channel network by querying the switch and the attached devices through the host bus adapter (HBA) in the system. For the switches to successfully receive and respond to the queries, there are some basic requirements.

  • The switch must support the FC-GS-3 standard interface for discovery.
    • Name server
    • Configuration server
    • Unzoned name server
  • For zone control functions, the fabric zone server must be supported, except in the case of Brocade.

Fabric events are automatically sent from the agent to the Fabric Manager with in-band discovery. There is no need for configuration.

The switch configuration for in-band agents is typically much simpler than for out-of-band, although it requires more involvement on the host side with the HBA and agent software.

SNMP-based out-of-band discovery collects much of the same information that can be obtained in-band, but it does so differently. In out-of-band discovery, the Fabric Manager system queries the switch directly rather than going through a Fabric Agent and the fibre channel network. It does this using the SNMP protocol to send queries across the IP network to management information bases (MIBs) supported on the switch. IBM TotalStorage Productivity Center for Fabric uses the FC Management MIB (sometimes referred to as the FA MIB) and the FE MIB. The queries are sent only to switches that have been added to IBM TotalStorage Productivity Center for Fabric for use as SNMP agents. In order for the switch to successfully receive and respond to the query, there are some basic requirements.

  • The FC Management MIB and FE MIB must be enabled on the switch.
  • The switch must be configured to receive SNMPv1 queries and respond in SNMPv1. Some switches are configured to use SNMPv2 or SNMPv3 by default.
  • The community string configured in IBM TotalStorage Productivity Center for Fabric must match one of the community strings configured on the switch with read access. Cisco switches must additionally have a community string match for write access. The default community strings in IBM TotalStorage Productivity Center for Fabric are public for read access and private for write access. Refer to the IBM TotalStorage Productivity Center for Fabric User's Guide for details on changing the strings. Additional community strings can be defined on the switches, but will not be used.
  • SNMP access control lists need to include the Fabric Manager system. Some automatically include all hosts while others exclude all by default.

Another aspect of the SNMP configuration includes trap notification. SNMP traps are generated by the switch and directed to IBM TotalStorage Productivity Center for Fabric as an indication that something in the fabric has changed and that a discovery should occur to identify changes. The default configuration for handling switch traps is to send them from the switch to port 162 on the Microsoft® Windows® Fabric Manager system or the Windows Fabric Remote Console system. In this configuration, Tivoli® NetView® receives the traps and forwards them to port 9556 on the Fabric Manager system. The IBM TotalStorage Productivity Center for Fabric User’s Guide discusses this and other possible trap flow options in further detail. For the successful generation and reception of traps, there are some configuration requirements.

  • The trap destination must be set. This is the host that receives the trap and sends it to Fabric Manager.
  • The destination port must be set. Tivoli NetView utilizes the Microsoft SNMP service that listens on port 162 by default.
  • The traps must be sent as SNMPv1.
  • The trap severity level should be set to generate traps in change conditions. This typically means to send error level traps and anything more severe.

Configuring these settings differs between switch vendors and models. Details for configuring the supported switches are provided. Additional settings that are vendor specific are also provided. The intent is to provide enough information to help you configure the switch so that IBM TotalStorage Productivity Center for Fabric can use it. It is not intended to describe every feature available for configuration on the switches. If you are not familiar with the settings suggested, please refer to the vendor’s documentation for details about them. Although the settings can be done using a variety of management interfaces for the switches, most are described using commands from the command line interface (CLI), which can be accessed through a telnet session to the switch. For details about IBM TotalStorage Productivity Center for Fabric support for specific models, refer to the device compatibility table available from the support Web site. See the Resources section for a link.

Brocade configuration

Brocade fibre channel switches are supported with IBM TotalStorage Productivity Center for Fabric as out-of-band SNMP agents and through in-band discovery.

In-band FC-GS-3 configuration

Brocade switches should be configured with core Port_ID (PID) format (PID 1) to be used with IBM TotalStorage Productivity Center for Fabric in-band agents. No other configuration is required. However, the platform management capabilities of the switch can be activated or deactivated. This potentially affects how some storage devices display in the topology. You can use the following commands from the switch CLI to view or change the PID and platform settings.

  • configShow: Show the configuration settings selected on the switch.
    • Show the fabric.ops.mode.pidFormat value.
  • configure: Enter a guided switch configuration.
    • Set the switch PID format to 1. It is located in the fabric parameters section.
  • msPlatShow: Show which devices are registered with the platform database.
  • msPlCapabilityShow: Show whether the switch is configured for platform support.
  • msPlMgmtActivate: Enable platform support on the switch.
  • msPlMgmtDeactivate: Disable platform support on the switch.
  • msPlClearDB: Remove the currently registered platforms from the database.

Out-of-band SNMP configuration

Configuring a Brocade switch for out-of-band management addresses the basic items listed earlier. The following commands can be used from the switch CLI for configuration.

  • snmpMibCapSet: Set the SNMP MIB capabilities on the switch. This is required for both discovery and traps. Enable the following MIBs:
    • FE-MIB
    • FA-MIB
    • FA-TRAP or SW-TRAP. Include individual traps, if given the option. Additional traps can be enabled but should be carefully considered for their usefulness in invoking a discovery.
  • agtCfgShow: Show the current SNMP configuration.
  • agtCfgSet: Set the basic SNMP configuration.
    • Specify a read-only or read-write community string.
    • Specify an access control list that includes the Fabric Manager system. An ACL of 0.0.0.0 is the default and will not restrict SNMP access from any host.
    • Set the trap destination address. Port 162 is used by default and cannot be configured.
    • A minimal trap severity level of 2 is recommended to include error and critical traps.
    • AuthTrapsEnabled is not a required option, but can be set.

Brocade switches should also be configured to use the core PID format when doing out-of-band SNMP discovery. See the configuration in the in-band section.

Cisco configuration

Cisco MDS 9000 family fibre channel switches are supported with IBM TotalStorage Productivity Center for Fabric with in-band and out-of-band discovery.

In-band FC-GS-3 configuration

Cisco switches do not require special configuration to work with IBM TotalStorage Productivity Center for Fabric in-band agents. However, in-band discovery is limited to virtual SANs (VSANs) with agents attached.

Out-of-band SNMP configuration

Configuring a Cisco switch for out-of-band management addresses the basic items listed earlier. Note that Cisco switches are configured to use SNMP v3 by default and must be reconfigured. VSAN information for the entire physical infrastructure is gathered. The following commands can be used from the switch CLI for configuration.

  • show snmp: View the current SNMP settings. The command can be made more specific to provide details on a particular SNMP setting, such as show snmp community or show snmp trap.
  • config terminal: Enter configuration mode. The following commands are issued at the config prompt.
    • snmp-server community <string> ro: Set the read-only community string. It has network-administrator access by default.
    • snmp-server community <string> rw: Set the read-write community string. This is necessary to make sure the VSAN information gathered is not stale. It has network-administrator access by default.
    • snmp-server community <string> group network-operator: Set the role for the community string. Setting it to network-operator specifies that the community string is used for SNMPv1 communication. If the community string is left with a role of network-administrator, discovery in IBM TotalStorage Productivity Center for Fabric will not work.
    • snmp-server host <address> traps version 1 <community> udp-port <number>: Set the trap destination address so that it sends SNMPv1 traps. Specify the community defined earlier. Port 162 is the default listening port for the host.
  • snmp-server enable traps: Enables all traps on the switch. The default enablement of traps on the switch is sufficient in most cases.

CNT configuration

CNT fibre channel switches are supported with IBM TotalStorage Productivity Center for Fabric with in-band and out-of-band discovery.

In-band FC-GS-3 configuration

CNT switches do not require special configuration to work with IBM TotalStorage Productivity Center for Fabric in-band agents.

Out-of-band SNMP configuration

Configuring a CNT switch for out-of-band management addresses the basic items listed earlier. In addition, you must configure an SNMP Start Port option for CNT FC9000 directors. The CNT UMD does not require this. The information below describes setting the fields using CNT’s Enterprise Manager application. The steps differ between the FC9000 and UMD models.

  • FC9000
    • System Configuration Panel -> Configuration Type -> Network Option -> Trap/Manager Settings
      • Set the trap address. Port 162 is used by default.
      • Trap authorization does not need to be checked.
      • SNMP Configuration should be enabled.
      • Set SNMP Start Port to 0. This is required to match the port numbering obtained out-of-band with Enterprise Manager and that obtained in-band.
      • Set the Fabric Manager system's address for the SNMP manager IP.
      • Set a community string.
  • UMD
    • SNMP Configuration Panel
      • Set SNMP Access to Enabled.
      • Set the trap destination address. Port 162 is used by default.
      • Select SNMP v1 trap and check the Enabled box.
    • User Security
      • Configure an SNMP user in the Users tab.
      • Use SNMP Ver 1 for the user type.
      • Set the Fabric Manager system's address in the IP address field.
      • Set a community string.

McDATA configuration

McDATA fibre channel switches are supported with IBM TotalStorage Productivity Center for Fabric with both in-band and out-of-band discovery. There are some anomalies in this support for certain models. Refer to the IBM TotalStorage Productivity Center for Fabric Support site for details.

In-band FC-GS-3 configuration

Configuring a McDATA switch for in-band management is different than for the other vendors. The McDATA switch has the ability to enable or disable support for the FC-GS-3 interface through their open systems management server (OSMS). It also has a security feature that allows it to enable or disable the ability for other hosts to make changes on the switch. These settings must be done on every switch in the fabric, not just the ones with Fabric Agents attached.

  • config openSysMS: Enter the open systems management server configuration mode. The following commands are issued at the Config.OpenSysMS prompt.
    • setState Enable: Enable OSMS on the switch.
    • setHostCtrlState Enable: Enable the host control option on the switch. This allows the zone control functions of IBM TotalStorage Productivity Center for Fabric to function with McDATA fabrics.

Out-of-band SNMP configuration

Configuring a McDATA switch for out-of-band management addresses the basic items listed earlier, but also has a unique concern. McDATA fabrics are typically on private networks. In order for out-of-band discovery to occur with IBM TotalStorage Productivity Center for Fabric, the Fabric Manager system must have network connectivity directly to the switches. It cannot use the McDATA EFCM system instead. The following commands can be used from the switch CLI for configuration.

  • config snmp: Enter SNMP configuration mode. The following commands are issued at the Config.SNMP prompt.
    • addCommunity <Index> <Name> <writeAuth> <trapRecip> <udpNum>: Set the community string name and specify a destination address and port number. The options for writeAuth are Enabled or Disabled, and either can be used. The index refers to the number of communities already defined.
  • show snmp: View the current SNMP settings on the switch.
  • config security: Enter security configuration mode. The following commands are issued at the Config.Security prompt.
    • switchACL setState <Enabled|Disabled>: The ACL is disabled by default, which does not restrict access from the Fabric Manager system.
  • show security switchACL: View the current switch ACL settings.

QLogic configuration

QLogic fibre channel switches are supported with IBM TotalStorage Productivity Center for Fabric with both in-band and out-of-band discovery.

In-band FC-GS-3 configuration

QLogic switches do not require any special configuration to work with IBM TotalStorage Productivity Center for Fabric in-band agents.

Out-of-band SNMP configuration

Configuring a QLogic switch for out-of-band management addresses the basic items listed earlier. You must configure an additional ProxyEnabled option for QLogic switches. The following commands can be used from the switch CLI for configuration.

  • admin start: Enter administrative configuration mode.
  • set setup snmp: Enter a guided SNMP configuration utility where the following options can be configured.
    • Set the SNMPEnabled option to true.
    • Set the trap destination addresses.
    • Set the trap destination ports. Port 162 is used.
    • Set the trap severity level to a minimum of error to include unknown, emergency, alert, critical, and error level traps.
    • Set the trap version to 1 so that SNMPv1 is used.
    • Set the trap to be enabled so that it will be generated.
    • Set the ReadCommunity to a string that matches what is used by IBM TotalStorage Productivity Center for Fabric.
    • AuthFailureTrap can be set to either true or false.
    • Set ProxyEnabled to false when more than one switch exists in the fabric. Otherwise, duplicate entries for the switches will appear in the topology view.

Conclusion

IBM TotalStorage Productivity Center for Fabric is a powerful tool for managing SANs, but the switches must be configured properly. Incorrectly configured switches lead to missing information and to the mistaken impression that the product is not working correctly. Because the configuration settings vary by vendor and model, this can be confusing. Using the configuration settings described in this article gives you a high chance of success when setting up IBM TotalStorage Productivity Center for Fabric to manage an environment.

Resources


written by Bosse