IBM Tivoli Workload Scheduler

General Information
Version 8.1 (Maintenance Release October 2003)

GH19-4539-01

Note: Before using this information and the product it supports, read the information in “Notices” on page 51.

Refreshed Edition (October 2003)

This refreshed edition applies to version 8, release 1, modification 0 of IBM Tivoli Workload Scheduler for z/OS (program number 5697-WSZ) and to all subsequent releases and modifications until otherwise indicated in new editions.

Contents

Figures  v

Preface  vii
  Who Should Read This Manual
  What This Manual Contains
  Publications
    Accessing Publications Online
    Softcopy Collection Kit
    Ordering Publications
  Using LookAt to Look Up Message Explanations
  Contacting IBM Software Support

Summary of Enhancements  xi
  Enhancements to Tivoli Workload Scheduler for z/OS
    Restart and Cleanup
    Job Durations in Seconds
    Integration with Tivoli Business Systems Manager
    Integration with Removable Media Manager
    Tivoli Workload Scheduler End-to-end
    Minor Enhancements
  Enhancements to Tivoli Workload Scheduler
    Multiple Holiday Calendars
    Free Day Rule
    Integration with Tivoli Business Systems Manager
    Performance Improvements
    Installation Improvements
    Linux Support
  Enhancements to the Job Scheduling Console
    Usability Enhancements
    Graphical Enhancements
    Non-modal Windows
    Common View
    Tivoli Workload Scheduler for z/OS-specific Enhancements
    Tivoli Workload Scheduler-specific Enhancements

Chapter 1. Overview of the Workload Scheduler Suite  1
  The State-of-the-art Solution
  Comprehensive Workload Planning
  Centralized Systems Management
  Systems Management Integration
  Automation
  Workload Monitoring
  Automatic Workload Recovery
  Productivity
  Business Solutions
  User Productivity
  Growth Enabling
  Who Uses the Workload Scheduler Suite
  Role of the Scheduling Manager—The Focal Point
  Role of the Operations Manager
  A Powerful Tool for the Shift Supervisor
  Role of the Application Programmer
  Console Operators
  Workstation Operators
  End Users and the Help Desk
  Summary

Chapter 2. Tivoli Workload Scheduler  9
  Overview
  What is Tivoli Workload Scheduler
  The Tivoli Workload Scheduler Network
  Manager and Agent Types
  Topology
  Networking
  Tivoli Workload Scheduler Components
  Tivoli Workload Scheduler Scheduling Objects
  The Production Process
  Scheduling
  Defining Scheduling Objects
  Creating Job Streams
  Setting Job Recovery
  Running Production
  Start-of-day Processing
  Running Job Streams
  Monitoring
  Reporting
  Auditing
  Options and Security
  Setting Global and Local Options
  Setting Security
  Using Time Zones

Chapter 3. Tivoli Workload Scheduler for z/OS  21
  How Your Production Workload Is Managed
  Structure
  Concepts
  Plans in Tivoli Workload Scheduler for z/OS
  Long-term Planning
  Detailed Planning
  Automatically Controlling the Production Workload
  Automatic Workload Submission
  Automatic Recovery and Restart
  z/OS Automatic Restart Manager Support
  Workload Manager (WLM) Support
  Automatic Status Checking
  Status Reporting from Heterogeneous Environments
  Status Reporting from User Programs
  Additional Job-completion Checking
  Managing Unplanned Work
  Interfacing with Other Programs
  Manual Control and Intervention
  Status Inquiries
  Modifying the Current Plan
  Management of Critical Jobs
  Security
  Audit Trail
  System Authorization Facility
  Protection of Data and Resources
  Data Integrity During Submission
  Configurations of Tivoli Workload Scheduler for z/OS
  The Controlling System
  Controlled z/OS Systems
  Remote Systems
  Remote Panels and Program Interface Applications
  Scheduling Jobs That Are in Tivoli Workload Scheduler

Chapter 4. Tivoli Job Scheduling Console  37
  Overview
  Tivoli Workload Scheduler for z/OS Tasks
    Scheduler Tasks
    Working with Job Streams
    Working with Jobs
    Working with Workstations
    Working with Resources
    Operator Tasks
    Working with Job Stream Instances
    Working with Job Instances
    Working with Workstations in the Plan
    Working with Resources in the Plan
  Tivoli Workload Scheduler Tasks
    Scheduler Tasks
    Working with Job Streams
    Working with Jobs
    Working with Calendars
    Working with Prompts
    Working with Parameters
    Working with Domains
    Working with Workstations
    Working with Workstation Classes
    Working with Resources
    Working with Users
    Operator Tasks
    Working with Job Stream Instances
    Working with Job Instances
    Working with Workstations
    Working with Domains
    Working with File Dependencies
    Working with Prompt Dependencies
    Working with Resource Dependencies
  Common Tasks

Chapter 5. End-to-end Scheduling  49
  How End-to-end Scheduling Works
  Distributed Agents
  Supported End-to-end Configurations
  Benefits of End-to-end Scheduling

Notices  51
  Trademarks

Index  53

Figures
1. This Tivoli Workload Scheduler network is made up by two domains
2. Graphic Display of Dependencies between Jobs
3. Automatic Recovery and Restart
4. Production Workload Restart and Hot Standby
5. Security
6. Tivoli Workload Scheduler for z/OS Configurations
7. Job Scheduling Console main window and Tivoli Workload Scheduler for z/OS tasks
8. Listing job streams in the database
9. Listing job instances
10. Job Scheduling Console main window and Tivoli Workload Scheduler tasks
11. Changing the job limit of a workstation in the plan
12. Listing Tivoli Workload Scheduler domains involved in the plan
13. Job Scheduling Console main window and common tasks

Preface

This book describes the Tivoli® Workload Scheduler 8.1 suite and its enterprise workload management functions. It provides introductory information about Tivoli Workload Scheduler, Tivoli Workload Scheduler for z/OS™, and the Tivoli Job Scheduling Console for all users. It does not provide detailed technical explanations of how the products work.

This book describes:
v The structure of the product
v Where it fits in single-host and multiple-host systems
v Major functions
v How it works with other products

Who Should Read This Manual

This book is intended for:
v Data processing (DP) operations managers and their technical advisors who are evaluating the product or planning their scheduling service
v Individuals who require general information for evaluating, installing, or using the product

What This Manual Contains

The information in this book is organized into the following chapters:
v Chapter 1, “Overview of the Workload Scheduler Suite,” on page 1, outlines the benefits your enterprise can achieve with the Tivoli Workload Scheduler suite.
v Chapter 2, “Tivoli Workload Scheduler,” on page 9, describes the functions of Tivoli Workload Scheduler.
v Chapter 3, “Tivoli Workload Scheduler for z/OS,” on page 21, describes the functions of Tivoli Workload Scheduler for z/OS.
v Chapter 4, “Tivoli Job Scheduling Console,” on page 37, describes the functions of the Tivoli Job Scheduling Console.
v Chapter 5, “End-to-end Scheduling,” on page 49, describes the end-to-end scheduling solution.

Publications

This book is part of an extensive library. The books in this library can help you use the product more effectively. The following table lists the publications in the library.

Table 1. List of Publications

Task | Publication | Order number
Planning Tivoli Workload Scheduler for z/OS. | Tivoli Workload Scheduler for z/OS Licensed Program Specifications | GH19-4540
Understanding the workload scheduler suite. | Tivoli Workload Scheduler General Information | GH19-4539
Using the Java™ GUI. | Tivoli Job Scheduling Console User’s Guide | SH19-4552
Using the Java GUI. | Tivoli Job Scheduling Console Release Notes | GI10-5781
Interpreting Tivoli Workload Scheduler for z/OS messages and codes. | Tivoli Workload Scheduler for z/OS Messages and Codes | SH19-4548
Installing Tivoli Workload Scheduler for z/OS. | Tivoli Workload Scheduler for z/OS Installation Guide | SH19-4543
Customizing and tuning Tivoli Workload Scheduler for z/OS. | Tivoli Workload Scheduler for z/OS Customization and Tuning | SH19-4544
Planning and scheduling the workload on Tivoli Workload Scheduler for z/OS. | Tivoli Workload Scheduler for z/OS Planning and Scheduling the Workload | SH19-4546
Controlling and monitoring the Tivoli Workload Scheduler for z/OS current plan. | Tivoli Workload Scheduler for z/OS Controlling and Monitoring the Workload | SH19-4547
Writing application programs for Tivoli Workload Scheduler for z/OS. | Tivoli Workload Scheduler for z/OS Programming Interfaces | SH19-4545
Tivoli Workload Scheduler for z/OS quick reference. | Tivoli Workload Scheduler for z/OS Quick Reference | GH19-4541
Diagnosing failures of Tivoli Workload Scheduler for z/OS. | Tivoli Workload Scheduler for z/OS Diagnosis Guide and Reference | LY19-6410
Planning and installing Tivoli Workload Scheduler. | Tivoli Workload Scheduler Planning and Installation Guide | SH19-4555
Using the Tivoli Workload Scheduler command line, understanding how extended and network agents work, and integrating Tivoli Workload Scheduler with NetView® and with Tivoli Business Systems Manager. | Tivoli Workload Scheduler Reference Guide | SH19-4556
Interpreting Tivoli Workload Scheduler error messages. | Tivoli Workload Scheduler Error Messages | SH19-4557
Installing, configuring, and using Tivoli Workload Scheduler fault-tolerant agents on AS/400®. | Tivoli Workload Scheduler AS/400 Limited FTA User’s Guide | SH19-4558
Setting up and using the Tivoli Workload Scheduler Plus module. | Tivoli Workload Scheduler Plus Module User’s Guide | SH19-4562

Accessing Publications Online

IBM posts publications for this and all other Tivoli products, as they become available and whenever they are updated, to the Tivoli Software Information Center Web site. The Tivoli Software Information Center is located at the following Web address:

http://publib.boulder.ibm.com/tividd/td/tdprodlist.html

Click the OPC link to access the product library.

Note: If you print PDF documents on other than letter-sized paper, select the Fit to page check box in the Adobe Acrobat Print dialog. This option is available when you click File → Print. Fit to page ensures that the full dimensions of a letter-sized page print on the paper that you are using.

Softcopy Collection Kit

All the books in the Tivoli Workload Scheduler for z/OS library, except the licensed publications, are available in displayable softcopy form on CD-ROM in the following Softcopy Collection Kit:
v OS/390®, SK2T-6951

You can read the softcopy books on CD-ROMs using these IBM® licensed programs:
v Softcopy Reader
v BookManager® READ/2
v BookManager READ/DOS
v BookManager READ/6000

All the BookManager programs need a personal computer equipped with a CD-ROM disk drive (capable of reading disks formatted in the ISO 9660 standard) and a matching adapter and cable. For additional hardware and software information, refer to the documentation for the specific BookManager product you are using.

Updates to books between releases are provided in softcopy only.

Ordering Publications

You can order many Tivoli publications online at the following Web site:

http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi

You can also order by telephone by calling one of these numbers:
v In the United States: 800-879-2755
v In Canada: 800-426-4968

In other countries, see the following Web site for a list of telephone numbers:

http://www.ibm.com/software/tivoli/order-lit/

Using LookAt to Look Up Message Explanations

LookAt is an online facility that lets you look up explanations for most messages you encounter, as well as for some system abends and codes. Using LookAt to find information is faster than a conventional search because in most cases LookAt goes directly to the message explanation.

You can access LookAt from the Internet at: http://www.ibm.com/eserver/zseries/zos/bkserv/lookat/ or from anywhere in z/OS or z/OS.e where you can access a TSO/E command line (for example, TSO/E prompt, ISPF, z/OS UNIX® System Services running OMVS).


The LookAt Web site also features a mobile edition of LookAt for devices such as Pocket PCs, Palm OS, or Linux-based handhelds. So, if you have a handheld device with wireless access and an Internet browser, you can now access LookAt message information from almost anywhere.


To use LookAt as a TSO/E command, you must have LookAt installed on your host system. You can obtain the LookAt code for TSO/E from a disk on your z/OS Collection (SK3T-4270) or from the LookAt Web site’s Download link.


Contacting IBM Software Support


If you have a problem with any Tivoli product, you can contact IBM Software Support. See the IBM Software Support Guide at the following Web site:


http://techsupport.services.ibm.com/guides/handbook.html


The guide provides information about how to contact IBM Software Support, depending on the severity of your problem, and the following information:
v Registration and eligibility
v Telephone numbers and e-mail addresses, depending on the country in which you are located
v Information you must have before contacting IBM Software Support


Summary of Enhancements

This section describes the enhancements to the following:
v Tivoli Workload Scheduler for z/OS
v Tivoli Workload Scheduler
v Tivoli Job Scheduling Console

Enhancements to Tivoli Workload Scheduler for z/OS

The following are enhancements to Version 8.1 of Tivoli Workload Scheduler for z/OS.

Restart and Cleanup

Data set management has been extended to improve the flexibility of job and step restart. You no longer have to rely on the exclusive use of the data store, which removes the delay in normal JES processing of system data sets. The data store can now run at a very low priority, if so desired.

Job Durations in Seconds

You can now express job durations in the plan in seconds, giving you finer, second-by-second control over scheduling.

Integration with Tivoli Business Systems Manager

Tivoli Business Systems Manager is the solution for unifying the management of business systems. Tivoli Workload Scheduler for z/OS has been enhanced to support monitoring from Tivoli Business Systems Manager. From Tivoli Business Systems Manager, you can monitor the following:
v Status changes to jobs
v Addition of jobs to the plan
v Alert conditions

Integration with Removable Media Manager

Removable Media Manager (RMM) works with restart and cleanup to assist with the management of data sets. Removable Media Manager verifies that a data set exists on a volume and then takes user-requested actions on the data set. Consequently, Tivoli Workload Scheduler for z/OS works with Removable Media Manager to properly mark and expire data sets, as needed.

Tivoli Workload Scheduler End-to-end

End-to-end scheduling connects a Tivoli Workload Scheduler domain manager, and its underlying agents and domains, to the Tivoli Workload Scheduler for z/OS engine. The engine is seen by the distributed network as the master domain manager. The Tivoli Workload Scheduler domain manager acts as the broker for the distributed network and has the task of resolving all dependencies. With this version, fault-tolerant agents replace the Tivoli OPC tracker agents and make scheduling possible on distributed platforms with more reliable, fault-tolerant, and scalable agents.

Minor Enhancements

EQQAUDIT is now installed and used as a full member of Tivoli Workload Scheduler for z/OS. You can access the functionality of EQQAUDIT from the main menu of Tivoli Workload Scheduler for z/OS. The batch control interface tool (BCIT) has also been made part of the regular installation of Tivoli Workload Scheduler for z/OS.


Enhancements to Tivoli Workload Scheduler

The following are enhancements to Version 8.1 of Tivoli Workload Scheduler.

Multiple Holiday Calendars

The freedays calendar (where a freeday is the opposite of a workday) now extends the role of the Holidays calendar, allowing users to customize the meaning of workdays within Tivoli Workload Scheduler. With this new function, you can define as many calendars as you need and associate them with each job stream you create.

Free Day Rule

The freeday rule introduces the run-cycle concept already used in Tivoli Workload Scheduler for z/OS. It consists of a number of options (or rules) that determine when a job stream should actually run if its scheduled date falls on a freeday.

Integration with Tivoli Business Systems Manager

The integration with Tivoli Business Systems Manager for Tivoli Workload Scheduler provides the same functionality as with Tivoli Workload Scheduler for z/OS. See page xi for more information.

Performance Improvements

The new performance enhancements will be particularly appreciated in Tivoli Workload Scheduler networks with many CPUs, massive scheduling plans, and complex relations between scheduling objects. The improvements are in the following areas:
v Daily plan creation: Jnextday runs faster, so the master domain manager can start its production tasks sooner.
v Daily plan distribution: the Tivoli Workload Scheduler administrator can now enable compression of the Symphony file so that the daily plan can be distributed to other nodes earlier.
v I/O optimization: Tivoli Workload Scheduler performs fewer file accesses and optimizes the use of system resources. The improvements affect:
– Event files: the response time to events is improved, so the message flow is faster.
– Daily plan: access to the Symphony file is quicker in both read and write, so the daily plan can be updated in less time than before.

Installation Improvements

On Windows NT®, the installation of Netman is no longer a separate process. It is now part of the installation steps of the product.

Linux Support

Version 8.1 of Tivoli Workload Scheduler adds support for the following Linux platforms:
v Linux for INTEL as master domain manager and fault-tolerant agent
v Linux for S/390® as fault-tolerant agent

Enhancements to the Job Scheduling Console

The Job Scheduling Console Feature Level 1.2 is delivered with the workload scheduler suite or with either of its components. The following are the latest enhancements.

Usability Enhancements

The following usability enhancements are featured:


v Improved tables for displaying list views. Users can now sort and filter table contents by right-clicking the table. They can also automatically resize a column by double-clicking its header.
v Message windows now display the messages directly. Users no longer have to click the Details button to read the message. For error messages, the error code is also displayed in the window title.
v Jobs can now be added to a job stream from the Timeline view as well as from the Graph view of the job stream editor.
v New jobs are now automatically positioned within the Graph view of the job stream editor. Users are no longer required to click the background of the view to open the job’s properties window.
v A new editor for job stream instances is featured. This editor is similar to the job stream editor for the database and enables users to see and work with all the job instances contained in a specified job stream. From it, users can modify the properties of a job stream instance and of its job instances, and the dependencies between the jobs. The job stream instance editor does not include the Timeline and Run cycle views. The job instance icons also display the current status of the job.

Graphical Enhancements

The following graphical enhancements are featured:
v Input fields have changed to conform to Tivoli Presentation Services norms. Mandatory fields have a yellow background. Fields containing input with syntax errors display a white cross on a red background.
v A new Hyperbolic view graphically displays all the dependencies of every single job in the current plan.

Non-modal Windows

The properties windows of scheduling objects are now non-modal. This means that you can have two or more properties windows open at the same time. This can be particularly useful if you need to define a new object that is in turn required for another object’s definition.

Common View

The Common view lets users list job and job stream instances in a single view, regardless of their scheduling engine, thus furthering the integration of workload scheduling on the mainframe and the distributed platforms. The Common view is displayed as an additional selection at the bottom of the tree view of the scheduling engines.

Tivoli Workload Scheduler for z/OS-specific Enhancements

The Job Scheduling Console now supports the following Tivoli Workload Scheduler for z/OS functions:
v Submit job streams. Users can select a specific job stream from the database and submit it directly to the current plan. They can choose to have the job stream run immediately upon submission or to be put on hold and, if necessary, edited on the fly before it is submitted. The start and deadline times can also be modified.
v A text editor to display and modify JCL. The editor provides import and export functions, so that users can store a JCL as a template and then reuse it for other JCLs. It also includes functions to copy, cut, and paste JCL text. The JCL editor displays information about the current JCL, such as the current row and column, the job name, the workstation name, and who last updated it.
v Read-only text editors to visualize:
– The logs produced by job instance runs.
– The operator instruction associated with a job instance.
v The possibility to restart a job after it has run. Users can now:
– Restart a job instance, choosing which step must be first, which must be last, and which steps must be included or excluded.
– Rerun a job instance, executing all of its steps.
– Clean the list of data sets used by the selected job instance.
– Display the list of data sets cleaned by a previous cleanup action.
v The possibility to rerun an entire job stream instance. This function opens a job stream instance editor with a reduced set of functions where users can select the job instance that will be the starting point of the rerun. When the starting point is selected, an impact list is displayed that shows all the job instances that could be affected by this action. For every job instance within the current job stream instance, users can perform a cleanup action and display its results.

Tivoli Workload Scheduler-specific Enhancements

The Job Scheduling Console now lets you set an old plan as an alternate plan so that all the lists at plan level refer to the selected file. This functionality was previously available with the legacy GUI.


Chapter 1. Overview of the Workload Scheduler Suite

The workload scheduler suite is the state-of-the-art production workload manager, designed to help you meet your present and future data processing challenges. Its scope encompasses your entire enterprise information system, including heterogeneous environments.

Pressures on today’s data processing (DP) environment are making it increasingly difficult to maintain the same level of services to customers. Many installations find that their batch window is shrinking. More critical jobs must be finished before the morning online work begins. Conversely, requirements for the integrated availability of online services during the traditional batch window put pressure on the resources available for processing the production workload. More and more, 7-days-a-week, 24-hours-a-day operation is not only a DP objective but a requirement.

Users and owners of DP services are also making more use of batch services than ever before. The batch workload tends to increase each year at a rate slightly below the increase in the online workload. Combine this with the increase in data usage by batch jobs, and the end result is a significant increase in the volume of work. Furthermore, there is a shortage of people with the required skills to operate and manage increasingly complex DP environments. The complex interrelationships between production activities—between manual and machine tasks—have become unmanageable without a workload management tool.

The workload scheduler suite simplifies systems management across heterogeneous environments by integrating systems management functions. There are three main components to the suite:
v Tivoli Workload Scheduler for z/OS: the scheduler in OS/390 and z/OS environments
v Tivoli Workload Scheduler: the scheduler in distributed environments
v Tivoli Job Scheduling Console: the common user interface for both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler

The State-of-the-art Solution

The suite provides leading-edge solutions to problems in production workload management. It can automate, plan, and control the processing of your enterprise’s entire production workload, not just the batch subset. The suite functions as an “automatic driver” for your production workload to maximize the throughput of work and optimize your resources, but it also allows you to intervene manually as required. When the suite interfaces with other system management products, it forms part of an integrated automation and systems management platform for your DP operation.


Comprehensive Workload Planning

The suite forms operating plans based on user descriptions of the operations department and its production workload. These plans provide the basis for your service level agreements and give you a picture of the production workload at any point in time. Good planning is the cornerstone of any successful management technique. Effective planning also helps you maximize return on your investments in information technology.

Centralized Systems Management

The suite automates, monitors, and controls the flow of work through your enterprise’s entire DP operation—on both local and remote systems. From a single point of control, the suite analyzes the status of the production work and drives the processing of the workload according to installation business policies. It supports a multiple-end-user environment, enabling distributed processing and control across sites and departments within your enterprise.

Systems Management Integration

Solutions to today’s systems management problems require an integration of application programs and processes. The suite offers you integration with the following:
v Agents for controlling the workload on non-z/OS platforms
v Other systems management applications and architecture environments

The suite interfaces directly with some of the z/OS products as well as with a number of other IBM products to provide a comprehensive, automated processing facility and an integrated approach for the control of complex production workloads.

NetView. The NetView program is the IBM platform for network management and automation. You can use the interface for Tivoli Workload Scheduler for z/OS with the NetView program to pass information about the work that is being processed. The suite lets you communicate with the NetView program in conjunction with the production workload processing. Tivoli Workload Scheduler for z/OS can also pass information to the NetView program for alert handling in response to situations that occur while processing the production workload. The NetView program can automatically trigger Tivoli Workload Scheduler for z/OS to perform actions in response to these situations using a variety of methods. Tivoli Workload Scheduler/NetView is a NetView application that gives network managers the ability to monitor and diagnose Tivoli Workload Scheduler networks from a NetView management node. It includes a set of submaps and symbols to view Tivoli Workload Scheduler networks topographically and determine the status of job scheduling activity and critical Tivoli Workload Scheduler processes on each workstation.

Workload Manager (WLM). WLM controls the amount of system resources available to each work unit in host environments. Tivoli Workload Scheduler for z/OS works in concert with WLM to detect critical jobs and move them to a higher-performance service class. In addition, with WLM, critical jobs receive more system resources and complete more quickly.

Resource Object Data Manager (RODM). RODM provides a central location for storing, retrieving, and managing the operational resource information needed for network and systems management. You can map a special resource to a RODM object. This lets you schedule the production workload considering actual resource availability, dynamically updated.

Tivoli Decision Support for OS/390 (Decision Support). Decision Support helps you effectively manage the performance of your system by collecting performance data in a DATABASE 2™ (DB2®) database and presenting the data in a variety of formats for use in systems management. Decision Support uses data from Tivoli Workload Scheduler for z/OS to produce summary and management reports about the production workload, covering both planned and actual results.

Report Management and Distribution System (RMDS). RMDS helps customers increase productivity and reduce the costs of printing by providing a means for storing and handling reports in a z/OS environment. When a dialog user requests to view a job log or to automatically rebuild the JCL for a step-level restart, Tivoli Workload Scheduler for z/OS interfaces with RMDS. This interface removes the requirement to duplicate job log information, saving both CPU cycles and direct access storage device (DASD) space.

Tivoli Service Desk for OS/390 (TSD/390). TSD/390 supports the administration of the systems management process of an enterprise’s hardware, software, and related resources. An interface with TSD/390 is provided for reporting problems detected while processing the production workload.

Resource Access Control Facility (RACF®). RACF is the IBM product for data security. You can use RACF as the primary tool to protect your Tivoli Workload Scheduler for z/OS services and data at the level required by your enterprise. With RACF 2.1, you can use a Tivoli Workload Scheduler for z/OS reserved resource class to protect your resources.

System Automation for OS/390 (SA/390). SA/390 initiates automation procedures that perform operator functions to manage OS/390 components, data sets, and subsystems. SA/390 includes an automation feature for Tivoli Workload Scheduler for z/OS.

Data Facility Hierarchical Storage Manager (DFHSM). Tivoli Workload Scheduler for z/OS catalog management functions invoke DFHSM to recall migrated data sets during data set cleanup for a failed or rerun job.

CICS® and IMS™ (Customer Information Control System and Information Management System). Tivoli Workload Scheduler for z/OS lets you schedule the starting and stopping of started tasks. Because Tivoli Workload Scheduler for z/OS tracks the status of started tasks, you can serialize work, such as backups of your transaction databases, according to the status of your CICS or IMS subsystems.

Tivoli Business Systems Manager. Tivoli Business Systems Manager provides monitoring and event management of resources, applications, and subsystems with the objective of providing continuous availability for the enterprise. Using Tivoli Business Systems Manager with the suite provides the ability to manage strategic applications from a unique business systems perspective. Tivoli Business Systems Manager monitors batch-related applications and operations represented by the suite and seamlessly integrates these objects with all other business objects monitored by Tivoli Business Systems Manager.

Tivoli Enterprise Console®. The Tivoli Enterprise Console is a powerful, rules-based event management application that integrates network, systems, database, and application management. It offers a centralized, global view of your computing enterprise while ensuring the high availability of your application and computing resources. Tivoli Enterprise Console acts as a central collection point for alarms and events from a variety of sources, including those from Tivoli applications. Tivoli Workload Scheduler runs a Tivoli Enterprise Console adapter that reads events from the Tivoli Workload Scheduler log file.

Besides these IBM products, there are many products from other software vendors that work with or process data from the suite.

Automation

By automating management of your production workload with the suite, you can minimize human errors in production workload processing and free your staff for more productive work. The suite lets you plan, drive, and control the processing of your production workload—important steps toward automation and unattended operations. Whether you are running one or more systems at a single site—or at several distributed sites—the suite helps you automate your production workload by:
v Coordinating all shifts and production work across installations of all sizes, from a single point of control
v Automating complex and repetitive operator tasks
v Dynamically modifying your production workload schedule in response to changes in the production environment (such as urgent jobs, changed priorities, or hardware failures) and then managing the workload accordingly
v Resolving workload dependencies
v Managing utilization of shared resources
v Tracking each unit of work
v Detecting unsuccessful processing
v Displaying status information and instructions to guide operations personnel in their work
v Interfacing with other key IBM products to provide an integrated automation platform

The suite lets you centralize and integrate control of your production workload and reduces the number of tasks that your staff need to perform.

Workload Monitoring

Besides providing a single point of control for the production workload across your systems, the suite:
v Monitors the production workload in real time, providing operations staff with the latest information on the status of the workload so that they can react quickly when problems occur.
v Provides security interfaces that ensure the protection of your services and data.
v Enables manual intervention in the processing of work.
v Reports the current status of your production workload processing.
v Provides reports that can serve as the basis for documenting your service level agreements with users. Your customers can see when and how their work is to be processed.

Automatic Workload Recovery

The suite enables production workload processing to continue even when system or connection failures occur. If one system fails, the suite can restart the processing on another system. When the controlling system is running in a z/OS system complex (sysplex), a hot standby function can automatically transfer control of the production workload to another system in the sysplex. Because the suite continues to manage the production workload during failures, you can maintain the integrity of your processing schedules and continue to service your customers.

In Tivoli Workload Scheduler, a switchmanager function makes it possible to replace a failing master domain manager or domain manager workstation with an appropriately configured backup fault-tolerant agent or domain manager.

Productivity

The suite represents real productivity gains by ensuring fast and accurate performance through automation. Many of today’s automation solutions quote unrealistic productivity benefits. Some of the tasks automated should never be performed, or certainly not as often as they are by automation. Because of this, it is difficult to correlate real productivity benefits to your enterprise. The tasks the suite performs not only have to be performed, but have to be performed correctly, every time, and as quickly as possible. Many of these tasks, traditionally performed by DP professionals, are tedious and as a result prone to error. With the suite, your DP staff can use their time more efficiently.

Business Solutions

The suite provides business solutions by:
v Driving production according to your business objectives
v Automating the production workload to enhance company productivity
v Providing you with information about current and future workloads
v Managing a high number of activities efficiently

User Productivity

Your DP staff and end users can realize significant productivity gains through the suite’s:
v Fast-path implementation
v Immediate response to dialog requests for workload status inquiries. Users are provided with detailed real-time information about production workload processing so that they can detect and promptly correct errors.
v Automation of operator tasks such as error recovery and data set cleanup
v Job Scheduling Console with its easy-to-use graphical user interface and sophisticated online help facilities

Growth Enabling

As you implement automation and control, you can manage greater production workload volumes. The suite enables growth within your DP operation by providing:
v Ways of absorbing the increasing batch workload without increasing operations personnel
v An open interface for submitting and tracking the workload on a variety of operating systems
v Interfaces with other systems management application programs
v An open interface for communicating with programs on other platforms
v Management of current and future production workload volumes
v Simulation facilities to forecast future workloads

Who Uses the Workload Scheduler Suite

In a typical enterprise, many people contribute to the implementation and operation of the suite:
v Scheduling manager
v Operations manager
v Shift supervisor
v Application programmer
v Console operators
v Workstation operators, such as print operators, job setup staff, and login receptionists
v End users
v Help desk

This section describes how the suite can directly benefit your DP staff.

Role of the Scheduling Manager—The Focal Point

The scheduler makes it possible for the scheduling manager to maintain current and future production processing across your enterprise. The suite benefits the scheduling manager in the following ways:
v Automatically scheduling all production workload activities.
v Automatically resolving the complexity of production workload dependencies and driving the work in the most efficient way.
v Supporting the simulation of future workloads on the system. The scheduling manager can evaluate, in advance, the effect of changes in production workload volumes or processing resources.
v Giving a real-time view of the status of work as it flows through the system so that the scheduling manager can quickly:
– Respond to customer queries about the status of their work
– Identify problems in the workload processing.
v Providing facilities for manual intervention.
v Managing many workload problems automatically. The production workload restart facilities, hot standby, automatic recovery of jobs and started tasks, and data set cleanup provide the scheduling manager with comprehensive error-management and disaster-management facilities.
v Providing a log of changes to the production workload data through the audit-trail facility. This assists the scheduling manager in resolving problems caused by user errors.
v Managing unplannable work.

Role of the Operations Manager

The reporting, planning, and control functions can help the operations manager to do the following:
v Improve the efficiency of the operation
v Improve control of service levels and quality
v Set service level agreements for end-user applications and for services provided
v Improve relationships with end-user departments
v Increase the return on your IT investment
v Develop staff potential

A Powerful Tool for the Shift Supervisor

The suite is important for the shift supervisor, especially in multisystem complexes, where local and remote systems are controlled from a central site. The suite can help the shift supervisor to do the following:
v Monitor and control the production workload through multisystem complexes
v Control the use of mountable devices
v Separate information about work status from system and other information
v Provide end users with status information directly
v Manage the workload if a system failure occurs
v Make changes to the current plan in response to unplanned events, such as equipment failures, personnel absences, and rush jobs

Role of the Application Programmer

The user-authority checking enables application development groups to use all the planning and control functions in parallel with—but in isolation from—production systems and services. The suite can be a valuable tool for application development staff when they are doing the following:
v Packaging new applications for regular running
v Testing new JCL in final packaged form
v Testing new applications and changes to existing ones
v Restarting or rerunning unsuccessful jobs

Console Operators

The suite can free console operators from these time-consuming tasks:
v Starting and stopping started tasks
v Preparing JCL before job submission
v Submitting jobs
v Verifying the sequence of work
v Reporting job status
v Performing data set cleanup in recovery and rerun situations
v Responding to workload failure
v Preparing the JCL for step-level restarts

Workstation Operators

The suite helps workstation operators do their work by providing the following:
v Complete and timely status information
v Up-to-date ready lists that prioritize the work flow
v Online assistance in operator instructions

End Users and the Help Desk

Your end users often need to be informed about the status of workload processing. They can use the Job Scheduling Console to check the status of the processing of their job streams themselves from a personal workstation. End users can make queries using the Job Scheduling Console without having to be familiar with the suite, ISPF, or TSO, and without having to be logged on to a local system.

The help desk can use the Job Scheduling Console in the same way to answer queries from end users about the progress of their workload processing.

Summary

The suite communicates with other key IBM products to provide a comprehensive, automated processing facility and an integrated solution for the control of all production workloads. Here are the benefits that the suite offers you:
v Increased automation, which increases efficiency and uses DP resources more effectively, resulting in improved service levels for your customers
v Improved systems management integration, providing a unified solution to your systems management problems
v More effective control of DP operations, which lets you implement change and manage growth more efficiently
v Increased availability, through automatic workload recovery
v Opportunities for growth, through your ability to manage greater workload volumes
v Investment protection, by building on your current investment in z/OS and allowing existing customers to build on their existing investment in workload management
v Improved customer satisfaction, resulting from higher levels of service and availability, fewer errors, and faster response to problems
v Greater productivity, because repetitive, error-prone tasks are automated and operations personnel can use their time more efficiently
v Integration of multiple operating environments, which provides a single controlling point for the cooperating systems that comprise your DP operation

The suite is more than just a batch scheduling tool—it is a production management system with the capability to schedule all the work running on any system.


Chapter 2. Tivoli Workload Scheduler

Tivoli Workload Scheduler’s scheduling features help you plan every phase of production. During the processing day, the Tivoli Workload Scheduler production control programs manage the production environment and automate most operator activities. Tivoli Workload Scheduler prepares jobs for execution, resolves interdependencies, and launches and tracks each job. Because jobs start running as soon as their dependencies are satisfied, idle time is minimized, and throughput improves significantly. Jobs never run out of sequence, and, if a job fails, Tivoli Workload Scheduler handles the recovery process with little or no operator intervention.

Overview

The next pages provide an outline of Tivoli Workload Scheduler.

What is Tivoli Workload Scheduler

Tivoli Workload Scheduler is composed of three parts:

Tivoli Workload Scheduler engine
The scheduling engine. It runs on every computer of a Tivoli Workload Scheduler network. Upon installation, the engine is configured for the role that the workstation will play within the scheduling network, such as master domain manager, domain manager, or agent.

Tivoli Workload Scheduler Connector
Maps Job Scheduling Console commands to the Tivoli Workload Scheduler engine. The Tivoli Workload Scheduler connector runs on the master and on any of the fault-tolerant agents (FTAs) that you will use as backup machines for the master workstation. The connector requires the Tivoli Management Framework configured for a Tivoli server or Tivoli managed node.

Job Scheduling (JS) Console
A Java™-based graphical user interface (GUI) for the Tivoli Workload Scheduler suite. The Job Scheduling Console runs on any machine from which you want to manage Tivoli Workload Scheduler plan and database objects. It provides, through the Tivoli Workload Scheduler connector, Conman and Composer functionality. The Job Scheduling Console does not need to be installed on the same machine as the Tivoli Workload Scheduler engine or connector. You can use the Job Scheduling Console from any machine as long as it has a TCP/IP link with the machine running the Tivoli Workload Scheduler connector. From the same Job Scheduling Console you can also manage Tivoli Workload Scheduler for z/OS plan and database objects, provided that you can log into a machine running the Tivoli Workload Scheduler for z/OS connector.

The Tivoli Workload Scheduler Network

A Tivoli Workload Scheduler network is made up of the workstations, or CPUs, on which jobs and job streams are run.


A Workload Scheduler network contains at least one Workload Scheduler domain, the master domain, in which the master domain manager is the management hub. Additional domains can be used to divide a widely distributed network into smaller, locally managed groups.

Figure 1. This Tivoli Workload Scheduler network is made up by two domains.

Using multiple domains reduces the amount of network traffic by reducing the communications between the master domain manager and other computers. In a single domain configuration, the master domain manager maintains communications with all of the workstations in the Workload Scheduler network. In a multi-domain configuration, the master domain manager communicates with the workstations in its domain and with the subordinate domain managers. The subordinate domain managers, in turn, communicate with the workstations in their domains and subordinate domain managers. Multiple domains also provide fault-tolerance by limiting the problems caused by losing a domain manager to a single domain. To limit the effects further, you can designate backup domain managers to take over if their domain managers fail. Before the start of each new day, the master domain manager creates a production control file, named Symphony. Tivoli Workload Scheduler is then restarted in the network, and the master domain manager sends a copy of the new production control file to each of its automatically linked agents and subordinate domain managers. The domain managers, in turn, send copies to their automatically linked agents and subordinate domain managers. Once the network is started, scheduling messages like job starts and completions are passed from the agents to their domain managers, through the parent domain managers to the master domain manager. The master domain manager then broadcasts the messages throughout the hierarchical tree to update the production control files of domain managers and fault tolerant agents running in Full Status mode.


Manager and Agent Types

Primarily, workstation definitions refer to physical workstations. However, in the case of extended and network agents, the workstations are logical definitions that must be hosted by a physical Tivoli Workload Scheduler workstation. Tivoli Workload Scheduler workstations can be of the following types:

Master domain manager (MDM)
The domain manager in the topmost domain of a Tivoli Workload Scheduler network. It contains the centralized database files used to document scheduling objects. It creates the production plan at the start of each day, and performs all logging and reporting for the network.

Backup master
A fault-tolerant agent or domain manager capable of assuming the responsibilities of the master domain manager for automatic workload recovery.

Domain manager
The management hub in a domain. All communications to and from the agents in a domain are routed through the domain manager.

Backup domain manager
A fault-tolerant agent capable of assuming the responsibilities of its domain manager.

Fault-tolerant agent (FTA)
A workstation capable of resolving local dependencies and launching its jobs in the absence of a domain manager.

Standard agent
A workstation that launches jobs only under the direction of its domain manager.

Extended agent
A logical workstation definition that enables you to launch and control jobs on other systems and applications, such as Peoplesoft, Oracle Applications, SAP, and MVS™ JES2 and JES3.

Network agent
A logical workstation definition for creating dependencies between jobs and job streams in separate Tivoli Workload Scheduler networks.

Job Scheduling Console client
Any workstation running the graphical user interface from which schedulers and operators can manage Tivoli Workload Scheduler plan and database objects.

The following table summarizes which Tivoli Workload Scheduler component goes into what type of workstation:

Workstation type | Engine | Connector | Job Scheduling Console
Master Domain Manager | Yes | Yes | Optional
Backup Master | Yes | Yes | Optional
Domain Manager | Yes | Optional | Optional
Backup Domain Manager | Yes | Optional | Optional
Fault-tolerant Agent | Yes | Optional | Optional
Standard Agent | Yes | No | Optional
Extended Agent | Not Applicable | Not Applicable | Not Applicable
Network Agent | Not Applicable | Not Applicable | Not Applicable
Job Scheduling Console Client | No | No | Yes
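To make the workstation and domain types more concrete, the following sketch shows roughly how a second-level domain, its domain manager, and a fault-tolerant agent could be described as composer definitions. The domain, workstation, and node names (REGION1, DM1, FTA1, and the host names) and the port number are invented for illustration, and the exact keywords and their defaults should be checked against the Tivoli Workload Scheduler Reference Guide for your release; this is an outline, not a definitive definition.

DOMAIN REGION1
  DESCRIPTION "Second-level domain for a regional data center"
  PARENT MASTERDM
END

CPUNAME DM1
  DESCRIPTION "Domain manager for REGION1"
  OS UNIX
  NODE dm1.example.com
  TCPADDR 31111
  DOMAIN REGION1
  FOR MAESTRO
    TYPE MANAGER
    AUTOLINK ON
    FULLSTATUS ON
    RESOLVEDEP ON
  END

CPUNAME FTA1
  DESCRIPTION "Fault-tolerant agent in REGION1"
  OS WNT
  NODE fta1.example.com
  TCPADDR 31111
  DOMAIN REGION1
  FOR MAESTRO
    TYPE FTA
    AUTOLINK ON
    FULLSTATUS OFF
  END

With definitions along these lines, the engine on DM1 acts as the management hub for REGION1, while FTA1 can still resolve its local dependencies and launch jobs if the link to DM1 is lost.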

Topology

A key to choosing how to set up Tivoli Workload Scheduler domains for an enterprise is the concept of localized processing. The idea is to separate or localize the enterprise’s scheduling needs based on a common set of characteristics. Common characteristics are things such as geographical locations, business functions, and application groupings. Grouping related processing can limit the amount of interdependency information that needs to be communicated between domains. The benefits of localizing processing in domains are:
v Decreased network traffic. Keeping processing localized to domains eliminates the need for frequent interdomain communications.
v Simplified security and administration. Security and administration can be defined at, and limited to, the domain level. Instead of network-wide or workstation-specific administration, you can have domain administration.
v Optimized network and workstation fault tolerance. In a multiple-domain Tivoli Workload Scheduler network, you can define backups for each domain manager, so that problems in one domain do not disrupt operations in other domains.

Networking The following questions will help in making decisions about how to set up your enterprise’s Tivoli Workload Scheduler network. Some questions involve aspects of your network, and others involve the applications controlled by Tivoli Workload Scheduler. You may need to consult with other people in your organization to resolve some issues. v How large is your Tivoli Workload Scheduler network? How many computers does it hold? How many applications and jobs does it run? The size of your network will help you decide whether to use a single domain or the multiple domain architecture. If you have a small number of computers, or a small number of applications to control with Tivoli Workload Scheduler, there may not be a need for multiple domains. v How many geographic locations will be covered in your Tivoli Workload Scheduler network? How reliable and efficient is the communication between locations? This is one of the primary reasons for choosing a multiple domain architecture. One domain for each geographical location is a common configuration. If you choose single domain architecture, you will be more reliant on the network to maintain continuous processing. v Do you need centralized or decentralized management of Tivoli Workload Scheduler? A Tivoli Workload Scheduler network, with either a single domain or multiple domains, gives you the ability to manage Tivoli Workload Scheduler from a single node, the master domain manager. If you want to manage multiple locations separately, you can consider the installation of a separate Tivoli

Workload Scheduler network at each location. Note that some degree of decentralized management is possible in a stand-alone Tivoli Workload Scheduler network by mounting or sharing file systems. v Do you have multiple physical or logical entities at a single site? Are there different buildings, and several floors in each building? Are there different departments or business functions? Are there different applications? These may be reasons for choosing a multi-domain configuration. For example, a domain for each building, department, business function, or each application (manufacturing, financial, engineering, etc.). v Do you run applications, like SAP R/3, that will operate with Tivoli Workload Scheduler? If they are discrete and separate from other applications, you may choose to put them in a separate Tivoli Workload Scheduler domain. v Would you like your Tivoli Workload Scheduler domains to mirror your Windows NT domains? This is not required, but may be useful. v Do you want to isolate or differentiate a set of systems based on performance or other criteria? This may provide another reason to define multiple Tivoli Workload Scheduler domains to localize systems based on performance or platform type. v How much network traffic do you have now? If your network traffic is manageable, the need for multiple domains is less important. v Do your job dependencies cross system boundaries, geographical boundaries, or application boundaries? For example, does the start of Job1 on CPU3 depend on the completion of Job2 running on CPU4? The degree of interdependence between jobs is an important consideration when laying out your Tivoli Workload Scheduler network. If you use multiple domains, you should try to keep interdependent objects in the same domain. This will decrease network traffic and take better advantage of the domain architecture. v What level of fault-tolerance do you require? An obvious disadvantage of the single domain configuration is the reliance on a single domain manager. In a multi-domain network, the loss of a single domain manager affects only the agents in its domain.

Tivoli Workload Scheduler Components Tivoli Workload Scheduler uses several manager processes to efficiently segregate and manage networking, dependency resolution, and job launching. These processes communicate among themselves through the use of message queues. Message queues are also used by the Console Manager to integrate operator commands into the batch process. On any computer running Tivoli Workload Scheduler there are a series of active management processes. They are started as a system service, or by the StartUp command, or manually from the Job Scheduling Console. The following are the main processes: Netman The network management process that establishes network connections between remote Mailman processes and local Writer processes.

Mailman The mail management process that sends and receives inter-CPU messages. Batchman The production control process. Working from Symphony, the production control file, it runs job streams, resolves dependencies, and directs Jobman to launch jobs. Writer The network writer process that passes incoming messages to the local Mailman process. Jobman The job management process that launches and tracks jobs under the direction of Batchman. Conman The console manager. It is the user's interface to daily production activities by means of the command line interface or of the Job Scheduling Console. Conman writes messages that are then received by the local Netman or Mailman processes.
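
As a brief sketch, the processes are typically brought up and down from the command line as follows; the commands assume you are logged in as the Tivoli Workload Scheduler user in the product's home directory.

   ./StartUp          # start Netman, the network listener
   conman start       # start Batchman, Jobman, and the other production processes
   conman "sj @#@.@"  # list job instances to confirm that production is being tracked
   conman stop        # stop the local production processes (Netman keeps running)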

Tivoli Workload Scheduler Scheduling Objects Scheduling with Tivoli Workload Scheduler includes the capability to do the following: v Schedule jobs across a network. v Group jobs into job streams according, for example, to function or application. v Set limits on the number of jobs that can run concurrently. v Create job streams based on day of the week, on specified dates and times, or by customized calendars. v Ensure correct processing order by identifying dependencies such as successful completion of previous jobs, availability of resources, or existence of required files. v Set automatic recovery procedures for unsuccessful jobs. v Forward incomplete jobs to the next production day. Tivoli Workload Scheduler administrators and operators work with these objects for their scheduling activity: Workstation Also referred to as CPU. Usually an individual computer on which jobs and job streams are run. Workstations are defined in the Tivoli Workload Scheduler database as unique objects. A workstation definition is required for every computer that executes jobs or job streams in the Workload Scheduler network. Workstation class A group of workstations. Any number of workstations can be placed in a class. Job streams and jobs can be assigned to execute on a workstation class. This makes replication of a job or job stream across many workstations easy. Job A script or command that is run on the user's behalf and is controlled by Tivoli Workload Scheduler. Job stream Also referred to as schedule. A mechanism for grouping jobs by function or application on a particular day and time. A job stream definition includes a launch time, priorities, dependencies, and job names. Calendar An object defined in the Tivoli Workload Scheduler database that contains

a list of scheduling dates. Each calendar can be assigned to multiple job streams. Assigning a calendar to a job stream causes that job stream to run on the days specified in the calendar. A calendar can be used as an inclusionary or exclusionary run cycle. Run cycle A cycle that specifies the days that a job stream is scheduled to run. Run cycles are defined as part of job streams and may include calendars that were previously defined. There are three types of run cycles: a Simple run cycle, a Weekly run cycle, or a Calendar run cycle (commonly called a calendar). Each type of run cycle can be inclusionary or exclusionary. That is, each run cycle can define the days when a job stream is included in the production cycle, or when the job stream is excluded from the production cycle. Prompt An object that can be used as a dependency for jobs and job streams. A Prompt must be answered affirmatively for the dependent job or job stream to launch. There are two types of prompts: predefined and ad hoc. An ad hoc prompt is defined within the properties of a job or job stream and is unique to that job or job stream. A predefined prompt is defined in the Tivoli Workload Scheduler database and can be used by any job or job stream. Resource An object representing either physical or logical resources on your system. Once defined in the Tivoli Workload Scheduler database, resources can be used as dependencies for jobs and job streams. For example, you can define a resource named tapes with a unit value of two. Then, define jobs that require two available tape drives as a dependency. Jobs with this dependency cannot run concurrently because each time a job is run the "tapes" resource is in use. Parameter An object used to substitute values into your jobs and job streams. When a parameter is used in a job script, the value is substituted at run time. In this case, the parameter must be defined on the workstation where it will be used. Parameters cannot be used when scripting extended agent jobs. Dependency A condition that must be met in order to launch a job or job stream. User For Windows NT only, the user name specified in a job definition's "Logon" field must have a matching user definition. The definitions furnish the user passwords required by Tivoli Workload Scheduler to launch jobs.
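
As a rough illustration, several of these objects are documented in the database with composer-style definitions such as the following; the names, dates, and values are invented for the example.

   $CALENDAR
   HOLIDAYS "Corporate holidays"
    01/01/2004 07/04/2004 12/25/2004

   $PROMPT
   OKBACKUP "Has the nightly backup completed successfully?"

   $RESOURCE
   MASTER1#TAPES 2 "Available tape drives"

   $PARM
   MASTER1#BASEDIR "/data/prod"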

The Production Process Tivoli Workload Scheduler runs in daily run cycles called a production day. The production day is a 24-hour period, but it does not have to conform to the actual calendar day. It may be offset. For example, the production day by default runs from 6:00 a.m. to 5:59 a.m. the next day. At the start of each production day, Tivoli Workload Scheduler executes a program that selects the job streams that are to run on that day from the databases found on the master domain manager. Then another program includes the uncompleted schedules from the previous production day into the current day's production and logs all the previous day's statistics into an archive. All of the required information for that production day is placed into a production control database named Symphony. During the production day, the production
control database is continually being updated to reflect the work that needs to be done, the work in progress, and the work that has been completed. A copy of the Symphony file is sent to all subordinate domain managers and to all the fault-tolerant agents in the same domain. The subordinate domain managers distribute their copy to all the fault-tolerant agents in their domain and to all the domain managers that are subordinate to them, and so on down the line. This enables fault-tolerant agents throughout the network to continue processing even if the network connection to their domain manager is down. From the Job Scheduling Console or the command line interface, the operator can view and make changes in the day’s production by making changes in the Symphony file. Tivoli Workload Scheduler processes monitor the production control database and make calls to the operating system to launch jobs as required. The operating system runs the job, and in return informs Tivoli Workload Scheduler whether the job completed successfully or not. This information is entered into the production control database to indicate the status of the job.

Scheduling Scheduling can be accomplished either through the Tivoli Workload Scheduler command line interface or the Tivoli Job Scheduling Console. Scheduling includes the following tasks: v Defining and maintaining workstations. v Defining scheduling objects. v Defining job streams. v Starting and stopping production processing. v Viewing and modifying jobs and job streams.

Defining Scheduling Objects Scheduling objects are workstations, workstation classes, domains, jobs, job streams, resources, prompts, calendars, and parameters. Scheduling objects are managed with the Composer program and are stored in Workload Scheduler’s database. To create or modify an object, you can use either the Tivoli Workload Scheduler command line interface or the Job Scheduling Console.
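
From the command line, Composer is driven with commands like the following sketch; the file name and object selections are illustrative.

   composer add scheddefs.txt               # load object definitions from a text file
   composer "display cpu=@"                 # display all workstation definitions
   composer "display jobs=MASTER1#@"        # display the job definitions for one workstation
   composer "delete parms=MASTER1#BASEDIR"  # delete a parameter definition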

Creating Job Streams Tivoli Workload Scheduler's primary processing task is running job streams. A job stream is an outline of batch processing consisting of a list of jobs. Although job streams can be defined from the product's command line interface, using the Job Stream Editor of the Job Scheduling Console is the recommended way to create and modify job streams. The Job Stream Editor is for working with the jobs, the follows dependencies between them, and the run cycles of the job stream. The job stream properties window is for specifying time restrictions, resource dependencies, file dependencies, and prompt dependencies at the job stream level.
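
For comparison, the same kind of job stream expressed in the command-line definition syntax looks roughly like the following sketch; the workstation, job names, time, and the TAPES resource are illustrative.

   SCHEDULE MASTER1#PAYROLL
    ON FR
    AT 2000
    CARRYFORWARD
    :
    MASTER1#PAYEXTRACT
    MASTER1#PAYCALC   FOLLOWS PAYEXTRACT
    MASTER1#PAYREPORT FOLLOWS PAYCALC NEEDS 2 TAPES
   END

The ON clause gives the run cycle, AT the launch time, FOLLOWS and NEEDS the dependencies, and CARRYFORWARD lets an uncompleted instance be carried forward to the next production day.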

Setting Job Recovery When defining a job, you must take into account the possibility that the job may not complete successfully. The administrator can define a recovery option and recovery actions when defining the job. One of the following recovery options is possible: v Not continuing with the next job. This stops the execution of the job stream and puts it in the stuck state. This is the default action.

v Continuing with the next job. v Running the job again. Optionally, a recovery prompt can be associated with the job. A recovery prompt is a local prompt to display when the job completes unsuccessfully. Processing does not continue until the prompt is answered affirmatively. Another option is to define a recovery job that can be run in place of the original job if it completes unsuccessfully. The recovery job must have been defined previously. Processing stops if the recovery job also completes unsuccessfully.
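
In the command-line job definition syntax the recovery option is set on the job itself. The following sketch, with invented job names, script path, and logon user, defines a job that is rerun after a previously defined recovery job:

   $JOBS
   MASTER1#PAYCALC
    SCRIPTNAME "/prod/scripts/paycalc.sh"
    STREAMLOGON payuser
    DESCRIPTION "Nightly payroll calculation"
    RECOVERY RERUN AFTER MASTER1#PAYFIX

RECOVERY STOP and RECOVERY CONTINUE correspond to the first two options above; a recovery prompt can also be attached so that processing waits for an affirmative reply.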

Running Production Production consists of taking the definitions of the scheduling objects from the database, their time constraints, and their dependencies, and building and running the production control file.

Start-of-day Processing The processing day of Tivoli Workload Scheduler begins at the time defined by the global option start, which is set by default to 6:00 a.m. To turn over to a new day, pre-production setup is performed for the upcoming day, and post-production logging and reporting is performed for the day just ended. Pre- and post-production processing can be fully automated by adding the Tivoli-supplied final job stream, or a user-supplied equivalent, to the Tivoli Workload Scheduler database along with other job streams. The final job stream is placed in production every day, and results in running a job named Jnextday prior to the start of a new day. The job performs the following tasks: 1. Selects job streams for the new day's production plan. 2. Compiles the production plan. 3. Prints pre-production reports. 4. Stops Tivoli Workload Scheduler. 5. Carries forward uncompleted job streams, logs the old production plan, and installs the new plan. 6. Starts Tivoli Workload Scheduler for the new day. 7. Prints post-production reports for the previous day. 8. Logs job statistics for the previous day. These steps are run on the master workstation.
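
The supplied final job stream is essentially a schedule like the following sketch; the workstation name is illustrative, and the time follows the shipped sample but can be adjusted to your own start-of-day.

   SCHEDULE MASTER1#FINAL
    ON everyday
    AT 0559
    CARRYFORWARD
    :
    MASTER1#JNEXTDAY
   END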

Running Job Streams Depending on their run cycle definition, job streams are taken from the Tivoli Workload Scheduler database and automatically inserted in the production plan of the day. While the job stream is in the plan, and as long as it has not completed, it can still be modified in any of its components. That is, you can modify the job stream properties, the properties of its jobs, their sequence, and the workstations or resources they use, so that you can handle last-minute contingencies. The best
way to do this is by means of the job stream instance editor of the Job Scheduling Console, where the term instance refers to a scheduling object that has been included in the current plan. You can also hold, release, or cancel a job stream, as well as change the maximum number of jobs within the job stream that can run concurrently. You can change the priority previously assigned to the job stream and release the job stream from all its dependencies. Last-minute changes to the current production plan include the possibility to submit jobs and job streams that are already defined in the Tivoli Workload Scheduler database but were not included in the plan. You can also submit jobs that are defined ad hoc. These jobs are submitted to the current plan but are not stored in the database.
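
From the conman prompt, such last-minute submissions look roughly like the following sketch, where sbs submits a job stream from the database, sbj a single database job, and sbd an ad hoc command (here given an invented alias):

   %sbs MASTER1#PAYROLL
   %sbj MASTER1#ADHOCLOAD
   %sbd "rm -rf /tmp/staging";alias=CLEANUP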

Monitoring Monitoring is done by listing plan objects in the Job Scheduling Console. Using lists, you can see the status of all or of subsets of the following objects in the current plan: v Job stream instances. v Job instances. v Domains. v Workstations. v Resources. v File dependencies, where a file dependency is when a job or job stream needs to verify the existence of one or more files before it can begin execution. v Prompt dependencies, where a prompt dependency is when a job or job stream needs to wait for an affirmative response to a prompt before it can begin execution. You can also use these lists to manage some of these objects. You can, for instance, reallocate resources, link or unlink workstations, kill jobs, or switch to a different domain manager. Additionally, you can monitor the daily plan with Tivoli Business Systems Manager, an object-oriented systems management application, integrated with version 8.1 of Tivoli Workload Scheduler, that provides monitoring and event management of resources, applications, and subsystems. Network managers can use Tivoli Workload Scheduler/NetView, a NetView application, to monitor and diagnose Tivoli Workload Scheduler networks from a NetView management node. It includes a set of submaps and symbols to view Tivoli Workload Scheduler networks topographically, and determine the status of the job scheduling activity and critical Tivoli Workload Scheduler processes on each workstation. Menu actions are provided to start and stop Tivoli Workload Scheduler processing, and to run conman on any workstation in the network.
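
The same information is available from the conman command line; as a sketch, ss lists job stream instances, sj job instances, sc workstations and their link status, sr resources, and sp prompts (the selections shown simply mean "everything"):

   %ss @#@
   %sj @#@.@
   %sc @!@
   %sr @#@
   %sp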

Reporting As part of the pre-production and post-production processes, reports are generated which show summary or detail information about the previous or next production day. These reports can also be generated ad-hoc. The available reports are: v Job details listing v Prompt listing

v Calendar listing v Parameter listing v Resource listing v Job History listing v Job histogram v Planned production schedule v Planned production summary v Planned production detail v Actual production summary v Actual production detail v Cross reference report

In addition, during production, a standard list file (STDLIST) is created for each job instance launched by Tivoli Workload Scheduler. Standard list files contain header and trailer banners, echoed commands, and errors and warnings. These files can be used to troubleshoot problems in job execution.

Auditing An auditing option helps track changes to the database and the plan. For the database, all user modifications are logged, although the delta (the actual content) of each modification is not recorded. If an object is opened and saved, the action is logged even if no modification has been made. For the plan, all user modifications to the plan are logged. Actions are logged whether they are successful or not. Audit files are logged to a flat text file on individual machines in the Tivoli Workload Scheduler network. This minimizes the risk of audit failure due to network issues and allows a straightforward approach to writing the log. The log formats are the same for both plan and database in a general sense. The logs consist of a header portion that is the same for all records, an "action ID", and a section of data that varies according to the action type. All data is kept in clear text and formatted to be readable and editable from a text editor such as vi or notepad.

Options and Security The Tivoli Workload Scheduler options files determine how Tivoli Workload Scheduler runs on your system. Several performance, tuning, security, logging, and other configuration options are available.

Setting Global and Local Options Global options are defined on the master domain manager and apply to all workstations in the Tivoli Workload Scheduler network. Global options are entered in the globalopts file with a text editor. Changes can be made at any time, but they do not take effect until Tivoli Workload Scheduler is stopped and restarted. Global options are used to: v Set the name of the master domain manager. v Determine if object names can be up to sixteen characters long. v Determine whether or not uncompleted job streams will be carried forward from the old to the new production control file.
v Define the start time of the Tivoli Workload Scheduler processing day. Local options are entered with a text editor into a file named localopts, which resides in the Tivoli Workload Scheduler user's home directory. The local options are defined on each workstation and apply only to that workstation. Local options are used to: v Specify the name of the local workstation. v Prevent the launching of jobs executed by root in UNIX. v Prevent unknown clients from connecting to the system. v Specify a number of performance options. v Specify a number of logging preferences.
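
As a sketch, both files contain simple keyword assignments like the following; the values are illustrative and only a few of the available options are shown.

   # globalopts (on the master domain manager)
   master            =MASTER1
   start             =0600
   carryforward      =yes
   expanded version  =yes

   # localopts (on each workstation)
   thiscpu           =FTA001
   merge stdlists    =yes
   nm port           =31111
   jm no root        =yes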

Setting Security Security is accomplished with the use of a security file that contains one or more user definitions. Each user definition identifies a set of users, the objects they are permitted to access, and the types of actions they can perform. A template file is installed with the product. The template must be edited to create the user definitions, then compiled and installed with a utility program to create a new operational security file. After it is installed, further modifications can be made by creating an editable copy with another utility. Each workstation in a Tivoli Workload Scheduler network has its own security file. An individual file can be maintained on each workstation, or a single security file can be created on the master domain manager and copied to each domain manager, fault-tolerant agent, and standard agent.
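
The user definitions in the security file have the general shape of the following sketch; the user names, logons, and the particular access keywords are illustrative.

   USER SCHEDADMIN
    CPU=@+LOGON=maestro,root
   BEGIN
    JOB       CPU=@  ACCESS=@
    SCHEDULE  CPU=@  ACCESS=@
    RESOURCE  CPU=@  ACCESS=@
    PROMPT           ACCESS=@
    CALENDAR         ACCESS=@
   END

   USER OPERATORS
    CPU=@+LOGON=operator
   BEGIN
    JOB       CPU=@  ACCESS=DISPLAY,RERUN,KILL,CONFIRM
    SCHEDULE  CPU=@  ACCESS=DISPLAY,RELEASE,CANCEL
   END

A typical cycle is to create an editable copy with the dumpsec utility, edit it, and then compile and install the new operational file with makesec.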

Using Time Zones Tivoli Workload Scheduler supports time zones. Enabling time zones provides the ability to manage one's workload on a global level. Time-zone implementation also allows for easy scheduling across multiple time zones and for jobs that need to run in the "dead zone." The dead zone is the gap between the Tivoli Workload Scheduler start of day time on the master and the time on the fault-tolerant agent in another time zone. For example, if an eastern master with a Tivoli Workload Scheduler start of day of 6 a.m. initializes a western agent with a 3-hour time-zone difference, the dead zone for this agent is between the hours of 3 a.m. and 6 a.m. Previously, special handling was required to run jobs in this time period. Now, when a time zone is specified with the start time on a job or job stream, Tivoli Workload Scheduler runs it as expected. Once enabled, time zones can be specified in the Job Scheduling Console or composer for start and deadline times within jobs and job streams.

Chapter 3. Tivoli Workload Scheduler for z/OS Tivoli Workload Scheduler for z/OS expands the scope for automating your data processing (DP) operations. It plans and automatically schedules the production workload. From a single point of control, it drives and controls the workload processing at both local and remote sites. By using Tivoli Workload Scheduler for z/OS to increase automation, you use your DP resources more efficiently, have more control over your DP assets, and manage your production workload processing better.

How Your Production Workload Is Managed How does Tivoli Workload Scheduler for z/OS give you all this? This section describes functions that make your information systems (IS) operations run more efficiently. But first, here is a brief introduction to the structure of the product and some concepts.

Structure Tivoli Workload Scheduler for z/OS consists of a base product, the agent, and a number of features. Every z/OS system in your complex requires the base product. One z/OS system in your complex is designated the controlling system and runs the engine feature. Only one engine feature is required, even when you want to start standby engines on other z/OS systems in a sysplex. Tivoli Workload Scheduler for z/OS with Tivoli Workload Scheduler addresses your production workload in the distributed environment. You can schedule, control, and monitor jobs in Tivoli Workload Scheduler from Tivoli Workload Scheduler for z/OS. For example, in the current plan, you can specify jobs to run on workstations in Tivoli Workload Scheduler. The workload on other operating environments can also be controlled with the open interfaces provided with Tivoli Workload Scheduler for z/OS. Sample programs using TCP/IP or an NJE/RSCS (network job entry/remote spooling communication subsystem) combination show you how you can control the workload on environments that at present have no scheduling feature. Additionally, national language features let you see the dialogs and messages in the language of your choice. These languages are currently available: v English v German v Japanese v Spanish Panel and message text can also be modified to include enterprise-specific instructions or help.

Concepts In managing production workloads, Tivoli Workload Scheduler for z/OS builds on several important concepts. Plans. Tivoli Workload Scheduler for z/OS constructs operating plans based on user-supplied descriptions of the DP operations department and its production

workload. These plans provide the basis for your service level agreements and give you a picture of the status of the production workload at any point in time. You can simulate the effects of changes to your production workload, calendar, and installation by generating trial plans. Job streams. A job stream is a description of a unit of production work. It can include the following: v A list of the jobs (related tasks) associated with that unit of work, such as: – Data entry – Job preparation – Job submission or started-task initiation – Communication with the NetView program – File transfer to other operating environments – Printing of output – Postprocessing activities, such as quality control or dispatch – Other tasks related to the unit of work that you want to schedule, control, and track v A description of dependencies between jobs within a job stream and between jobs in other job streams v Information about resource requirements, such as exclusive use of a data set v Special operator instructions that are associated with a job v How and where each job should be processed v Run policies for that unit of work; that is, when it should be scheduled or alternatively the name of a group definition that records the run policy Tivoli Workload Scheduler for z/OS schedules work based on the information you provide in your job stream descriptions. Workstations. When scheduling and processing work, Tivoli Workload Scheduler for z/OS considers the processing requirements of each job. Some typical processing considerations are: v What human or machine resources are required for processing the work—for example, operators, processors, or printers? v When are these resources available? v How will these jobs be tracked? v Can this work be processed somewhere else if the resources become unavailable? Tivoli Workload Scheduler for z/OS supports a range of work process types, called workstations, that map the processing needs of any task in your production workload. Each workstation supports one type of activity. This gives you the flexibility to schedule, monitor, and control any type of DP activity, including the following: v Job setup—both manual and automatic v Job submission v Started-task actions v Communication with the NetView program v Print jobs v Manual preprocessing or postprocessing activity You can plan for maintenance windows in your hardware and software environments. Tivoli Workload Scheduler for z/OS enables you to perform a controlled and incident-free shutdown of the environment, preventing last-minute

cancellation of active tasks. You can choose to reroute the workload automatically during any outage, planned or unplanned. Tivoli Workload Scheduler for z/OS tracks jobs as they are processed at workstations and dynamically updates the plan with real-time information on the status of jobs. You can view or modify this status information online using the workstation ready lists in the dialog. Dependencies. In general, every DP-related activity must occur in a specific order. Activities performed out of order will, at the very least, create invalid output; in the worst case your corporate data will be corrupted. In any case, the result is costly reruns, missed deadlines, and unsatisfied customers. You can define dependencies for jobs when a specific processing order is required. When Tivoli Workload Scheduler for z/OS manages the dependent relationships for you, the jobs are always started in the correct order every time they are scheduled. A dependency is called internal when it is between two jobs in the same job stream, and external when it is between two jobs in different job streams. You can work with job dependencies graphically from Tivoli Job Scheduling Console, as illustrated in Figure 2.

Figure 2. Graphic Display of Dependencies between Jobs

Tivoli Workload Scheduler for z/OS lets you serialize work based on the status of any DP resource. A typical example is a job that uses a data set as input, but must not start until the data set is successfully created and loaded with valid data. You can use resource serialization support to send availability information about a DP resource to Tivoli Workload Scheduler for z/OS. Special resources. Special resources are typically defined to represent physical or logical objects used by jobs. A special resource can be used to serialize access to a

data set or to limit the number of file transfers on a particular network link. The resource does not have to represent a physical object in your configuration, although it often does. Tivoli Workload Scheduler for z/OS keeps a record of the state of each resource and its current allocation status. You can choose to hold resources in case a job allocating the resources ends abnormally. You can also use the Tivoli Workload Scheduler for z/OS interface with the Resource Object Data Manager (RODM) to schedule jobs according to real resource availability. You can subscribe to RODM updates in both local and remote domains. Tivoli Workload Scheduler for z/OS lets you subscribe to data set activity on z/OS systems. The data set triggering function of Tivoli Workload Scheduler for z/OS automatically updates special resource availability when a data set is closed. You can use this notification to coordinate planned activities or to add unplanned work to the schedule. Calendars. Tivoli Workload Scheduler for z/OS uses information about when the user departments work and when they are free, so that job streams are not scheduled to run on days when processing resources are not available (for example, Sundays and holidays). This information is stored in a calendar. Tivoli Workload Scheduler for z/OS supports multiple calendars for enterprises where different departments have different work days and free days. Different groups within a business operate according to different calendars. The multiple calendar function is critical if your enterprise has installations in more than one geographical location (for example, with different local or national holidays). Business processing cycles. Tivoli Workload Scheduler for z/OS uses business processing cycles, or periods, to calculate when your job streams should be run—for example, weekly, or every 10th working day. Periods are based on the business cycles of your customers. Tivoli Workload Scheduler for z/OS supports a range of periods for processing the different job streams in your production workload. When you define a job stream, you specify when it should be planned using a run cycle, which can be: v A rule with a format such as:
ONLY the SECOND TUESDAY of every MONTH
EVERY FRIDAY in the user-defined period SEMESTER1

where the words in capitals are selected from lists of ordinal numbers, names of days, and common calendar intervals or period names, respectively. v A combination of period and offset. For example, an offset of 10 in a monthly period specifies the tenth day of each month.

Plans in Tivoli Workload Scheduler for z/OS Tivoli Workload Scheduler for z/OS plans your production workload schedule. It produces both high-level and detailed plans. Not only do these plans drive the production workload, but they can also show you the status of the production workload on your system at any specified time. You can produce trial plans to forecast future workloads.

Long-term Planning The long-term plan is a high-level schedule of your anticipated production workload. It lists, by day, the instances of job streams to be run during the period of the plan. Each instance of a job stream is called an occurrence. The long-term plan shows when occurrences are to run, as well as the dependencies that exist between the job streams. You can view these dependencies graphically on your terminal as a network, to check that work has been defined correctly. The plan can assist you in forecasting and planning for heavy processing days. The long-term-planning function can also produce histograms showing planned resource use for individual workstations during the plan period. You can use the long-term plan as the basis for documenting your service level agreements. It lets you relate service level agreements directly to your production workload schedules so that your customers can see when and how their work is to be processed. The long-term plan provides a window to the future. How far into the future is up to you: from one day to four years. You can also produce long-term plan simulation reports for any future date. Tivoli Workload Scheduler for z/OS can automatically extend the long-term plan at regular intervals. You can print the long-term plan as a report, or you can view, alter, and extend it online using the dialogs.

Detailed Planning The current plan is the heart of Tivoli Workload Scheduler for z/OS processing: in fact, it drives the production workload automatically and provides a way to check its status. The current plan is produced by a run of batch jobs that extract from the long-term plan the occurrences that fall within the specified period of time, together with the job details. The current plan thus selects a window from the long-term plan and makes the jobs ready to be run: they are actually started according to the defined restrictions (for example, dependencies, resource availability, or time-dependent jobs). The current plan is a rolling plan that can cover several days. A common method is to cover 1–2 days with regular extensions each shift. Production workload processing activities are listed by minute. You can print the current plan as a report, or view, alter, and extend it online by using the dialogs.

Automatically Controlling the Production Workload Tivoli Workload Scheduler for z/OS automatically drives the production workload by monitoring the flow of work and by directing the processing of jobs so that it follows the business priorities established in the plan. Through its interface to the NetView program or its management-by-exception ISPF dialog, Tivoli Workload Scheduler for z/OS can alert the production control specialist to problems in the production workload processing. Furthermore, the NetView program can automatically trigger Tivoli Workload Scheduler for z/OS to perform corrective actions in response to these problems. Tivoli Workload Scheduler for z/OS automatically: v Starts and stops started tasks v Edits job statements (z/OS JCL, or equivalent job statements for other operating environments) before submission
v Submits jobs in the specified sequence to the target operating environment—every time v Tracks each scheduled job in the plan v Determines the success or failure of the jobs v Displays status information and instructions to guide workstation operators v Provides automatic recovery of jobs when they end in error, regardless of the operating environment v Generates processing dates for your job stream run cycles using rules, such as: – Every second Tuesday of the month – Only the last Saturday in June, July, and August – Every third workday in the user-defined PAYROLL period v Starts jobs with regard to real resource availability v Performs data set cleanup in error and rerun situations for the z/OS workload v Tailors the JCL for step restarts of z/OS jobs and started tasks v Dynamically schedules additional processing in response to unplannable activities v Provides automatic notification when an updated data set is closed—which can be used to trigger subsequent processing v Generates alerts when abnormal situations are detected in the workload Tivoli Workload Scheduler for z/OS also provides manual control facilities, which are described in “Manual Control and Intervention” on page 30.

Automatic Workload Submission Tivoli Workload Scheduler for z/OS automatically drives work through the system, taking into account work that requires manual or program-recorded completion. (Program-recorded completion refers to situations where the status of a scheduler-controlled job is set to “complete” by a user-written program.) It also promotes the optimum use of resources, improves system availability, and automates complex and repetitive operator tasks. Tivoli Workload Scheduler for z/OS automatically controls the submission of work according to: v Dependencies between jobs v Workload priorities v Specified time for the submission of particular work v Availability of resources By saving a copy of the JCL for each separate run, or occurrence, of a particular job in its plans, Tivoli Workload Scheduler for z/OS prevents the unintentional reuse of temporary JCL changes, such as overrides. Job tailoring. Tivoli Workload Scheduler for z/OS provides automatic job tailoring functions, which enables jobs to be automatically edited. This can reduce your dependency on time-consuming and error-prone manual editing of jobs. Tivoli Workload Scheduler for z/OS job tailoring provides: v Automatic variable substitution v Dynamic inclusion and exclusion of inline job statements v Dynamic inclusion of job statements from other libraries or from an exit For jobs to be submitted on a z/OS system, these job statements will be z/OS JCL, but scheduler JCL tailoring directives can be included in jobs to be submitted on other operating systems, such as AIX®/6000.

Variables can be substituted in specific columns, and you can define verification criteria to ensure that invalid strings are not substituted. Special directives supporting the variety of date formats used by job stream programs let you dynamically define the required format and change it multiple times for the same job. Arithmetic expressions can be defined to let you calculate values such as the current date plus four work days.
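
As an illustrative sketch of these job tailoring functions, the following JCL fragment uses the scheduler's //*%OPC directives; the job, program, and variable names are invented, and the exact directive syntax should be verified against the product reference before use.

   //PAYCALC  JOB (ACCT),'PAYROLL',CLASS=A
   //*%OPC SCAN
   //*%OPC SETVAR TDUEDATE=(OYMD1+4WD)
   //STEP1   EXEC PGM=PAYUPDT,PARM='&TDUEDATE'
   //*%OPC BEGIN ACTION=INCLUDE,COMP=(&ODAY..EQ.5)
   //WEEKLY  EXEC PGM=WEEKRPT
   //*%OPC END ACTION=INCLUDE

Here SETVAR computes a date four work days after the occurrence date, and the BEGIN/END pair includes the WEEKLY step only when the supplied day-of-week variable equals 5.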

Automatic Recovery and Restart Tivoli Workload Scheduler for z/OS provides automatic restart facilities for your production work. You can specify the restart actions to be taken if work initiated by Tivoli Workload Scheduler for z/OS ends in error (see Figure 3). You can use these functions to predefine automatic error-recovery and restart actions for jobs and started tasks. The scheduler's integration with the NetView program allows it to automatically pass alerts to the NetView program in error situations. Use of z/OS's cross-system coupling facility (XCF) enables Tivoli Workload Scheduler for z/OS to maintain production workload processing when system failures occur. Recovery of jobs and started tasks. Automatic recovery actions for failed jobs are specified in user-defined control statements. Parameters in these statements determine the recovery actions to be taken when a job or started task ends in error.

Figure 3. Automatic Recovery and Restart (diagram: when a job ends in error, the scheduler analyzes the error and determines the restart action, such as restarting an earlier job, rerunning the failing job, running a recovery job, performing automatic catalog cleanup, continuing, or doing nothing)

Restart and cleanup. You can use restart and cleanup to catalog, uncatalog, or delete data sets when a job ends in error, or when you need to rerun a job. Data set cleanup takes care of JCL in the form of in-stream JCL, in-stream procedures, and cataloged procedures on both local and remote systems. This function can be initiated automatically by Tivoli Workload Scheduler for z/OS or manually by a user through the panels. Tivoli Workload Scheduler for z/OS resets the catalog to the status it had before the job ran, both for generation data groups (GDGs) and for DD-allocated data sets contained in the JCL. In addition, restart and cleanup supports the use of Removable Media Manager in your environment.

Restart at both the step and job level is also provided in the Tivoli Workload Scheduler for z/OS panels. It manages resolution of generation data group (GDG) names, and JCL containing nested INCLUDE or PROC statements and IF-THEN-ELSE logic. Tivoli Workload Scheduler for z/OS also automatically identifies problems that can prevent successful restart, providing logic to determine the "best restart step." You can browse the job log or request a step-level restart for any z/OS job or started task even when there are no catalog modifications. The job-log browse functions are also available for the workload on other operating platforms, which is especially useful for those environments that do not support an SDSF-like facility. If you use a SYSOUT archiver, for example RMDS, you can interface with it from Tivoli Workload Scheduler for z/OS and so prevent duplication of job log information. These facilities are available to you without the need to make changes to your current JCL. Tivoli Workload Scheduler for z/OS gives you an enterprise-wide data set cleanup capability on remote agent systems. Production workload restart. Tivoli Workload Scheduler for z/OS provides a production workload restart, which can automatically maintain the processing of your work if a system or connection fails. Scheduler-controlled production work for the unsuccessful system is rerouted to another system. Because Tivoli Workload Scheduler for z/OS can restart and manage the production workload, the integrity of your processing schedule is maintained, and service continues for your customers. Tivoli Workload Scheduler for z/OS exploits the VTAM® Model Application Program Definition feature and z/OS-defined symbols to ease configuration and operation in a sysplex environment, giving the user a single system view of the sysplex. Starting, stopping, and managing your engines and agents does not require you to know which z/OS image in the sysplex they are actually running on.

Figure 4. Production Workload Restart and Hot Standby (diagram: a controlling scheduler connected through XCF and shared DASD to controlled schedulers, one of them a hot standby, in a z/OS Parallel Sysplex)

Hot standby. Tivoli Workload Scheduler for z/OS provides a single point of control for your z/OS production workload. If this controlling system fails, Tivoli Workload Scheduler for z/OS can automatically transfer the controlling functions to a backup system within a Parallel Sysplex® (see Figure 4). Through XCF, Tivoli Workload Scheduler for z/OS can automatically maintain production workload processing during system or connection failures.

z/OS Automatic Restart Manager Support All the scheduler components are enabled to be restarted by the Automatic Restart Manager (ARM) of the z/OS operating system, in the case of program failure.

Workload Manager (WLM) Support With Workload Manager (WLM), you can make the best use of resources accessed by your scheduled jobs. In addition, your jobs maintain the highest possible throughput with WLM and Tivoli Workload Scheduler for z/OS. When used with WLM, the scheduler is able to achieve the best possible system response times.

Automatic Status Checking To track the work flow, Tivoli Workload Scheduler for z/OS interfaces directly with the operating system, collecting and analyzing status information about the production work that is currently active in the system. Tivoli Workload Scheduler for z/OS can record status information from both local and remote processors. When status information is reported from remote sites in different time zones, Tivoli Workload Scheduler for z/OS makes allowances for the time differences.

Status Reporting from Heterogeneous Environments The processing on other operating environments can also be tracked by Tivoli Workload Scheduler for z/OS. You can use supplied programs to communicate with the engine from any environment that can establish communications with a z/OS system.

Status Reporting from User Programs You can pass status information about production workload processing to Tivoli Workload Scheduler for z/OS from your own user programs through a standard supplied routine.

Additional Job-completion Checking If required, Tivoli Workload Scheduler for z/OS provides further status checking by scanning SYSOUT and other print data sets from your processing when the success or failure of the processing cannot be determined by completion codes. For example, Tivoli Workload Scheduler for z/OS can check the text of system messages or messages originating from your user programs. Using information contained in job completion checker (JCC) tables, Tivoli Workload Scheduler for z/OS determines what actions to take when it finds certain text strings. These actions can include: v Reporting errors v Requeuing SYSOUT v Writing incident records to an incident data set

Managing Unplanned Work Tivoli Workload Scheduler for z/OS can be automatically triggered to update the current plan with information about work that cannot be planned in advance. This allows Tivoli Workload Scheduler for z/OS to control unexpected work. Because Tivoli Workload Scheduler for z/OS checks the processing status of this work, automatic recovery facilities are also available.

Interfacing with Other Programs Tivoli Workload Scheduler for z/OS provides a program interface (PIF). Using this interface, you can automate most actions that you can perform online through the dialogs. This interface can be called from CLISTs, user programs, and via TSO commands. The application programming interface (API) lets your programs communicate with Tivoli Workload Scheduler for z/OS from any compliant platform. You can use Common Programming Interface for Communications (CPI-C), advanced program-to-program communication (APPC), or your own logical unit (LU) 6.2 verbs to converse with Tivoli Workload Scheduler for z/OS through the API. You can use this interface to query and update the current plan. The programs can be running on any platform that is connected locally, or remotely through a network, with the z/OS system where the engine runs.

Manual Control and Intervention Tivoli Workload Scheduler for z/OS lets you check the status of work and intervene manually when priorities change or when you need to run unplanned work. You can query the status of the production workload and then modify the schedule if needed.

Status Inquiries With the ISPF dialogs or with the Job Scheduling Console, you can make queries online and receive timely information on the status of the production workload. Time information that is displayed by the dialogs can be in the local time of the dialog user. Using the dialogs, you can request detailed or summary information on individual job streams, jobs, and workstations, as well as summary information concerning workload production as a whole. You can also display dependencies graphically as a network at both job stream and job level. Status inquiries: v Provide you with overall status information that you can use when considering a change in workstation capacity or when arranging an extra shift or overtime work. v Help you supervise the work flow through the installation; for instance, by displaying the status of work at each workstation.

v Help you decide whether intervention is required to speed the processing of specific job streams. You can find out which job streams are the most critical. You can also check the status of any job stream, as well as the plans and actual times for each job. v Enable you to check information before making modifications to the plan. For example, you can check the status of a job stream and its dependencies before deleting it or changing its input arrival time or deadline. See “Modifying the Current Plan” for more information. v Provide you with information on the status of processing at a particular workstation. Perhaps work that should have arrived at the workstation has not arrived. Status inquiries can help you locate the work and find out what has happened to it.

Modifying the Current Plan Tivoli Workload Scheduler for z/OS makes status updates to the plan automatically, using its tracking functions. However, it lets you change the plan manually to reflect unplanned changes to the workload or to the operations environment, which often occur during a shift. For example, you may need to change the priority of a job stream, add unplanned work, or reroute work from one workstation to another. Or you may need to correct operational errors manually. Modifying the current plan may be the best way to handle these situations. You can modify the current plan online. For example, you can: v Include unexpected jobs or last-minute changes to the plan. Tivoli Workload Scheduler for z/OS then automatically creates the dependencies for this work. v Manually modify the status of jobs. v Delete occurrences of job streams. v Graphically display job dependencies before you modify them. v Modify the data in job streams, including the JCL. v Respond to error situations by: – Rerouting jobs – Rerunning jobs or occurrences – Completing jobs or occurrences – Changing jobs or occurrences v Change the status of workstations by: – Rerouting work from one workstation to another – Modifying workstation reporting attributes – Updating the availability of resources – Changing the way resources are handled v Replan or extend the current plan In addition to using the dialogs, you can modify the current plan from your own job streams using the program interface or the application programming interface. You can also trigger Tivoli Workload Scheduler for z/OS to dynamically modify the plan using TSO commands or a batch program. This enables unexpected work to be added automatically to the plan.

Management of Critical Jobs Tivoli Workload Scheduler for z/OS exploits the capability of the Workload Manager component of z/OS to ensure that critical jobs are completed on time. If a critical job is late, Tivoli Workload Scheduler for z/OS favors it using the existing Workload Manager interface.
Security Today, DP operations increasingly require a high level of data security, particularly as the scope of DP operations expands and more people within the enterprise become involved. Tivoli Workload Scheduler for z/OS provides complete security and data integrity within the range of its functions. It provides a shared central service to different user departments even when the users are in different companies and countries. Tivoli Workload Scheduler for z/OS provides a high level of security to protect scheduler data and resources from unauthorized access. With Tivoli Workload Scheduler for z/OS, you can easily organize, isolate, and protect user data to safeguard the integrity of your end-user applications (see Figure 5 on page 32). Tivoli Workload Scheduler for z/OS can plan and control the work of many user groups, and maintain complete control of access to data and services.

Figure 5. Security (diagram: TSO users, scheduler JCL and data, and departmental data for Finance, Sales, and Manufacturing protected through RACF, with an audit trail)

Audit Trail With the audit trail, you can define how you want Tivoli Workload Scheduler for z/OS to log accesses (both reads and updates) to scheduler resources. Because it provides a history of changes to the databases, the audit trail can be extremely useful for staff that works with debugging and problem determination. A sample program is provided for reading audit-trail records. The program reads the logs for a period that you specify and produces a report detailing changes that have been made to scheduler resources.

System Authorization Facility Tivoli Workload Scheduler for z/OS uses the system authorization facility (SAF), a function of z/OS, to pass authorization verification requests to your security system, for example RACF. This means that you can protect your scheduler data objects with any security system that uses the SAF interface. Protection of Data and Resources: Each user request to access a function or to access data is validated by SAF. The following is some of the information that can be protected: v Calendars and periods v Job stream names or job stream owner, by name v Workstations, by name

v Job stream-specific data in the plan v Operator instructions v JCL To support distributed, multi-user handling, Tivoli Workload Scheduler for z/OS lets you control the level of security you want to implement, right down to the level of individual records. You can define generic or specific RACF resource names to extend the level of security checking. If you have RACF Version 2 Release 1 installed, you can use the Tivoli Workload Scheduler for z/OS reserved resource class to manage your Tivoli Workload Scheduler for z/OS security environment. This means you do not have to define your own resource class by modifying RACF and restarting your system. Data Integrity During Submission: Tivoli Workload Scheduler for z/OS can ensure the correct security environment for each job it submits, regardless of whether the job is run on a local or a remote system. Tivoli Workload Scheduler for z/OS lets you create tailored security profiles for individual jobs or groups of jobs.
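
As a sketch (to be verified against your own security planning), activating the reserved resource class and protecting one application description might look like the following RACF commands; the class name IBMOPC is the reserved class, the profile and group names are invented, and which subresources are actually checked depends on your AUTHDEF initialization statement.

   SETROPTS CLASSACT(IBMOPC)
   RDEFINE  IBMOPC AD.PAYROLL UACC(NONE)
   PERMIT   AD.PAYROLL CLASS(IBMOPC) ID(SCHEDGRP) ACCESS(UPDATE)
   SETROPTS RACLIST(IBMOPC) REFRESH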

Configurations of Tivoli Workload Scheduler for z/OS Tivoli Workload Scheduler for z/OS supports many configuration options using a variety of communication methods: v The Controlling System v Controlled z/OS Systems v Remote Panels and Program Interface Applications v Scheduling Jobs That Are in Tivoli Workload Scheduler

The Controlling System The controlling system requires both the agent and the engine. One controlling system can manage the production workload across all your operating environments. The engine is the focal point of control and information. It contains the controlling functions, the dialogs, and the scheduler’s own batch programs. Only one engine is required to control the entire installation, including local and remote systems (see Figure 6 on page 34).

Figure 6. Tivoli Workload Scheduler for z/OS Configurations (diagram: an active controller with trackers and hot standby controllers in a sysplex, a remote z/OS tracker, distributed agents, open interfaces to other platforms, and the Job Scheduling Console attached through its connector)

Controlled z/OS Systems An agent is required for every controlled z/OS system in a configuration. This includes, for example, local controlled systems within shared DASD or sysplex configurations. The agent runs as a z/OS subsystem and interfaces with the operating system (through JES2 or JES3, and SMF), using the subsystem interface and the operating system exits. The agent monitors and logs the status of work, and passes the status information to the engine via shared DASD, XCF, or ACF/VTAM®. You can exploit the z/OS cross-system coupling facility (XCF) to connect your local z/OS systems. Rather than being passed to the controlling system via shared DASD, work status information is passed directly via XCF connections. XCF lets you exploit all of the production workload restart facilities and the hot standby function. See "Automatic Recovery and Restart" on page 27.

Remote Systems The agent on a remote z/OS system passes status information about the production work in progress to the engine on the controlling system. All communication between Tivoli Workload Scheduler for z/OS subsystems on the controlling and remote systems is done via ACF/VTAM.

Tivoli Workload Scheduler for z/OS lets you link remote systems using ACF/VTAM networks. Remote systems are frequently used locally “on premises” to reduce the complexity of the data processing (DP) installation.

Remote Panels and Program Interface Applications ISPF panels and program interface (PIF) applications can run in a different z/OS system from the one where the engine is running. Dialogs and PIF applications send requests to and receive data from a Tivoli Workload Scheduler for z/OS server which is running on the same z/OS system where the target engine is running, via advanced program-to-program communications (APPC). The server will communicate with the engine to perform the requested actions. The server is a separate address space, started and stopped either automatically by the engine or by the user via the z/OS start command. There can be more than one server for an engine. If the dialogs or the PIF applications run on the same z/OS system where the target engine is running, the server may not be involved.

Scheduling Jobs That Are in Tivoli Workload Scheduler
Tivoli Workload Scheduler for z/OS also allows you to access job streams (schedules in Tivoli Workload Scheduler) and add them to the current plan in Tivoli Workload Scheduler for z/OS. In addition, you can build dependencies among Tivoli Workload Scheduler for z/OS job streams and Tivoli Workload Scheduler jobs. From Tivoli Workload Scheduler for z/OS, you can monitor and control the distributed agents. In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to run on workstations in Tivoli Workload Scheduler. Tivoli Workload Scheduler for z/OS passes the job information to the Tivoli Workload Scheduler Symphony file, which distributes the jobs in the current plan to Tivoli Workload Scheduler for processing. In turn, Tivoli Workload Scheduler reports the status of running and completed jobs back to the current plan for monitoring in Tivoli Workload Scheduler for z/OS.


Chapter 4. Tivoli Job Scheduling Console
This chapter describes feature level 1.2 of the Tivoli Job Scheduling Console that is distributed as part of the Tivoli Workload Scheduling suite.

Overview
The Tivoli Job Scheduling Console for the Tivoli Workload Scheduling suite is an interactive interface for creating, modifying, and deleting objects in the product database. It also enables you to monitor and control objects scheduled in the current plan. The Job Scheduling Console enables you to work with Tivoli Workload Scheduler for z/OS and with Tivoli Workload Scheduler. You can work with these products simultaneously from the same graphical console.

To run the console, you only have to be able to log into a scheduling engine through a connector. This means that you can manage plan and database objects from any system, including a laptop, on which the Job Scheduling Console is installed and from which you can reach, via TCP/IP, a server running the connector for Tivoli Workload Scheduler or for Tivoli Workload Scheduler for z/OS.

Connectors manage the traffic between the Job Scheduling Console and the job schedulers. Connectors are installed separately on a Tivoli management server and on managed nodes that have access to the scheduler. If you plan to use the Job Scheduling Console to schedule the workload with Tivoli Workload Scheduler for z/OS, you need to install the Tivoli Workload Scheduler for z/OS connector. If you plan to use the Job Scheduling Console to schedule the workload with Tivoli Workload Scheduler, you need to install the Tivoli Workload Scheduler connector.

The Job Scheduling Console provides two main functions:

Scheduling
   Enables you to define and list job streams, jobs, and resource availability in the scheduler database.

Monitoring and control
   Enables you to monitor and control scheduled jobs and job streams in the scheduler plan. In the Job Scheduling Console, a scheduled job stream is called a job stream instance, whereas a scheduled job is called a job instance.

Extensions, built into the Job Scheduling Console, extend its base scheduling functions to specific scheduling functions of Tivoli Workload Scheduler for z/OS and of Tivoli Workload Scheduler.

For each of these functions, you can use a list creation mechanism that enables you to list database or plan objects that you select according to filtering criteria. Filtering criteria narrow a list down to the selected objects that you want to work with. You can list objects without using filtering criteria; in this case, the list displays all the existing objects of a kind. You can use both pre-defined lists that are packaged with the Job Scheduling Console and lists that you create.


Tivoli Workload Scheduler for z/OS Tasks
This section describes the Tivoli Workload Scheduler for z/OS tasks that can be accomplished with the Job Scheduling Console. The tasks are grouped according to whether they are typically run by an administrator or by an operator. The following figure shows the main Job Scheduling Console window. A Tivoli Workload Scheduler for z/OS engine is selected, and the popup window lists the actions that are available for the engine. The same actions can be performed by clicking the corresponding icons at the top of the window. The icons are displayed contextually with the engine.

Figure 7. Job Scheduling Console main window and Tivoli Workload Scheduler for z/OS tasks.

Scheduler Tasks
From the Job Scheduling Console, you can define and manage the following objects in the scheduler database:
v Job streams
v Jobs
v Workstations
v Resources

Working with Job Streams
Job streams are a collection of jobs, scheduling information, and the resources they require to run. The jobs that comprise a job stream usually follow a sequence where the execution of a job depends on the successful completion of another job. Creating a job stream involves:
1. Defining job stream properties.
2. Creating jobs, which includes defining what resources each job requires to run and the timing of its execution.
3. Defining the necessary dependencies, or sequencing, among the jobs of the job stream and with jobs that belong to other job streams.
4. Defining one or more run cycles, or the days on which the job stream must run and when it must start.

Modifying a job stream involves adding, deleting, or modifying any of the jobs that comprise it, along with the dependencies and run cycles. You can also delete an entire job stream. Job stream definitions are stored in the job scheduler databases. To browse or update job streams you have created, you must make and run a list of job streams in the database.

Figure 8. Listing job streams in the database.

Working with Jobs
Jobs are the units of work in a job stream. You cannot create jobs outside of a job stream. You must first create a job stream and define its properties before you can start to create the jobs that comprise it. Creating a job involves:
1. Defining job properties.
2. Specifying when the job must run (time restrictions) within its job stream’s run cycle.
3. Defining the properties of the task associated with the job, if applicable.
4. Specifying the resources that the job requires to run.

Jobs are stored in the job scheduler database as parts of job streams. To browse, update, or delete a job definition, you must list the parent job stream in the database.

Working with Workstations
Workstations describe how jobs have to be run. A workstation is not necessarily hardware. It is a stage in the processing that is controlled by the scheduler. To schedule a job instance, a workstation must have been defined beforehand. Before the scheduler can start a job instance, the workstation on which the job instance is defined must be available. So, by controlling workstation availability, you control the running of job instances that are defined on the workstation. Defining a workstation involves:
1. Defining the workstation’s general properties.
2. Specifying open time intervals, periods during which the workstation’s resources and parallel servers are available to process work. Parallel servers and resources are usually necessary to run work at the workstation.

Working with Resources
Resources represent physical or logical devices that jobs use in order to run. Defining a resource involves:
1. Defining the resource’s general properties.
2. Specifying availability intervals, periods during which the resource’s state and quantity available for running jobs differ from the values specified as general properties.

Resource definitions are stored in the job scheduler database. To browse, update, or delete a resource definition, you must make and run a list of resources in the database.

Operator Tasks
From the Job Scheduling Console, you can monitor and control the following objects in the current plan:
v Job stream instances
v Job instances
v Workstations
v Resources

To monitor and control these objects, you must first display them in a list in the Job Scheduling Console.

Working with Job Stream Instances
Job streams that are scheduled in the plan are job stream instances. You can browse, modify, and delete job stream instances, provided you display them in a job stream instance list. Modifying a job stream instance includes changing some of its general properties and the start and deadline times. You can also change the status of a job stream instance to Waiting or to Complete.

Working with Job Instances
Jobs belonging to a job stream that is scheduled in the plan are job instances. You can browse, modify, and delete job instances after displaying them in a job instance list. Modifying a job instance involves:
v Changing its state, resource dependencies, and time restrictions.
v Deleting predecessor and successor jobs in the job instance’s dependency chain.
v Marking job instances for monitoring with Tivoli Business Systems Manager.


Figure 9. Listing job instances.

In addition, you can:
v Hold or release job instances.
v Remove or restore job instances in the current plan.
v Run a job instance immediately, regardless of normal scheduling rules.
v Browse the job log.
v View and modify operator instructions.
v Tailor job statements.
v Restart a job instance and perform cleanup operations.

Working with Workstations in the Plan
A workstation instance is a workstation that is allocated to the plan. By using filtered lists of workstations in the plan, you can:
v Monitor the status of a workstation in the plan and of the job instances scheduled to run on it.
v Modify the settings and availability of the workstation.
v Change the status of a workstation.
v Reroute the job instances that are scheduled to run on a workstation.
v Display by status the job instances scheduled on a workstation and, if necessary, modify the job instances.

Working with Resources in the Plan
A resource instance is a resource that is allocated to the plan. The resource is reserved for the duration of the plan for use by the jobs that depend on it. The status and quantities of the resource are specified in the general properties and availability intervals definitions in the database. You can:
v Modify a resource’s availability intervals and quantity after the resource has been allocated to the plan.
v Use lists of resource instances to view and modify job instances associated with the resources.
v Specify connected workstations.

Tivoli Workload Scheduler Tasks
The Job Scheduling Console provides conman and composer functionality for Tivoli Workload Scheduler engines. This section describes the Tivoli Workload Scheduler tasks that can be accomplished with the Job Scheduling Console. The tasks are grouped according to whether they are typically run by an administrator or by an operator. The following figure shows the main Job Scheduling Console window. A Tivoli Workload Scheduler engine is selected, and the popup window lists the actions that are available for the engine. The same actions can be performed by clicking the corresponding icons at the top of the window. The icons are displayed contextually with the engine.

Figure 10. Job Scheduling Console main window and Tivoli Workload Scheduler tasks.

Scheduler Tasks
From the Job Scheduling Console, you can define and manage the following objects in the scheduler database:
v Job streams
v Jobs
v Calendars
v Prompts
v Parameters
v Domains
v Workstations and workstation classes
v Resources
v Users

Working with Job Streams
You can use the Job Scheduling Console to work with job streams and job stream templates. Job stream templates contain only scheduling information. When you define a job stream as belonging to a template, you imply that it must share the template calendar and run cycles. You can:
v Create, update, or delete job stream templates.
v Add or remove a job stream from a job stream template.
v List job stream templates in the scheduler database.
v Mark job streams for monitoring with Tivoli Business Systems Manager.
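Under the console, these objects live in the Tivoli Workload Scheduler database. Purely as an illustration, a simple job stream created through the console corresponds broadly to a schedule written in the traditional composer text syntax shown below; the workstation, job stream, and job names are invented for this sketch and are not taken from any real installation:

   SCHEDULE MASTER#WEEKLYRPT
   ON FR
   AT 1800
   :
   MASTER#EXTRACT
   MASTER#REPORT FOLLOWS EXTRACT
   END

The same definition can then be listed, browsed, and modified from the Job Scheduling Console like any other job stream.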

Working with Jobs
When you create or modify a job, the Tivoli Workload Scheduler extension adds the following features to a basic job definition:
v Assigning the necessary special (logical) and workstation resources for the execution of the job.
v Defining the job’s automatic and feedback options.
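In the underlying database, a job definition ties the workstation and job name to the script or command to run and the user under which it runs. The following sketch uses the traditional composer text syntax with invented names and paths; treat it as an assumption-laden example rather than a template for your installation:

   $JOBS
   MASTER#DAILYRPT
    SCRIPTNAME "/usr/local/scripts/daily_report.sh"
    STREAMLOGON twsuser
    DESCRIPTION "Produce the daily sales report"
    RECOVERY RERUN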

Working with Calendars
A calendar is a list of scheduling dates defined in the scheduler database. Assigning a calendar run cycle to a job stream causes that job stream to be run on the days specified in the calendar. Since a calendar is defined to the scheduler database, it can be assigned to multiple job streams. You can create as many calendars as required to meet your scheduling needs.
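As a minimal sketch (the calendar name and dates are invented), a calendar in the traditional composer text syntax is simply a named list of dates that run cycles can then reference:

   $CALENDAR
   HOLIDAYS "Corporate holidays"
    01/01/2004 04/09/2004 12/25/2004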

Working with Prompts
Prompts can be used as dependencies for jobs and job streams. A prompt must be answered affirmatively for the dependent job or job stream to launch. For example, you can issue a prompt to make sure that a printer is online before a job that prints a report runs. There are two types of prompt:

Ad hoc prompt
   Is defined within the properties of a job or a job stream and is unique to that job or job stream.

Predefined prompt
   Is defined in the Tivoli Workload Scheduler database and can be used by any job or job stream.

You can create, modify, and delete prompts in the Tivoli Workload Scheduler database.
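For illustration only, a predefined prompt stored in the database might look like the following in the traditional composer text syntax; the prompt name and text are invented, and a job or job stream would then name the prompt as one of its dependencies:

   $PROMPT
   PRINTERUP "Is the report printer online?"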

Working with Parameters
Parameters are useful to substitute values into your jobs and job streams. Since parameters are stored in the Tivoli Workload Scheduler database, all jobs and job streams that use a particular parameter are updated automatically when the value changes. For scheduling, a parameter can be used as a substitute for all or part of:
v File dependency path names
v Text for prompts
v Logon, command, and script file names

You can create, modify, and delete parameters in the Tivoli Workload Scheduler database.
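The sketch below, with invented names and values, shows the general idea: a parameter defined in the database stands in for part of a script path in a job definition. The caret notation for substitution is the conventional composer form, but verify it against your own reference material before relying on it:

   $PARM
   REPORTSDIR "/prod/reports"

   $JOBS
   MASTER#PRINTRPT
    SCRIPTNAME "^REPORTSDIR^/print_report.sh"
    STREAMLOGON twsuser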

Working with Domains
A domain is a named group of Workload Scheduler workstations, consisting of one or more workstations and a domain manager acting as the management hub. All domains have a parent domain, except for the master domain. You can create, modify, and delete domain definitions in the Tivoli Workload Scheduler database.

Working with Workstations
You can create, update, and delete workstation definitions in the scheduler database. You define the following workstation characteristics:
v General properties
v Availability status during specific periods of time
v Available quantities during specific periods of time

You can list workstations defined in the scheduler database, selected according to filtering criteria, and browse or modify their properties. You can also delete workstations from the database.

Working with Workstation Classes
A workstation class is a group of workstations. Any number of workstations can be placed in a class. Job streams and jobs can be assigned to execute on a workstation class, making replication across many workstations easy. If a job stream is defined on a workstation class, each job added to the job stream must be defined either on a single workstation or on the exact same workstation class that the job stream was defined on. You can create, modify, and delete workstation classes.

Working with Resources
You can create, update, and delete resource definitions in the scheduler database. You define the following resource characteristics:
v General properties
v Availability status on a given workstation during specific periods of time
v Available quantities on a given workstation during specific periods of time
v Workstations connected to the resource

You can list resources defined in the scheduler database, selected according to filtering criteria, and browse or modify their properties. You can also delete resources from the database.

Working with Users
The users for whom Tivoli Workload Scheduler will launch jobs must be defined in the database. This is required for Windows NT users only. From the Job Scheduling Console you can:
v Create, modify, and delete user definitions in the database.
v Change user passwords.


Operator Tasks
From the Job Scheduling Console, you can monitor and control the following objects in the daily plan:
v Job stream instances
v Job instances
v Workstations
v Domains
v File dependencies
v Prompt dependencies
v Resource dependencies

To monitor and control these objects, you must first display them in a list. You can also select a different plan to use, other than the current plan.

Working with Job Stream Instances
You can use the console to do the following on job stream instances:
v Modify the properties.
v Mark or unmark for monitoring with Tivoli Business Systems Manager.
v Display, add, and delete predecessors.
v Display successors.
v Hold or release.
v Cancel or resubmit.
v Change the job limit or the priority.
v Release from dependencies.
v Submit additional job streams to the current plan.

Working with Job Instances
You can use the console to do the following on job instances:
v Modify the properties.
v Mark or unmark for monitoring with Tivoli Business Systems Manager.
v Display, add, and delete predecessors.
v Display successors.
v Hold or release.
v Cancel, kill, or rerun.
v Confirm as successful or abended.
v Release from dependencies.
v Submit additional jobs to the current plan.

In addition, you can browse job logs and get job outputs (STDLST).
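Because the console exposes conman functions, the same operator actions can also be pictured as conman commands issued from a shell. The following is a hedged sketch only; the workstation, job stream, and job names are invented, and the exact selection syntax can vary by release:

   # Show the job instances of one job stream in the plan
   conman "showjobs MASTER#WEEKLYRPT.@"

   # Rerun a job that ended in error
   conman "rerun MASTER#WEEKLYRPT.REPORT"

   # Confirm a job as successful after checking its output manually
   conman "confirm MASTER#WEEKLYRPT.REPORT;succ"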

Working with Workstations
You can use the console to do the following on workstation instances:
v Display the properties and the status.
v Change the job limit or the job fence.
v Start or stop.
v Link or unlink.
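A rough command-line counterpart, again with an invented workstation name and without any claim to cover every option, looks like this:

   # Display the status of the workstations in the plan
   conman "showcpus"

   # Stop and unlink a workstation, then link and start it again
   conman "stop FTA001"
   conman "unlink FTA001"
   conman "link FTA001"
   conman "start FTA001"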


Figure 11. Changing the job limit of a workstation in the plan.

Working with Domains
You can use the console to do the following on domains in the Tivoli Workload Scheduler plan:
v List the domains.
v Start or stop the workstations in a domain.
v Link or unlink the workstations in a domain.
v Switch the domain manager in a domain.
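Switching the domain manager, which the console offers as a menu action, corresponds to the conman switchmgr command. The domain and workstation names below are invented, and the syntax should be checked against your conman reference:

   # Promote the backup workstation BDM001 to domain manager of DOMAIN1
   conman "switchmgr DOMAIN1;BDM001"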


Figure 12. Listing Tivoli Workload Scheduler domains involved in the plan.

Working with File Dependencies
A file dependency exists when a job or job stream needs to verify the existence of one or more files before it can begin execution. You can display the status of file dependencies.

Working with Prompt Dependencies
A prompt dependency exists when a job or job stream needs to wait for an affirmative response to a prompt before it can begin execution. From the Job Scheduling Console you can:
v Display the status of prompt dependencies.
v Reply to a prompt dependency.

Working with Resource Dependencies
Resources represent any type of resource on your system, such as tape drives, communication lines, databases, or printers, that is needed to run a job. Resources can be physical or logical. After a resource has been defined in the Workload Scheduler database, it can be used as a dependency for jobs and job streams that run on the workstation or workstation class for which the resource is defined. Use the Job Scheduling Console to:
v Display the properties and the status of resource dependencies.
v Change the number of units of a resource dependency.


Common Tasks
The Common view is an additional selection at the bottom of the tree view of the scheduling engines. It provides the ability to list job and job stream instances in a single view, regardless of the scheduling engine, thus furthering the integration of workload scheduling on the mainframe and the distributed platforms. The following figure shows the main Job Scheduling Console window where the common plan lists are selected. With these you can run common lists of job and job stream instances from all the engines displayed in the Job Scheduling tree.

Figure 13. Job Scheduling Console main window and common tasks.

You can list job or job stream instances from all the engines to which the Job Scheduling Console is connected. As for individual engines, default lists are provided, but you can also create and save filtered lists that respond to your needs. The Common view implementation considers only the common properties of job and job stream instances. This means that you can filter your queries only on common characteristics, and the resulting lists will have only columns that display the common attributes. You can select the engines to query. You can also modify the objects by selecting the actions that are allowed by the specific scheduler engine.


Chapter 5. End-to-end Scheduling
End-to-end scheduling allows you to schedule and control jobs on mainframe, Windows®, and UNIX environments, for truly distributed scheduling. In the end-to-end configuration, Tivoli Workload Scheduler for z/OS is used as the planner for the job scheduling environment. Tivoli Workload Scheduler domain managers and fault-tolerant agents are used to schedule on the distributed platforms. The agents replace the use of tracker agents.

How End-to-end Scheduling Works
End-to-end scheduling directly connects a Tivoli Workload Scheduler domain manager to Tivoli Workload Scheduler for z/OS. The Tivoli Workload Scheduler for z/OS engine creates the production plan for the distributed network as well, and sends it to the domain manager. The domain manager sends a copy of the plan to each of its linked agents and subordinate domain managers for execution. The domain manager functions as the broker system for the entire distributed network by resolving all its dependencies. It sends its updates (in the form of events) to Tivoli Workload Scheduler for z/OS so that the plan can be updated accordingly. Tivoli Workload Scheduler for z/OS handles its own jobs and notifies the domain manager of all the status changes of the Tivoli Workload Scheduler for z/OS jobs that involve the Tivoli Workload Scheduler plan. In this configuration, the domain manager and all the distributed agents recognize Tivoli Workload Scheduler for z/OS as the master domain manager and notify it of all the changes occurring in their own plans. At the same time, the agents are not permitted to interfere with the Tivoli Workload Scheduler for z/OS jobs, since those jobs are viewed as running on the master, which is the only node in charge of them.

Distributed Agents
A distributed agent is a computer that is part of a Tivoli Workload Scheduler domain on which you can schedule jobs from Tivoli Workload Scheduler for z/OS. Examples of distributed agents are standard agents, extended agents, fault-tolerant agents, and domain managers. The following is a description of the types of distributed agents:

Domain Manager
   The management hub in a domain. All communications to and from the agents in a domain are routed through the domain manager.

Backup Domain Manager
   A fault-tolerant agent or domain manager capable of assuming the responsibilities of its domain manager for automatic workload recovery.

Fault-tolerant Agent (FTA)
   A workstation capable of resolving local dependencies and launching its jobs in the absence of a domain manager.

Standard Agent
   A workstation that launches jobs only under the direction of its domain manager.

Extended Agent
   A logical workstation definition that enables you to launch and control jobs on other systems and applications, such as Baan, PeopleSoft, Oracle Applications, SAP, and MVS JES2 and JES3.

Distributed agents replace tracker agents in Tivoli Workload Scheduler for z/OS. The distributed agents enable you to schedule on non-z/OS systems with a more reliable, fault-tolerant, and scalable agent. In the Tivoli Workload Scheduler for z/OS plan, the logical representation of a distributed agent is called a fault-tolerant workstation.
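In the end-to-end configuration, the distributed topology (domains and fault-tolerant workstations) is described to the Tivoli Workload Scheduler for z/OS engine through initialization statements. The fragment below only sketches the general shape of such DOMREC and CPUREC statements; every name, host name, and value is invented, and the exact keywords depend on your level of the product, so consult the customization documentation before using anything like it:

   DOMREC  DOMAIN(DOMAIN1)
           DOMMNGR(FTA001)
           DOMPARENT(MASTERDM)

   CPUREC  CPUNAME(FTA001)
           CPUNODE(ftahost1.example.com)
           CPUDOMAIN(DOMAIN1)
           CPUTYPE(FTA)
           CPUOS(UNIX)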

Supported End-to-end Configurations
The following table lists the agents that you can use in the end-to-end distributed network.

Platform                                                        Domain Manager   Fault-tolerant Agents and Standard Agents
IBM AIX                                                         X                X
HP-UX PA-RISC                                                   X                X
Sun Solaris                                                     X                X
Microsoft Windows NT                                            X                X
Microsoft Windows 2000 Professional, Server, Advanced Server    X                X
Compaq Tru64                                                    X                X
IBM OS/400                                                                       X
SGI Irix                                                                         X
IBM Sequent Dynix                                                                X
Linux/INTEL                                                                      X
Linux/390                                                                        X

Benefits of End-to-end Scheduling
The benefits that can be gained from using end-to-end scheduling are the following:
v Connecting Tivoli Workload Scheduler fault-tolerant agents to Tivoli Workload Scheduler for z/OS.
v Scheduling on additional operating systems.
v Synchronization of work in mainframe and distributed environments.
v The ability for Tivoli Workload Scheduler for z/OS to use a multi-tier architecture with domain managers.
v Extended planning capabilities, such as the use of long-term plans, trial plans, and extended plans, for the distributed part of the network as well.


Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law:

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement might not apply to you.


This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.


Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Trademarks

The following terms are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both: ACF/VTAM, AIX, BookManager, CICS, DB2, DB2 Universal Database, DFSMSdss, DFSMShsm, GDDM, Hiperbatch, Hiperspace, IBM, the IBM logo, IMS, MVS, NetView, OS/2, OS/390, OS/400, Parallel Sysplex, RACF, RMF, RISC System/6000, SAA, S/390, Sequent, Sysplex Timer, System Application Architecture, Tivoli Enterprise Console, Tivoli Management Environment, VTAM, z/OS, Tivoli, and the Tivoli logo.

Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries.

Other company, product, and service names might be trademarks or service marks of others.



Program Number: 5697-WSZ and 5698-WKB

Printed in Denmark by IBM Danmark A/S

GH19-4539-01
