Wednesday, July 17, 2024

Machine Learning for IBM z/OS v3.2 provides AI for IBM Z

Machine Learning for IBM z/OS v3.2 brings speed, scale, and reliable AI to IBM Z
Businesses have doubled down on AI adoption, which has seen phenomenal growth in recent years. Approximately 42% of enterprise-scale organizations (those with more than 1,000 employees) that participated in the IBM Global AI Adoption Index said they had actively deployed AI in their operations.


Among companies already exploring or using AI, 59% of those surveyed report that they have accelerated their rollout of or investment in the technology. Even so, enterprises still face a number of key obstacles, including scalability issues, establishing the trustworthiness of AI, and navigating the complexity of AI implementation.

A reliable and scalable environment is essential for accelerating client adoption of AI. It must be able to turn aspirational AI use cases into reality and support the transparent, trustworthy generation of real-time AI insights.

What is Machine Learning for IBM z/OS?

Machine Learning for IBM z/OS is an AI platform built specifically for IBM z/OS environments. It combines AI infusion with the gravity of data and transactions to deliver fast, scalable, and transparent insights. It helps clients manage the entire lifecycle of their AI models, enabling rapid deployment on IBM Z close to their mission-critical applications with little to no application change and no data movement. Features include explainability, drift detection, train-anywhere capability, and developer-friendly APIs.
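Drift detection in a platform like this typically means comparing the distribution of live scoring data against the training-time baseline. As an illustrative sketch only (this is a generic technique, not the algorithm MLz itself uses), the population stability index (PSI) over binned feature values captures the idea:

```python
# Illustrative drift check using the population stability index (PSI).
# This is a generic sketch, not MLz's actual drift-detection algorithm.
import math

def psi(baseline_counts, live_counts):
    """PSI between two binned distributions; > 0.2 is often read as drift."""
    total_b = sum(baseline_counts)
    total_l = sum(live_counts)
    score = 0.0
    for b, l in zip(baseline_counts, live_counts):
        # Small floor avoids log(0) for empty bins.
        pb = max(b / total_b, 1e-6)
        pl = max(l / total_l, 1e-6)
        score += (pl - pb) * math.log(pl / pb)
    return score

if __name__ == "__main__":
    baseline = [50, 30, 20]   # training-time histogram of one feature
    stable = [48, 31, 21]     # similar live traffic -> low PSI
    shifted = [10, 30, 60]    # shifted live traffic -> high PSI
    print(round(psi(baseline, stable), 4))
    print(round(psi(baseline, shifted), 4))
```

When the score crosses a chosen threshold, the model is flagged for retraining or review.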


Machine learning can support many transactional use cases on IBM z/OS. Top use cases include:

Real-time fraud detection in credit cards and payments: Large financial institutions are incurring steadily growing losses due to fraud. With off-platform alternatives, they could screen only a limited subset of their transactions. For this use case, the IBM z16 system can execute 228,000 z/OS CICS credit card transactions per second with 6 ms response time, running a Deep Learning Model for in-transaction fraud detection.

These performance results come from IBM internal testing running a CICS credit card transaction workload with inference operations on IBM z16. The tests used a z/OS V2R4 LPAR with 6 CPs and 256 GB of memory. Inferencing was done with Machine Learning for IBM z/OS running on WebSphere Application Server Liberty, using a synthetic credit card fraud detection model and the IBM Integrated Accelerator for AI.

Server-side batching was enabled in Machine Learning for IBM z/OS with a batch size of 8 inference operations. The benchmark was run with 48 threads performing inference operations. Results represent a fully configured IBM z16 with 200 CPs and 40 TB of storage. Results can vary.
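Server-side batching amortizes per-invocation overhead by grouping concurrent scoring requests before running the model once per group. A minimal single-threaded sketch of the idea (the batch size of 8 mirrors the benchmark configuration; the stand-in model and all names here are illustrative, not MLz APIs):

```python
# Minimal sketch of server-side micro-batching: queue incoming requests
# and score them in groups of up to 8, as in the benchmark configuration.
# `score_batch` is a stand-in for the real model; names are illustrative.
from collections import deque

BATCH_SIZE = 8

def score_batch(features_batch):
    # Stand-in model: flag transactions over a threshold amount.
    return [1 if f["amount"] > 1000 else 0 for f in features_batch]

def serve(requests):
    """Drain a request queue in batches and return one result per request."""
    queue = deque(requests)
    results = []
    while queue:
        batch = [queue.popleft() for _ in range(min(BATCH_SIZE, len(queue)))]
        results.extend(score_batch(batch))
    return results

if __name__ == "__main__":
    txns = [{"amount": a} for a in (12, 2500, 990, 4000)]
    print(serve(txns))  # one fraud flag per transaction
```

In a real serving stack the queue is fed by concurrent request threads and flushed on a size or time trigger; the trade-off is slightly higher per-request latency in exchange for much higher throughput.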

Clearing and settlement: A card processor considered using AI to help evaluate which trades and transactions carry high-risk exposure before settlement, in order to prevent liability, chargebacks, and costly investigations. In support of this use case, IBM has demonstrated that the IBM z16 with Machine Learning for IBM z/OS is designed to score business transactions at scale, delivering the capacity to process up to 300 billion deep inferencing requests per day with 1 ms of latency.

This performance result is extrapolated from IBM internal tests conducting local inference operations in an IBM z16 LPAR with 48 IFLs and 128 GB memory on Ubuntu 20.04 (SMT mode), using a synthetic credit card fraud detection model and the Integrated Accelerator for AI. The benchmark was run with 8 parallel threads, each pinned to the first core of a distinct chip.

The lscpu program was used to identify the core-chip topology. Batches of 128 inference operations were used. Results were also reproduced using a z/OS V2R4 LPAR with 24 CPs and 256 GB memory on IBM z16. The same credit card fraud detection model was employed. That benchmark was run with a single thread executing inference operations. Results can vary.
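The 300-billion-per-day capacity figure is easier to reason about on a per-second basis: spread over 86,400 seconds, it works out to roughly 3.5 million inferences per second. A quick check of that arithmetic:

```python
# Back-of-the-envelope check of the stated capacity figure.
per_day = 300e9                  # 300 billion deep inferencing requests
seconds_per_day = 24 * 60 * 60   # 86,400
per_second = per_day / seconds_per_day
print(f"{per_second:,.0f} inferences per second")  # roughly 3.47 million
```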

Anti-money laundering: A bank was studying ways to incorporate AML screening into its instant-payments operational flow. Its existing end-of-day AML screening was no longer sufficient under tougher regulations. IBM has shown that colocating applications and inferencing requests on the IBM z16 with z/OS results in up to 20x lower response time and 19x higher throughput than sending the same requests to a comparable x86 server in the same data center with 60 ms average network latency.
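The magnitude of that gap follows directly from the network hop: when scoring itself takes only a few milliseconds, tens of milliseconds of network delay dominate the remote path. An illustrative model (the 3 ms local scoring time is an assumption for the sketch, not a figure from the benchmark):

```python
# Illustrative latency model for colocated vs. remote inference.
# The 3 ms local scoring time is an assumed figure for this sketch;
# the 60 ms network latency matches the scenario described above.
local_ms = 3.0       # assumed on-platform scoring latency
network_ms = 60.0    # average network latency to the x86 server
remote_ms = local_ms + network_ms
print(f"remote/local response-time ratio: {remote_ms / local_ms:.0f}x")
```

Under these assumed numbers the remote path is about 21x slower, the same order as the measured 20x result; the exact ratio depends on the actual local scoring time.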


These performance results come from IBM internal tests using a CICS OLTP credit card workload with in-transaction fraud detection. A synthetic model was used to detect credit card fraud. Inference was done with MLz on zCX on IBM z16. The comparable x86 server used TensorFlow Serving. A Linux on IBM Z LPAR on the same IBM z16 bridged the network connection between the measured z/OS LPAR and the x86 server.

Linux “tc-netem” was used to add 5 ms of average network latency to simulate a network environment. Results can vary with network latency.

IBM z16 configuration: Measurements were done using a z/OS V2R4 LPAR with MLz (OSCE) and zCX, with APARs OA61559 and OA62310 applied, 8 CPs, 16 zIIPs, and 8 GB of RAM.

x86 configuration: TensorFlow Serving 2.4 ran on Ubuntu 20.04.3 LTS on 8 Skylake Intel Xeon Gold CPUs @ 2.30 GHz with Hyper-Threading enabled, 1.5 TB memory, and RAID5 local SSD storage.


Machine Learning for IBM z/OS with IBM Z can also be used as a security-focused on-premises AI platform for additional use cases where clients want to strengthen data integrity, privacy, and application availability. The IBM z16 systems, with GDPS, IBM DS8000 series storage with HyperSwap, and a Red Hat OpenShift Container Platform environment, are designed to deliver 99.99999% availability.

Required components include IBM z16 and IBM z/VM V7.2 or above systems collected in a Single System Image, each running RHOCP 4.10 or above; IBM Operations Manager; GDPS 4.5 for managing virtual machine recovery and data recovery across metro-distance systems and storage, including GDPS Global and Metro Multisite Workload; and IBM DS8000 series storage with IBM HyperSwap.

The necessary resiliency technology must be configured, including z/VM Single System Image clustering, GDPS xDR Proxy for z/VM, and Red Hat OpenShift Data Foundation (ODF) 4.10 for management of local storage devices. Application-induced outages are not included in the preceding measurements. Results can vary. Other configurations (hardware or software) may provide different availability characteristics.
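A 99.99999% ("seven nines") availability target allows only a few seconds of unplanned downtime per year, which the following arithmetic makes concrete:

```python
# Translate a 99.99999% availability target into downtime per year.
availability = 0.9999999
seconds_per_year = 365.25 * 24 * 3600   # about 31.56 million seconds
downtime_s = (1 - availability) * seconds_per_year
print(f"{downtime_s:.2f} seconds of downtime per year")
```

That is roughly 3.2 seconds per year, which is why the configuration above layers clustering, GDPS-managed recovery, and HyperSwap rather than relying on any single component.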


  • Machine Learning for IBM z/OS is now generally available for purchase through IBM and authorized Business Partners.
  • IBM also offers an AI on IBM Z and LinuxONE Discovery Workshop at no cost.
  • This workshop is an excellent starting point: it helps you assess potential use cases and build a project plan.
  • Participating in the workshop can help you accelerate your adoption of AI with Machine Learning for IBM z/OS.
Since June 2023, Drakshi has been writing articles on Artificial Intelligence for Govindhtech. She holds a postgraduate degree in business administration and is an Artificial Intelligence enthusiast.

