TECH_183: Generative AI for Mainframe: Real-World LLM Benchmarking, Cost Management, and Practical Results
Project and Program: Machine Learning / AI
Tags: Proceedings, SHARE Orlando 2026, 2026
As Generative AI and Large Language Models (LLMs) move from hype to hands-on application, mainframe development teams face unique challenges and opportunities. This session explores how to design and execute realistic LLM benchmarks tailored to mainframe environments, including the selection of relevant tasks, datasets, and evaluation metrics. We'll discuss practical strategies for managing the costs of LLM experimentation and deployment, from infrastructure choices to prompt engineering and model selection. Real-world case studies will showcase measurable outcomes, highlighting productivity gains, quality improvements, and lessons learned from early adopters. Attendees will leave with a blueprint for responsible, cost-effective LLM adoption in mainframe workflows, and a clear understanding of how to interpret and act on benchmark results.