
Monday, October 9, 2017

Launching a Secure Environment: Applying IBM’s LinuxONE Encryption

By Bill Moran and Rich Ptak

Courtesy of IBM


The other day we attended an excellent presentation by Dr. Rheinhardt Buengden of IBM Germany on applying the encryption in LinuxONE[1]. He provided extensive technical detail on installing and implementing a secure IBM LinuxONE Emperor II system (or one of the other IBM Linux mainframe systems). It was a highly informative session.

First, nothing that we learned contradicts our earlier blog[2] on IBM's announcement. We continue to believe that LinuxONE, combined with its associated hardware, represents the best commercial alternative for security in the Linux market. But, we did get some greater insight into implementing a high-security system.

We now have a much better appreciation of the level of effort necessary to achieve a secure operating environment. As one might expect, much of the work revolves around having to choose among the many options in Linux. But, it also requires effort to fit the new system into the way business is currently organized and done. Accomplishing this requires significant skills in Linux and security methods as well as detailed knowledge of the company's current processes.

We provide some specifics here; there are certain to be others. First, consider the interactions between security key management and the existing disaster recovery mechanism. Some types of keys are system specific and will not work on another system. Careful planning is necessary to identify and handle inconsistencies and conflicts[3]. The LinuxONE system can automatically recover from an abnormal situation, but only if the preparation work has been done. Similarly, backup and archive policies will need a review for similar inconsistencies. The whole issue of key management will need careful study, including decisions about which of the various types of keys to implement; each type has its own properties, advantages, etc.
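To make the key-handling discussion concrete, here is a minimal Python sketch of key wrapping, the general technique behind keys that are never stored in the clear. It uses the open source cryptography package and invented names; it illustrates the concept only and is not IBM's CPACF or key-management tooling.

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    # Stand-in for a master key; on LinuxONE the master key lives in the crypto
    # hardware and is never visible to the operating system or to this code.
    master_key = os.urandom(32)              # 256-bit wrapping key
    data_key = os.urandom(32)                # 256-bit key that encrypts application data

    # Only the wrapped (encrypted) form of the data key is written to disk or backups.
    wrapped_key = aes_key_wrap(master_key, data_key)

    # The wrapped key is useless on a system that lacks the same master key,
    # which is why backup and disaster recovery plans must account for keys.
    recovered = aes_key_unwrap(master_key, wrapped_key)
    assert recovered == data_key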

There are choices to be made over how to handle the encryption applied to files, file systems and disks. Understanding the relative advantages and choosing the best approach requires knowledge of the Linux facilities and their interactions with the security facilities. Failure here could result in an intruder being able to access the most sensitive information in the clear, fatally compromising all system security.

The last topic concerns the Linux kernel. The Linux kernel includes security APIs that, typically, invoke software implementations of cryptographic functions. LinuxONE hardware will speed up these functions. For this to work, the Linux kernel must be updated with code that supports the LinuxONE hardware. IBM has submitted a fix for inclusion in a future Linux kernel release.
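The sketch below illustrates why applications need no changes to benefit: they call a generic crypto interface (here Python's cryptography package, which sits on OpenSSL), and whether the AES operation runs in software or is offloaded to hardware such as CPACF is decided below the API, by the library and kernel. The code is our illustration with made-up data, not IBM's.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The application sees only this generic API; hardware acceleration, when the
    # kernel and library support it, is applied transparently underneath.
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)

    ciphertext = aesgcm.encrypt(nonce, b"customer ledger record", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"customer ledger record"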

This points to a bigger, more significant issue. LinuxONE relies on some open source modules, such as OpenSSL, and all such dependencies need to be monitored and updated or modified as necessary if security is to be maintained. We mention this point because the Equifax security breach has been tied to a lack of maintenance of an open source module. The lesson is that maintenance for all modules in the system must be carefully monitored and applied. Open source code updates cannot and should not be ignored.

In sum, we think that anyone planning an installation of a LinuxONE system should understand the magnitude of the task they are undertaking and plan accordingly.

For a security project of this scope, seriously consider establishing a security subcommittee of the Board of Directors. This group needs to learn enough to ask the hard questions and supervise security audits of the organization’s activities.

A review of the presentation would benefit any group interested in security, and it would be most helpful for groups considering purchase of the new LinuxONE system. However, nothing will substitute for a knowledgeable and active staff handling the installation and operation of a LinuxONE system. Senior management support is critical. We hope our notes here make that clear.



[1] Here is the URL for the presentation: http://www.vm.ibm.com/education/lvc/LVC0927.mp4
[3] Details on this topic are beyond our current scope. See Dr. Buengden's discussion on the topic.

Monday, October 2, 2017

IBM LinuxONE Emperor II™, IBM's Newest Mainframe Linux Solution

By Bill Moran and Rich Ptak

IBM LinuxONE Emperor II

Introduction

On September 12th, IBM announced the IBM LinuxONE Emperor II™, a new, dedicated Linux mainframe with significant upgrades from its z13-based predecessor, IBM LinuxONE Emperor. IBM positions Emperor II as “the world’s premier Linux system for highly secured data serving, engineered for performance and scale.” IBM chose the LinuxONE Emperor II “to anchor IBM’s Blockchain Platform cloud service.” We discuss features and provide some thoughts on evaluating the system for your own environment.


Performance Features

Emperor II is a z14-based Linux-only mainframe system designed as a highly reliable and scalable platform for secure data-driven workloads. Key performance improvements include:
·         A 2-3x performance boost over the z13-based Emperor.
·         A 2.6x performance advantage (described by IBM) over comparable x86 systems for Java work, a result of IBM moving some CPU-intensive Java operations into hardware.
·         Powerful I/O processing capability, with up to 640 cores devoted to I/O operations, a benefit for I/O-limited applications.
·         Emperor II can operate at near 100% utilization with very low performance degradation. Typical competing systems can achieve 50% or 60% utilization before experiencing significant performance degradation.
IBM’s LinuxONE Emperor II is an impressive, powerful, high performance system. Do keep in mind that all performance numbers are application/environment dependent. Therefore, if performance is critical, do your own testing. Vendor numbers can only provide broad guidelines to potential performance improvement.


Security Features

IBM LinuxONE Emperor I enjoyed significant market acceptance for a variety of workloads. Recognizing the escalating interests in security and high-volume data computing, IBM initiated a large engineering effort to enhance and extend already legendary mainframe system security. The z14-based Emperor II takes security to a completely new level.
IBM states that the system represents the most advanced level of security commercially available today. We believe there exists some justification for the claim. Here’s why.
·         A major block to large-scale encryption has been the extraordinary time and effort needed for encryption/decryption. IBM dramatically[1] decreased both by using an on-chip cryptographic processor (CPACF). This allows users to implement pervasive, end-to-end encryption of all data throughout (and beyond) the system. If a hacker breaks in anywhere in the chain, they only get access to encrypted data, useless without the ability to decrypt. 
·         Hardware-protected decryption keys. A hardware-assist feature assures keys are never available in memory in the clear. There is no way for a user, hacker or even an administrator to unlock or make the keys visible and usable.
·         All data can be automatically encrypted and remain so, at-rest, in-motion and during processing – end-to-end – from system to user.
·         Encryption security is implemented with no application changes. Security solutions that require application changes or actions by developers, users or programmers have been a stumbling block for encryption (and other) security approaches.
·         Finally, IBM has a new architecture called Secure Service Containers. These containers protect the firmware and the boot process, as well as the data and the software, from any unauthorized change. A traditional weakness has been the potential for system admins to exploit their elevated system credentials, or for those credentials to be exposed to internal or external threats and then used to gain access to locally running application code and data. With Secure Service Containers, the only access is via the web or an API granted to those specifically with access to this environment. This closes a hole long used by hackers to gain access to critical and private data.


Other key features

Emperor II delivers enhanced vertical scalability (scale up) possibilities, i.e. it allows a collection of tightly coupled multiprocessors to communicate at very high speed using shared memory. This architecture provides a distinct advantage for applications doing sequential updates to a relational database over scale-out systems, such as most x86 systems.
A typical example would be a banking application handling customer accounts. To maintain a correct account balance, all debits and deposits must be processed sequentially, that is, in the order they were performed, e.g. earliest date and time first. An account can be "locked" to ensure accuracy; having shared memory minimizes the latency and associated delay that results from such lock management. Attempting this via a scale-out collection of independent systems can result in a very complicated software environment and may also cause performance problems, whereas IBM's Emperor II has neither problem.
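A short Python sketch of the locking pattern described above: updates to one account are serialized behind a lock so no update is lost. On a shared-memory, scale-up system acquiring such a lock is a fast local operation; on a scale-out cluster the equivalent coordination crosses the network, which is where the added complexity and latency come from. The account class and amounts are invented for illustration.

    import threading

    class Account:
        def __init__(self, balance):
            self.balance = balance
            self._lock = threading.Lock()   # cheap to acquire in shared memory

        def post(self, amount):
            # Debits and deposits are applied one at a time so none are lost.
            with self._lock:
                self.balance += amount

    acct = Account(balance=100)
    threads = [threading.Thread(target=acct.post, args=(amt,)) for amt in (-30, 50, -20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(acct.balance)                     # 100 - 30 + 50 - 20 = 100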


IBM Strategy

Enterprise concerns about data security have changed, increasing dramatically in priority. While security was previously on everyone's checklist, price and performance dominated when the final purchase decision was made. Now, security is a deciding factor, and IBM is positioning the Emperor II to win.

This signals a broader change in IBM's messaging strategy. No longer is the focus on "speeds and feeds" with its reliance on numbers, processing speed, price/performance, TCO, etc. to motivate a change of platform. IBM intends to drive the decision using a business case focused on platform design (architecture) targeting the solution of major business and operational problems, as IBM LinuxONE Emperor II does.

Of course, much depends upon the platforms being compared. In many cases, inherent mainframe security will be decisive. IBM's Emperor II with LinuxONE security and its vertical scalability far exceeds anything a standard x86 platform[2] has.

While we applaud this change in strategy, it can complicate the selling task. Since IBM's target is x86 systems, sales reps may find themselves competing with Windows systems as opposed to Linux x86 systems. A security discussion comparing LinuxONE to other systems will require a more knowledgeable sales force. Features and functions such as security, Blockchain technology, etc. will have to be explicitly linked to specific business requirements, problem resolution, etc.

One final word on security. The heavy emphasis on security also represents a risk, as bad guys are likely to focus on exploiting weaknesses in applications or lax security procedures as the easiest points of vulnerability. Consumers, businesses and journalists are notoriously quick to indiscriminately point the blame at technology for failure. A successful penetration via, for example, an app accessing an Oracle database, when the platform functioned perfectly, can quickly be blamed on the platform while the app is overlooked. IBM effectively and economically addresses a real problem area. But, there exists much more to be done by the entire community.


Summary

IBM has done an excellent job in implementing security in this system. Anyone looking to achieve the highest level of security in a Linux environment should carefully examine the Emperor II system.  If they have not done so already, they also need to establish a security department to create and monitor organization-wide security policies.

It can’t be said that any system is truly impenetrable. This is true for reasons relating to the very real threat of internal compromise (e.g. carelessness, poor compliance practices, etc.), technological innovation as well as the subversive efforts of very, very sophisticated and clever people attempting to crack the system. We can say that we think that IBM has done an admirable job in creatively addressing a significant number and breadth of security vulnerabilities and problems. They have made it easier and economically affordable (in cost AND resource utilization) for enterprises of all sizes to use encryption techniques to secure systems and data.

We anticipate IBM's LinuxONE Emperor II will appeal to high-end enterprises. They are familiar with mainframes and have the staff to manage them. IBM will have to work harder to win over those with less mainframe familiarity and without experienced staff. However, recent surveys indicate that efforts to modernize mainframe management and development tools, along with the availability of Java, Linux, etc., are attracting new users to mainframes.

Finally, the security that the system offers will be a powerful incentive for certain customers, and the total package of the architecture and its features creates a system that can deliver solutions many customers cannot find anywhere else. Congratulations to IBM; we'll watch and report on how this all develops.





[1] IBM did not provide performance or overhead numbers.
[2] By “standard” we mean that high end Oracle and HPE systems may have a scale up design that eliminates the problem that many x86 systems will encounter.

Monday, September 25, 2017

IBM Research on the road to commercial Quantum Computing

By Rich Ptak




Dario Gil, Vice President AI, IBM Research, and Bob Sutor, Vice President AI, Blockchain, and Quantum Solutions, IBM Research, recently provided a briefing on IBM's perspective on the state of Quantum Computing. They described three phases in the evolution of Quantum Computing, IBM's efforts and contributions, and a very recent and significant IBM Research breakthrough on the road to commercializing quantum computing.

The breakthrough is in practical Quantum Computing technology. It marks a significant advance towards commercialization of Quantum Computing. We'll talk about why in a minute. First, a few words about quantum computing. The building blocks of this technology are quantum bits, or qubits, which are the quantum informational equivalent of classical bits, the basis of contemporary computing. Bits have only two states, 0 or 1, i.e. binary; from there all of computing is built.

Individual qubits can exist in much more complex states than simple 0's and 1's, storing information in phases and amplitudes. Additionally, the states of multiple qubits can be entangled, meaning that their states are no longer independent of each other. The fact that quantum information can be represented and manipulated in these ways allows us to approach algorithms (instructions that are used to solve problems) fundamentally differently, opening up opportunities for exponentially faster computation. A major challenge to be overcome is how to design algorithms that can make use of these properties to solve problems that are traditionally difficult for conventional machines, like efficiently simulating materials, in this case the molecules at the heart of chemistry and materials science.
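The two ideas above, superposition and entanglement, can be shown with a few lines of linear algebra. The Python sketch below (plain numpy, not IBM tooling) builds a single-qubit superposition and a two-qubit Bell state whose measurement outcomes are perfectly correlated.

    import numpy as np

    # Basis states |0> and |1> as amplitude vectors.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # A superposition: equal amplitudes of |0> and |1>.
    plus = (ket0 + ket1) / np.sqrt(2)

    # An entangled Bell state, (|00> + |11>) / sqrt(2): it cannot be factored into
    # independent single-qubit states, so the two qubits are no longer independent.
    bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

    # Measurement probabilities are squared amplitudes: 50% |00>, 0% |01>, 0% |10>, 50% |11>.
    print(np.round(np.abs(bell) ** 2, 3))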

A cover story article in the September issue of Nature magazine details how IBM researchers demonstrated a highly efficient algorithm that simulates beryllium hydride (BeH2), and then implemented that algorithm on a real quantum computer. This demonstration was the largest molecular simulation on a quantum computer to date. You can link to the article here. Unfortunately, it is behind a paywall, but there are plenty of other highly interesting articles on Quantum technology and other topics available there. IBM’s announcement with a short explanation can be found here. Read the article for more details about the breakthrough.

What matters today to enterprises, business and more

The most significant parts of the announcement lie in the implications for commercial enterprises. These are exposed in the details of IBM's vision and focus for the commercialization of Quantum Computing technology. They provide insightful information and structure for making decisions about when to begin investigating Quantum Computing and its potential to affect your enterprise or business.
Image Courtesy of IBM, Inc. 

IBM considers the initial commercialization of Quantum Computing to be within sight. It may be as much as a decade away, but can reasonably be considered to be close enough for some early enterprise movers with interest, resources, and vision to begin exploring the technology and its potential.

Let's position where Quantum Computing is today. The speakers described three phases of Quantum Computing. These are:
·         Phase 1 – development of Quantum Science – interest began in the 1920s, but it wasn't until the 1970s that computer scientists' attention was captured. This led to a decades-long effort to discover and define the physics of quantum technology and then develop the theories and concepts to build out the science leading to Quantum Computing technology. Quantum Science underlies the entire field, and will continue as long as there is research to be done to continue to advance the technology.
·         Phase 2 – emergence of Quantum Technology – began May 2016 when IBM provided free access to the first publicly accessible Quantum Computing prototype, the IBM Q experience, on the IBM Cloud. The opportunity to experiment on a real device led to the creation of new problem-solving tools, algorithms, and even games as real Quantum Computers became accessible to the first wave of users beyond theoretical physicists and computer theoreticians. (A minimal circuit sketch appears after this list.) These new users are practitioners: developers, engineers, thinkers and researchers including scientists, chemists, mathematicians, etc. Their efforts focus on understanding and articulating problems in quantum terms. The phase will end when the now-wider quantum information community discovers the first applications where the use of quantum computing offers an advantage for solving certain classes of problems. This leads to the next phase…
·         Phase 3 – the age of Quantum Advantage – the age of full commercialization of Quantum Computing. It will be marked with the delivery of apps able to fully exploit Quantum Technology to solve commercial problems. Quantum Computing begins to compete, in some areas, with traditional computing methods by offering multiple orders of magnitude increases in processing speeds and computational complexity for certain classes of problems.
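As a taste of what that hands-on access looks like, here is a minimal sketch using Qiskit, IBM's open source SDK for the IBM Q experience (referenced from the Phase 2 item above). It only builds and prints a two-qubit entangling circuit; running it on IBM's cloud-hosted hardware requires an IBM Q account and the job-submission steps current at the time.

    from qiskit import QuantumCircuit

    # Two qubits and two classical bits to hold the measurement results.
    qc = QuantumCircuit(2, 2)
    qc.h(0)                      # put qubit 0 into an equal superposition
    qc.cx(0, 1)                  # entangle qubit 1 with qubit 0 (Bell state)
    qc.measure([0, 1], [0, 1])

    print(qc)                    # text drawing of the circuit; submit to a backend for counts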

Things to keep in mind and conclusions:

Quantum Computing systems that can handle commercial-scale problems don’t exist yet. A considerable amount of research and development work needs to be done before you can begin to contemplate configuring a system of software and infrastructure. But the first serious prototype systems that lay the foundations for the more mature machines of the future do exist. Is it time to begin to develop some understanding of Quantum Computing, how it functions and how it is currently being used?
Quantum Computing will complement, not replace, traditional computing. By its nature, it is best suited to solving certain classes of problems that are traditionally difficult to solve with conventional machines. These are problems whose solution requires evaluating many alternatives to find the best one, and each alternative may be computationally intensive to evaluate. Today, many problems are addressed (and will remain so) with traditional computing simulation, modeling and statistical analysis, albeit while making simplifying assumptions. For many applications, solutions obtained with traditional computing techniques will be adequate. Also, despite some recent claims, Quantum Computing does not invalidate or decrease the need for recently announced advances in computing security. Such protections will remain critical to secure computing long into the future.
For other applications, computing alternatives are needed, especially in cases that require simulating quantum behaviors. These include modeling chemical compounds, which requires the ability to predict molecular-level interactions. It is believed that wherever the analysis involves evaluating an incredibly large number of combinations of items, Quantum Computing will have a distinct advantage. Some other examples of nearer-term applications of Quantum Computing include optimization and machine learning.
So, what’s the conclusion? First, as we said, commercialized Quantum Computing is still in the future. It is not ready to address short- or medium-term issues. But, that day is coming. At this stage, most can ignore this technology. But, there also are some that should allocate a portion of their resources (time, budget) to get educated about Quantum Computing. Quantum Computing will realize its biggest advantages when users can define problems in its terms. That requires an understanding of the technology.
Clearly, the level of recommended activity varies with the potential impact, and you need to get a realistic idea of that potential. One approach would be to take advantage of IBM's offer of free access to its Quantum Computing prototype[1]. Another approach would be to fund a sandbox project, or an off-hours task, to learn more about and explore quantum technology, and to begin thinking about problems in Quantum terms. IBM is making a considerable amount of resources available to do so, much of it free, some not.
In summary, our advice is to concentrate on:
·         Understanding the basics of the Quantum Computing approach to determine its potential to impact you and your business. We expect most will find its potential optimization benefits too attractive to resist.
·         Learning about and understanding how Quantum Computing will change how problems are viewed, articulated and programmed for solutions.
·         Considering encouragement of “sandbox” or “off-hours” efforts to learn more about Quantum Computing; formal or informal depending on organizational resources and culture.
·         If the potential impact is significant (and we think it is for many), assign a senior executive the responsibility to keep current on the status of Quantum Computing. 
Finally, there exists no single standard for comparing Quantum Computing status today. The metric of the number of qubits available in an array (that makes up a system) is insufficient. For a time, conventional "wisdom" posited it as a 'horse race', with more qubits being better.
However, the qubit count alone doesn't tell the story if there isn't time to execute an algorithm (application) before the qubit array decoheres and loses its data. A way needs to be found to control or correct such error rates. There are three issues: 1) the lifetime of the qubit array, 2) the time for an algorithm to execute, and 3) error correction/avoidance.
Researchers are working on these but no single metric yet exists to measure and relate progress. More about these efforts and other issues appear in IEEE Spectrum and Nature magazine, mentioned earlier. 
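A back-of-the-envelope Python sketch of why a single qubit-count metric is elusive: the gate sequence must finish, with acceptably few errors, before the qubit register decoheres. All of the numbers below are invented for illustration; they are not measurements of any real machine.

    # Hypothetical figures, for illustration only.
    coherence_time_us = 90.0     # how long the qubit register holds its state (microseconds)
    gate_time_us = 0.2           # average time per quantum gate (microseconds)
    error_per_gate = 0.001       # chance a single gate introduces an error
    circuit_depth = 300          # sequential gates the algorithm needs

    run_time_us = circuit_depth * gate_time_us
    error_free = (1 - error_per_gate) ** circuit_depth

    print(f"runtime {run_time_us:.0f} us vs. coherence {coherence_time_us:.0f} us")
    print(f"probability the run finishes error-free: {error_free:.1%}")
    # If the runtime exceeds the coherence time, or the error-free probability collapses,
    # adding more qubits does not help, hence the search for better metrics and error correction.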




Publication Date: September 25, 2017
This document is subject to copyright.  No part of this publication may be reproduced by any method whatsoever without the prior written consent of Ptak Associates LLC. 

To obtain reprint rights contact associates@ptakassociates.com

All trademarks are the property of their respective owners.

While every care has been taken during the preparation of this document to ensure accurate information, the publishers cannot accept responsibility for any errors or omissions.  Hyperlinks included in this paper were available at publication time. 

About Ptak Associates LLC
We cover a breadth of areas to bring you a complete picture of technology trends across the industry. Whether it's Cloud, Mobile, Analytics, Big Data, DevOps, IoT, Cognitive Computing or another emerging trend, we cover these trends with a uniquely deep and broad perspective.

Our clients include industry leaders and dynamic newcomers. We help IT organizations understand and prioritize their needs within the context of present and near-future IT trends, enabling them to apply IT technology to enterprise challenges. We help technology vendors refine strategies, and provide them with both market insight and deliverables that communicate the enterprise values of their services. We support clients with our understanding of how their competitors play in their market space, and deliver actionable recommendations.




Friday, August 11, 2017

IBM + Partners breathe new life into Moore's Law with 5nm chip technology

By Rich Ptak and Bill Moran


When IBM exited the chip foundry business several years ago, most industry watchers were sad to see the company go. A key player for decades in semiconductor research, IBM would definitely be missed. We thought that the industry had suffered a real loss. We and others thought statements of IBM's commitment to making further investments in semiconductor research were to be written off as essentially meaningless face-saving gestures.

As it turns out, we couldn't have been more wrong. In fact, IBM Research continued the semiconductor research it had been doing for nearly 50[1] years. The IBM-organized consortium of IBM, GlobalFoundries, and Samsung, based at New York State's SUNY campus in Albany, is delivering a significant breakthrough in semiconductor technology research. Exiting the chip foundry business was not a sufficient reason for IBM Research to cease its efforts.

Here’s some background on what IBM and its partners have accomplished. Moore’s law[2] says that the number of transistors on a chip will double approximately every two years. The results of that law drove the semiconductor industry for decades.
Recently, much published commentary (ours and others') discussed how the law was reaching the end of its useful life, a major reason being the physical limits of chip geometry. Incidentally, one of the effects of the law is that today's cellphones (which fit in a pocket) have more processing power than the 1960s computers used for the Moon visit (which occupied an entire very large room).
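To put rough numbers on that comparison, here is the doubling arithmetic behind Moore's law in a few lines of Python. The 1969 starting point is the Apollo era mentioned above; the result is a growth factor, not a precise transistor count.

    def moore_growth_factor(years, doubling_period_years=2):
        """Growth in transistor count after `years` of doubling every two years."""
        return 2 ** (years / doubling_period_years)

    # From the Moon-landing era (1969) to this article (2017): 48 years of doubling.
    factor = moore_growth_factor(2017 - 1969)
    print(f"about {factor:,.0f}x more transistors per chip")   # roughly 16.8 million times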

To understand what is going on, we need a little computer industry technology background. The industry initially measured processing speeds in seconds. Things moved faster, so the term "millisecond" (one-thousandth of a second) became standard. As the speed-up continued, the "microsecond" (one-millionth of a second) became the standard. One might imagine that things could not get much faster. Wrong. Today's processing speeds are measured in "nanoseconds", i.e. billionths of a second.

Moving to semiconductors, chip size is measured in terms of the distance between identical features in an array. The current unit for this distance is the "nanometer", i.e. a billionth of a meter. (A meter is roughly a yard; it might have been more fun if the industry had used the term "nanoyard". However, as the topic is worldwide technology, the metric system is used, as is the practice in global technical and scientific circles.) The leading edge for production semiconductor chips today is 10 nanometers.

Moore's law depends on shrinking the size of the chip's features while increasing the number of transistors on the chip, which increases processing power. Conventional wisdom was that it wouldn't be possible to push FinFET technology (which underlies chip manufacturing today) to much smaller dimensions without losing efficiency. Thus, the comments about the end of Moore's Law.

However, IBM, along with its partners, has now developed a process around the Stacked Nanosheet Gate-All-Around Transistor. It allows eventually building 5 nm chips with improved efficiency, which could not be achieved with FinFETs. The details of the process exceed the scope of this paper. (Those interested can start here[3].) The chart below provides a simplified view of the new IBM process compared with the existing industry standard 10 nm process.


Chart 1: This chart is an adaptation of a copyrighted IBM chart.
There are several items of note. IBM's new chips orient transistors horizontally versus today's vertical arrangement. This allows transistors to be stacked, putting more on each chip. Also, IBM's chips use a new way to form the sheet material that is much more efficient in power consumption; it delivers a 75% saving in power compared to existing 10 nm chip architectures. Finally, the new sheet formation process allows continuous fine-tuning for power and performance of specific circuits during manufacturing, something not possible with FinFET technology.

In summary, it appears to us that the IBM consortium has breathed new life into Moore's law. With this new architecture, the law looks to be applicable for the next decade or so, just when many were pronouncing it dead. However, we expect the interest and investments in such new technologies as quantum, data-centric computing and other approaches to grow.

[1] Publication in 1974 of a paper by Robert Dennard et al on MOSFET scaling rules for improving transistor density. See http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.334.2417
[2] Not actually a law, it was a prediction about how the semiconductor industry would evolve in terms of density and cost per transistor.


Thursday, August 10, 2017

Do two positive quarters signal a major turnaround for IBM Storage?

By Bill Moran and Rich Ptak


Although we don't usually cover IBM storage, it's worth calling attention to what appears to be a significant positive development. Ed Walsh joined IBM as general manager of IBM Storage and Software Defined Infrastructure on July 11, 2016. He joined IBM from Catalogic Software, where he had been CEO since 2014. IBM Storage, which had endured 21 consecutive quarters of declining revenue, has turned a corner. Revenue results since then appear below.

Storage Revenue Change by Quarter

Quarter:  1Q15   2Q15   3Q15   4Q15   1Q16   2Q16   3Q16   4Q16   1Q17   2Q17
Change:    -4%   -14%    -7%    -7%    -6%   -13%    -9%   -10%    +7%    +8%
The change represents a dramatic 17-point swing, from -10% in 4Q16 to positive 7% growth in 1Q17, growth that continued with another revenue increase of 8% in 2Q17.

With corporate IBM posting 20 or so quarters of declining revenues, IBM Storage reversing the trend by increasing revenue is great news. Storage requires proper investment to maintain such growth. Recent indicators, e.g. the announcement of a successful collaboration with Sony using sputtered[1] magnetic tape to advance toward a dramatic increase in tape storage capacities (to 330 TB), suggest they will get what's needed. And, IBM corporate can shift focus to other problem areas.

We expect IBM storage customers to feel reassured about existing IBM storage investments and benefits. They will view continued investment in and growth with IBM storage as good business sense. 

We fully understand that the storage marketplace remains intensely competitive. We recognize two quarters of growth doesn't guarantee success. Any benefits IBM enjoyed from the confusion resulting from the EMC/Dell merger will disappear. The IBM Storage team will need to remain very motivated and highly competitive.

We are comfortable ending on a positive note. Clearly, IBM management were hoping for exactly what Ed Walsh and his team are delivering. In July alone, they made significant storage announcements: one was a new all-flash solution for exabyte-scale data analysis with Hortonworks[2]; the other was a family of new flash arrays[3]. One model, when attached to the new z14 mainframe, delivers less than 20 microsecond response times. With more new offerings coming in 2H17, we look forward to the 3Q17 revenue numbers to see if the trend continues. We're inclined to think it will.




[1] The word “sputtered” is not a misprint. The new tape format is exactly that. It is a major breakthrough in tape technology. Google it for more information.

Monday, July 17, 2017

IBM z14 Mainframe = Trust and Security Benchmark

By Rich Ptak

Figure 1: z14 Design Goals (Image courtesy of IBM, Inc.)
IBM's introduction of the z14, the next generation mainframe, raises the bar not only for enterprise security, scalability and performance, but also addresses pricing issues: the first three with pervasive encryption and technological innovation, the latter with highly flexible container-based pricing models.

In their announcement details, IBM focused on enterprise and business relevance of the z14.
There are too many new features, capabilities, and innovative aspects to cover in one article.
We will highlight the design goals and provide a quick overview of the perennially interesting new pricing models. Then, we look at the Open Enterprise Cloud aspects in a little more detail.

It's the z14 For Trusted Computing - Overview

The amount of business-critical data collected for rapid analysis and feedback continues to explode. Digital transformation is well on its way to reality for enterprises of all sizes. Data sharing includes an increasing number of partners and customers. The issues around data security, data integrity, data authentication, and the risk of compromise are of increasing concern. At the same time, an operating model built on the hybrid cloud (with collocation, shared infrastructure, multi-tenancy, etc.) is clearly establishing itself as the preferred enterprise computing infrastructure model for the foreseeable future. This puts enormous pressure on existing security and data handling approaches to adapt and change, to be more innovative and reliable.

In the increasingly interconnected, interactive world, trust, security, and risk reduction and management are critically important. It is such an operating environment that IBM aims to serve as it introduces the z14, the latest generation of mainframe computing.

So, IBM operated with three basic design goals and one major pricing innovation for the z14.
The design goals (see Figure 1) first:
  1. A new security model - pervasive encryption as the new standard for data protection and processing, with no changes to apps or impact on SLAs - the security perimeter extends from the center to the edge - designed for security, processing speed and power; the most efficiently secure mainframe ever. 
  2. Fully leverage continuous, in-built intelligence - complement and extend human-machine interaction with direct application of analytics and machine learning capabilities to data where it resides - leverage continuous intelligence across all enterprise operations.
  3. Provide the most open enterprise operating environment - new hardware, open standard firmware, operating system, middleware and tooling that simplifies systems management for admins with minimal IBM z knowledge - more Open Source software supports agile computing, e.g. leverage and extend existing APIs as service offerings; easier scaling of cloud services.

Next, pricing innovation:

After some extensive research with customers, IBM is introducing three new pricing models.
The goal is to provide increased operational flexibility with prices that are significantly more
competitive and attractive for modern digital workloads. Container Pricing for IBM z is designed
to provide "simplified software pricing for qualified solutions, combining flexible deployment
options with competitive economics that are directly relevant to those solutions." We provide
some details later. First, a look at the Open and Connected aspect of the z14.

Open and Connected

Today's market demands open, agile operating environments and services, with new or extended capabilities introduced rapidly and seamlessly, all delivered through an agile, open enterprise cloud. The z14 software environment is designed to meet those expectations.
Advanced DevOps tools that leverage new and existing APIs can cut service build times by
90%. To speed innovation, IBM's extensive ecosystem of partners is developing and
delivering thousands of enterprise-focused, open source software packages to support the
mainframe in accelerating the "delivery of new digital services through the cloud." Let's look at
this a little more closely.

The new z14 is about leveraging APIs to speed development and ease access to mainframe capabilities. The goal is to make the power of the mainframe easier for developers and users to access, simpler to use, and quicker to deliver to market. This is to be achieved with new hardware, firmware, operating system, middleware and tooling that simplify systems management tasks, and that make the process easier for system administrators with minimal IBM z System experience and knowledge.

The procedure breaks down into four tasks:

  1. Discover - leverage existing investments by helping developers to quickly, automatically discover existing applications and services that can then be converted to API services. 
  2. Understand - prior to going into production or implementing application changes, identify the dependencies and interactions between the applications and APIs to understand how they are affected by any changes. Know where and what an API touches to avoid downtime and reworking of changes. It also minimizes the risk of removing protection of critical data by exposing an API. 
  3. Connect - provide easy, automated creation of RESTful services based on industry standard tooling to rapidly create new business value, e.g. link a vacation search to destination-appropriate clothing, hotels, interesting sites, etc. Or, associate an order for heavy equipment with a link that suggests purchasing insurance, maintenance, installation or operating services. (A sketch of what such a service looks like to a caller appears after this list.) 
  4. Analyze - use operational analytics and data collection to create an enterprise view of the mainframe and the surrounding operational environment. Integrate the z System data with data from over 140 different data sources in any format. Search, analyze and create a visual representation of service activities and interactions using SIEM tools, such as Splunk or open source Elasticsearch. This helps in early identification of potential problem areas such as performance bottlenecks or operational conflicts.
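To make the "Connect" step concrete (see the note in that item), here is a hedged Python sketch of what a mainframe transaction exposed as a RESTful service looks like to a consuming application. The host name, path and JSON fields are hypothetical placeholders for illustration, not an actual IBM interface.

    import requests

    BASE_URL = "https://mainframe-api.example.com"   # hypothetical gateway host

    def get_account_balance(account_id: str) -> dict:
        # Plain HTTPS and JSON: the caller needs no knowledge of the system of record behind it.
        resp = requests.get(
            f"{BASE_URL}/banking/accounts/{account_id}/balance",
            headers={"Accept": "application/json"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()                            # e.g. {"accountId": "...", "balance": ...}

    print(get_account_balance("12345678"))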

New capabilities dramatically add to the mainframe's already impressive performance and scalability. These include zHyperLink, a new direct-connect, short-distance link designed for low-latency connectivity between the z14 and FICON storage systems. It can lower latency by up to 10x, which can reduce response time by up to 50% in I/O-sensitive workloads, without any code changes. The z14 also offers, as a purchasable option, Automatic Binary Optimizer for z/OS(r), which automatically optimizes binary code for COBOL applications and can reduce their CPU usage by 80% without recompilation. One z14 can scale out to support an impressive 2 million Docker containers. Now, let's look at pricing.

Container Pricing for IBM z

Any mainframe discussion is bound to include a discussion of pricing policies, management,
and control. Customers want predictability - to know what the bill will be. They want
transparency - knowing how billing is calculated. They want visibility - to understand the
impact of changing or moving workloads. They want managerial flexibility - ability to adjust
workload processing and scheduling to balance their needs with computing costs.

IBM's solution is the concept of Container Pricing for IBM z, which provides line-of-sight pricing
to make the true cost highly visible. It applies to a collection of software collocated in a single
container. It determines a fixed price that applies to that single container[1] of software, with no impact on the pricing of anything external to the container.



[1] A container is a collection of software treated for pricing purposes as a single item. The collection is priced separately and independently of any other software on the system.

A container pricing solution can be within a single logical partition or a collection of partitions.
Multiple, collocated and/or stacked containers are permitted. Separate containers with different
pricing models and metrics can reside in the same logical partition. Container deployment is
flexible to allow the best technical fit, independent of the costs. Three types of Container
Pricing solutions are offered now:
  1. Application Development and Test solution (DevTest) - provides DevTest capacity that can be increased (up to 3x) at no additional MLC cost. Clients choose the desired multiplier and set the reference point for MLC and OTC software. Additional DevOps tooling with unique, discounted prices is available. 
  2. New Application solutions - special, competitive pricing for those adding a new z/OS workload to existing environments. There is no impact on existing workload prices. The container size determines the billing for capacity-priced IBM software. 
  3. Payments Pricing solution - offers on-premise Payments-as-a-Service on z/OS based on IBM Financial Transaction Manager. It applies to software or software plus hardware combinations. 
This is a simplified review of the new model. Contact IBM for more detailed information. IBM
will be refining and adding models to meet customer needs. Moving on to the other design goals.

Trust + Security through Pervasive Encryption

Data and application security in enterprise IT have taken a beating in the last few years. Traditional security techniques and barriers have fallen victim to numerous attacks as well as rapidly evolving threats and scams. Successful attacks and breaches came from sophisticated external criminals as well as, maliciously or accidentally, from insiders. Victims range from large, sophisticated financial institutions to national governments and ministries. Even blockchain ledgers have proven vulnerable to weak implementations and clever hackers.

With data widely recognized as an asset of escalating value, the risks and costs of such breaches increase. Traditional security methods focused on trying to prevent successful intrusions or minimizing damage with selective encryption, rapid detection, and blocking. Selective data encryption proved too expensive, resource intensive and inconsistent in application. And, significant risks remain when some data is left unprotected or weakly protected as hackers and intruders become more sophisticated. Also, new policies or evolving compliance requirements can make once non-critical data critical, further weakening selective methods.

IBM's solution was to design the z14 with hardware technology and software protections that make pervasive encryption, from the edge to the center and including the network, affordable, efficient and fast. All data is encrypted all the time, without requiring any changes to applications and without impacting Service Level Agreements (SLAs).

Application of Machine Learning

Successfully leveraging artificial intelligence (AI) in the enterprise has been an elusive goal
for decades. Early attempts were frustrated by limitations in expertise, processing power, high
costs and the sheer amount of effort required to build and test models.

Today, the maturation and automation of modeling techniques along with improvements in
infrastructure and technology have allowed AI, more accurately described as machine learning,
to come into its own in the enterprise. Examples in the z14 include optimized instructions,
faster processing of Java code, and improved math libraries that speed and improve analytics.
The 32TB of memory means the z14 can process more information and analyze larger
workloads and in-memory databases in real time. The results come in the form of promptly available, actionable business insights that lead to better customer service. The announcement contains much more about machine learning applications as well as Blockchain capabilities; topics for future coverage.

The Final Word

The new z14 is an impressive and worthy addition to the IBM mainframe family. It promises
"Trusted" computing on the platform that has been the benchmark for processor security. That
is a much-desired deliverable in a highly integrated, totally connected, rapidly evolving world of
digital enterprise. There are many more attractive features in the new z14. These include unique-to-IBM Blockchain services, which provide significant protection against fraud. There's
the ability to rapidly build microservices choosing from over 20 different languages and
databases to use. There's the free access to the mainframe for those interested in testing the
ease of use features or expanding their mainframe skillset. (See https://ibm.biz/ibmztrial).

By delivering efficient, affordable, speedy 100% end-to-end encryption of all application and database data, the z14 pushes infrastructure boundaries to achieve a uniquely secure environment without requiring any changes to applications, services or data. IBM has also implemented unique encryption key protection that removes any risk of keys being exposed. To do so without changing applications or impacting SLAs is remarkable. IBM estimated encryption overhead at "low-to-mid" single digits.

IBM's focus on automating and facilitating the utilization and optimization of API services is a
very smart move on their part. An on-going 'critique' of the mainframe has been that it is
inaccessible, living and operating in its own isolation. While that was true in the past, the last few years have
seen a dramatic alteration with the emergence of the "Open, Connected and Innovative"
mainframe. The change has been rapid and significant.

The significant impact of the introduction of Linux on Z and the proliferation of numerous Open
Standard solutions, APIs, tools and interfaces cannot be ignored. The introduction and
movement of numerous OpenStack products to the mainframe, along with the addition of agile,
Open Source DevOps tools and APIs have made the mainframe's extensive capabilities easier
to access and faster to exploit by a much wider audience. This is reflected in the growth of the
highly diverse ecosystem of mainframe partners, ISVs and developers working with IBM. The
z14 looks to accelerate that process.

The mainframe, IBM's longest running product, has seen its ups and downs over the last 50+
years. Anticipation and predictions of its death have filled column space of way too much IT
commentary, stories and speculation. The z14 fills a well-defined, valuable place in the IT
infrastructure.