Tuesday, November 14, 2017

Digital Transformation Drives Mainframe’s Future


Digital transformation is making the mainframe more mission critical to business growth than ever before. With 70% of the world’s corporate data and over half of the world’s enterprise applications running on mainframe computers, they are at the core of just about every transaction. A single transaction can, in fact, drive up to 100 system interactions. The continued increase in mainframe transaction volumes, growing on average 7-8% a year for 78% of customers, has even led to a new buzzword: The Connected Mainframe.

According to IDC’s research, connected mainframe solutions generate almost $200 million in additional revenue per year while simultaneously improving staff productivity and cutting operational costs. Over 50% of the benefit value comes from higher transaction volumes, new services, and business expansion. Businesses rely on mainframes to:
  • Perform large-scale transaction processing (thousands of transactions per second)
  • Support thousands of users and application programs concurrently accessing numerous resources
  • Manage terabytes of information in databases
  • Handle large-bandwidth communication


The growth of transaction volumes and the diversity of applications connecting into the mainframe can lead to significant operational challenges. With more mobile-to-mainframe applications to manage and more data to transact, including eventually blockchain data, organizations need to improve their mainframe operations model drastically. Reactive approaches to mainframe management just can’t keep up with the velocity of change and dramatic growth. Enterprises are losing an average of $21.8 million per year from outages, and 87% of these enterprises expect this downtime cost to increase in the future. An astounding 66% of enterprises surveyed admit that digital transformation initiatives are being held back by unplanned downtime.

Improving the enterprise’s ability to support increased mainframe workloads is why machine learning, augmented intelligence, and predictive analytics are critical to the CA Mainframe Operational Intelligence solution. Embedded operational intelligence proactively detects abnormal patterns of operation by ingesting operational data from numerous sources. This helps to anticipate and avoid problems through:
  • Detecting anomalies quickly and delivering proactive warnings of abnormal patterns
  • Using advanced visualization and analysis that accelerates issue triage and root-cause analysis
  • Deploying multiple data collectors that work synergistically to provide broad visibility, more in-depth insights and increased accuracy of predictions
  • Delivering dynamic alerts that improve mean time to resolution (MTTR)
  • Combining simplified visualization of time-series data with deep-dive analysis tools
  • Clustering alerts automatically to correlate related alerts and symptoms
  • Removing irrelevant data points from reports to provide more actionable insights
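
To make the anomaly-detection idea concrete, here is a minimal sketch of one common approach: flag metric samples that deviate sharply from a rolling baseline. It is only an illustration of the general technique, written in Python; the metric stream, window size, and threshold are assumptions, and it does not represent how CA Mainframe Operational Intelligence is actually implemented.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Flag (timestamp, value) samples that deviate from a rolling baseline
    by more than `threshold` standard deviations (a simple z-score test)."""
    baseline = deque(maxlen=window)
    anomalies = []
    for ts, value in samples:
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((ts, value))  # candidate for a proactive warning
        baseline.append(value)
    return anomalies

# Hypothetical CPU-utilization stream: steady around 62-66% busy, then a spike
cpu_samples = [(t, 62.0 + (t % 5)) for t in range(100)] + [(100, 97.0)]
print(detect_anomalies(cpu_samples))  # -> [(100, 97.0)]
```

A real solution would also learn seasonality and correlate many metrics at once, but the principle is the same: establish what normal looks like, then surface deviations early enough to act on them.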

CA Mainframe Operational Intelligence consumes data from multiple CA solutions and directly from the IBM® z Systems® environment through SMF records. Raw alerts from performance, network and storage resource management tools are automatically correlated to surface specific issues and provide predictive insights for each issue. With machine learning and intelligence, wide data sets lead to more accurate predictions and better relationship and pattern analysis. This insight also includes drill-down views and probabilities, which can trigger automated problem remediation. This capability is uniquely embedded into the management environment to more proactively optimize mainframe performance and availability with fewer resources.
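
The automatic correlation of raw alerts can be sketched the same way: alerts from different tools are grouped when they refer to the same resource within a short time window, so one underlying issue surfaces as a single cluster rather than a flood of symptoms. The alert fields and the five-minute window below are illustrative assumptions, not the product's actual correlation logic.

```python
from collections import defaultdict

WINDOW_SECONDS = 300  # assume related symptoms arrive within five minutes

def cluster_alerts(alerts):
    """Group raw alerts that share a resource and fall into the same time bucket."""
    clusters = defaultdict(list)
    for alert in alerts:
        bucket = alert["time"] // WINDOW_SECONDS
        clusters[(alert["resource"], bucket)].append(alert)
    return list(clusters.values())

# Hypothetical raw alerts from performance, network and storage tools
raw_alerts = [
    {"time": 10,  "resource": "DB2A",  "source": "performance", "msg": "response time high"},
    {"time": 95,  "resource": "DB2A",  "source": "storage",     "msg": "volume nearly full"},
    {"time": 400, "resource": "CICS1", "source": "network",     "msg": "packet loss"},
]
for group in cluster_alerts(raw_alerts):
    print([a["msg"] for a in group])
# ['response time high', 'volume nearly full']
# ['packet loss']
```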

This modern approach to operational management will help organizations onboard new IT staff to manage the mainframe moving forward, while freeing limited mainframe experts to focus on essential tasks. Using machine learning and advanced analytics, your entire team can now act on potential issues much earlier, isolate the real root cause faster and ultimately remediate issues before they become revenue-impacting incidents.


( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)





Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Sunday, November 12, 2017

The Endpoint Imperative: Global Security Compliance. Are you ready?



China has its Cybersecurity Law. Next May, the General Data Protection Regulation – or GDPR – goes into effect for the European Union. Research shows most organizations just aren’t ready for these and other measures.

Tune into this episode of “The Endpoint Imperative,” a podcast series from Intel, to hear from Intel’s Yasser Rasheed, Director of Business Client Security, on how a combination of protection at the hardware and software levels can help organizations gain compliance and avoid breaches, fines, and financial impact.

Kevin L. Jackson: Hi everyone and welcome to this episode of the Endpoint Imperative, a podcast series from Intel. My name is Kevin L. Jackson and I'll be your host for this series. The topic for this episode is "Global Security Compliance. Are you ready?" With me is Yasser Rasheed, Director of Business Client Security with Intel. Yasser, welcome.

Yasser Rasheed: Thank you for hosting me today. I'm very excited for this talk.

Kevin L. Jackson: It's really our pleasure. Let's get started on this. The security world is really abuzz. We talk about GDPR, or the General Data Protection Regulation. This is Europe's looming security regulation. Can you tell us a little more about it?

Yasser Rasheed: Absolutely. You know Kevin, the industry is shifting and evolving very quickly in this space. We're excited about the positive changes taking place in the industry. The GDPR, or General Data Protection Regulation, coming out of Europe is really a replacement for the European directive that they had in the past. It covers a whole slew of data protection and security regulations that really cater to protecting the end user and the end user's data.

Kevin L. Jackson: I understand it's really the hefty fines that have information security officers worried. I'm told they can be the greater of either 20 million euros or 4% of global annual revenues. Why is this putting the spotlight on security and compliance in North America? I thought this was a European thing, right?

Yasser Rasheed: It is not only a European thing. It affects anyone that deals with European citizens or does business in Europe, so global companies are really impacted by this regulation and they need to pay attention to it.




Kevin L. Jackson: This is really important to you. From your point of view, at the IT and operations level, what should these companies be really focused on?

Yasser Rasheed: The companies need to first get educated on the new regulations. It is going to be applicable, or enforced, starting May 2018. It is really coming very soon. The GDPR regulation is really a legal framework that comprises a number of data security and privacy guidelines for organizations. For example, they need to make sure that they look at how the data is processed and how the data is protected. Who gets access to the data, at what point in time, and with what tools? Is everything audited and logged in the right way so that they have the right traceability? There are a number of things that organizations, and especially IT and chief information security officer teams, need to pay attention to in this case.

Kevin L. Jackson: With all that in mind, what should these enterprises be thinking about when it comes to data protection at the hardware and the software level?

Yasser Rasheed: That's a great question. First, let's step back and look at what's happening in the industry nowadays. The whole space of cybersecurity is full of hackers and really malicious users trying to get access to information, and this is impacting everyone. We see breaches every day. Solutions today are available in software; however, we believe that software alone cannot protect and cannot enforce the level of readiness for GDPR and the like. What we really look for is the role of the hardware to augment and complement the role of the software in this space. More specifically in the security space, there are many hardware products that companies like Intel are offering to protect the identity of the user and to protect the data of the user. These are tools that organizations can take advantage of to be ready for GDPR compliance and, in general, to have a healthier and stronger security posture in their environment.

Kevin L. Jackson: Thank you very much for sharing that important point. Unfortunately, though, we're at the end of our time for this episode. Many thanks to Yasser Rasheed with Intel for his insights and expertise.



( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)





Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Sunday, November 5, 2017

The Endpoint Imperative: IT Spending: Setting Priorities in a Volatile World




Fast-evolving trends are changing the way IT thinks about security. To stay secure and productive, IT operations must excel at the fundamentals: PC refreshes for security, and optimizing end-user computers with Microsoft Windows 10. In “The Endpoint Imperative,” a podcast series from Intel, learn from the experts how hardware and software together make for optimized security.

In Episode 1, "IT Spending: Setting Priorities in a Volatile World," Intel’s Kaitlin Murphy talks PC refresh, security, and productivity. Here she also addresses the key trends that drive IT spending decisions. As Director of Marketing for business clients with Intel, Kaitlin leads the business client marketing organization responsible for mobile and desktop platforms, vPro, Intel Unite, and other products.

Kevin L. Jackson: Let’s get started. Kaitlin, IT spending is up, and this is being driven by cloud computing and digital transformation. What could this mean to Intel?

Kaitlin Murphy: Digital transformation is really the changes associated with applying digital technology to people. In the case of my group, it's about businesses and employees: the end user, IT, facilities, and even other groups. Digital transformation touches all aspects of a business, like smart offices or smart workspaces. How do you make your environment aware and then have it take action on your behalf?

Kevin L. Jackson: Can you give us some examples of this?

Kaitlin Murphy: Sure. This could be something as simple as air conditioning. When the room's unoccupied, the air conditioning is off, but when it sees somebody come in, it knows to adjust the temperature to their preference. It could even be something more complex, like the room knowing who you are and contextually retrieving information based on your conversation in real time, knowing that you're allowed to access that information.



Kevin L. Jackson: That's amazing. One of the real key driving components of this spending has been personal computer, or PC, sales. This is also driving the PC refresh cycle. Can you talk about those drivers and their impact on organizations?

Kaitlin Murphy: Absolutely, totally agree. PC refresh, or PCs in general, are a huge piece of digital transformation. Today, it's heavily influenced by a variety of things, one of which is security and manageability. In general, a newer PC with a newer operating system is more secure and more manageable. That means less burden on IT resources, lower lifetime costs, and higher employee productivity and satisfaction. Having performant, secure, managed, up-to-date devices is critical for a business of any size. Not only does it help with the items we talked about above, but more and more we're seeing that a company with a digital transformation strategy is better able to attract and retain the talent it wants. It literally affects every single aspect of a business.

Kevin L. Jackson: Let’s zoom in on security. How do you see that factoring in on the spending decisions?

Kaitlin Murphy: Corporations are a major target for bad actors. Literally, in one place you've got the crown jewels. You've got IP. You've got customer information. You've got employee information and more. Because of this, companies have to have a comprehensive security strategy in place and then the products to execute it. Part of executing that strategy means having secure PCs. Like we mentioned before, newer PCs are typically more secure, and that's for a variety of reasons. First, you've got the latest and greatest technologies and solutions and the PC ecosystem behind them. Second, with an older PC, bad actors have simply had more time to find the holes and to exploit them.

Kevin L. Jackson: It really seems like you’re focusing on the PC instead of the data center. Why is that?

Kaitlin Murphy: You need to focus on both. You're right, the PC is a critical piece. One thing that not everyone knows is that when an attack is launched on an enterprise, the most common route into that company is through the endpoint. What happens is a bad actor captures the credentials of an employee, and then they can access their PC. When they can access the PC, they can access all the data on that PC and, typically, any place that PC is authorized to access as well. Newer PCs have solutions to help minimize this risk.

You can protect your credentials in hardware, for example, so they're harder to spoof or otherwise exploit. When we look at IT support desk calls, the type of call that's grown the most in the past few years is security-related incidents, like viruses or malware. These incidents place a resource burden on the company, not to mention the security risk. IT now has to make a decision: is the cost of protecting and securing that older PC, plus the security risk, worth more or less than just buying a new PC with newer security built in?



Kevin L. Jackson: Now, let’s zoom out to 18 or 24 months from now. What considerations do you see impacting IT budgets, especially, the spending on PCs and other endpoint devices?

Kaitlin Murphy: Well, while technology moves quickly, it sometimes moves a little bit slowly as well. I think the trends we talked about today are very firmly entrenched and are the ones that we're going to continue to see in the next 18 to 24 months: security, manageability, even the value of local compute performance will all be relevant.

Kevin L. Jackson: [chuckles] Wait a minute. Why do you have to worry about local compute? Everyone's going to the cloud.

Kaitlin Murphy: Local compute's going to continue to be important. There are some things you just don't want in the cloud and some things you can do better locally, not to mention that when you have performance on the endpoint you can run some of these security solutions we've talked about today. I also think there's a trend around security innovation, and that's definitely not going anywhere. Look at endpoint security solutions alone: the average US company has to use six different endpoint solutions just to secure a single device.

There are also a lot of trends around unified endpoint management. How can an IT organization manage its entire fleet, which at this point is usually more than one PC per person, with a single set of tools? This, coupled with more ambient compute devices (think of workplace transformation devices that don't necessarily have a dedicated user), is going to increase the need for a single out-of-band management solution. The reason I say out-of-band management is that you need to be able to manage your device regardless of OS state.

Especially as organizations become more geographically dispersed, this is increasingly important. Collectively, it seems like there's going to be a continued strain on IT resources. Budgets might be up, but they aren't necessarily keeping pace with the number of new trends that IT has to track, make decisions on and execute against. This is going to pose an important question and decision for IT on how to best allocate resources to serve both the strategic and operational initiatives of the organization.

Kevin L. Jackson: Unfortunately, we are at the end of our time for this episode but thanks to Kaitlin Murphy with Intel, for her insights and expertise.


( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)





Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Tuesday, October 24, 2017

Top 1000 Tech Bloggers

The Rise "Top 1000 Tech Bloggers" leaderboard recognizes the most inspiring tech journalists and bloggers active on social media. It uses Klout scores (50%) and the blogger's Twitter conversations on "tech" (50%) to rank these leaders on their social media influence. The first 100+ tech writers and bloggers on this board were picked from a Twitter search. The board curator's goal is to grow this to 1000+ names and make it the definitive go-to list for people to find who they can follow to get all tech news and analysis.

This Week's Top 5!




If you are a tech blogger creating fresh content regularly but not on the list already, visit the site and join the list. If you would rather nominate someone, please send an email to support@rise.global with your nominee's Twitter handle.

( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)





Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Saturday, October 14, 2017

IBM - The Power of Cloud Brokerage

Hybrid cloud adoption is now mainstream, and you are making decisions every day about how to transform application and infrastructure architectures, service delivery, DevOps, production operations and governance. With Cloud and Systems Services you can rethink how technology can be used to give you more power than ever before.

Cloud and Systems Services, part of IBM Services Platform with Watson, are infused with automation and cognition so you stay ahead of the needs of your ever-changing business.







To learn more or to schedule 30 minutes to discuss your Enterprise IT issues, click here: https://ibm.co/2g7lHR3



This post was brought to you by IBM Global Technology Services. For more content like this, visit Point B and Beyond.







Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)







Friday, September 29, 2017

More SMB Love Needed



In a recent post, titled “10 Surprising Facts About Cloud Computing and What It Really Is”, Zac Johnson highlighted some interesting facts about cloud computing in the SMB marketplace:
  • Cloud computing is up to 40 times more cost-effective for an SMB, compared to running its own IT system.
  • 94% of SMBs have experienced security benefits in the cloud that they didn’t have with their on-premises service.
  • Recovery times for SMBs using cloud computing are four times faster than for those not using cloud services.
  • For SMBs, energy use and carbon emissions could be cut by 90% by using cloud computing, saving the environment and energy costs.

These advantages strongly suggest that SMB information technology should be dominated by the adoption of cloud computing services.  Although one of the most prominent of these cloud services is Microsoft’s Office 365 (O365), a recent survey cited by CIO.com suggests that 83% of U.S. small and medium businesses (SMBs) have yet to use any form of O365.  If cloud services can deliver such remarkable improvements, why are SMBs holding back?

According to the survey, part of the reason is that SMBs often lack the required internal resources needed to analyze the cloud migration opportunity.  This type of analysis often requires the testing of multiple cloud-based business and productivity services as well as more focused attention on data protection capabilities.  Many SMB executives see cloud computing as nothing but marketing hype and are more focused on running their businesses.  Cloud services may also be perceived as being very confusing, technically overwhelming, and even frightening.  Another key technical challenge is dealing with a more sophisticated networking environment that may require virtual private network (VPN) management and remote infrastructure access.

The networking challenge is further exacerbated by the requirement to support a distributed mobile workforce with secure mobile device access to company network resources.  NETGEAR is making an impressive bid to address this challenge by their recent release of a new line of small business switches, access points, and NAS devices equipped for native cloud management via a new mobile application.  The app, called Insight, is designed to let administrators or unskilled end users discover and configure multiple wired and wireless network devices.  The users can then monitor and manage these network resources remotely through an intuitive touchscreen interface.  Insight is designed to fill a critical gap in the networking market for simple SMB solutions that provide robust functionality.


Switching from software or CPU license-based pricing to the subscription-based utilization models offered by cloud service providers can also require an SMB to conduct a careful economic analysis of the change.  This change can potentially divert finance and IT staff from their core jobs. The reality is that most cloud services aren’t designed for SMB consumption.  Small businesses are therefore likely postponing cloud migration because they don’t know where to start or don’t possess the internal resources to manage through the transition.

This small business industry challenge is bound to become harder. According to International Data Corporation (IDC), small and medium business spending on IT hardware, software, and services, including business services, is expected to increase at a compound annual growth rate (CAGR) of 4.2%, reaching $668 billion in 2020.

As SMB cloud adoption grows, the need for more cloud transition support for the SMB marketplace will also continue to grow.  As a historically underserved market, more SMB-tailored cloud services and cloud adoption support are desperately needed.  Unfortunately, the SMB market is typically seen as an afterthought by enterprise vendors, and small business solutions are designed as dumbed-down versions of the enterprise offerings. Let’s hope that more companies like NETGEAR will wake up and serve this clear and growing SMB marketplace need.

( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Monday, September 4, 2017

ATMs Are IT Too!


That world of homogeneous IT technology managed entirely by the internal IT organization has long disappeared.  Operations today require efficient and global management of technologically heterogeneous environments. The challenges and mistakes organizations make when tackling this important task include:
  • Operational disconnects caused by ineffective internal communications;
  • Resource contention when multiple, independently developed project plans compete;
  • Incompatible technical documentation; and
  • Inconsistent communications with vendors.
A case in point is the finance industry, which has endured some rather unique pains in this area, especially when it comes to ATM fleet management. According to Diebold Nixdorf, a world leader in connected commerce, this problem has been caused by three major trends that have changed the nature of ATM network management.
The first, and broadest, driver of these changes has been the rapid adoption of newer and more sophisticated technology. Some reports cite that in 2014, up to 95% of the world’s ATMs were running Windows XP. That year, the entire industry was basically forced to transition to Windows 7, and this was when some banks were still using OS/2!
“These more sophisticated systems, requiring updates, patches, and support in real-time, along with software and hardware that can operate nimbly in an agnostic ecosystem. And as more and more transactions are migrated to self-service terminals, the devices must advance in complexity, too.”
Security challenges, the second key trend, are also morphing daily as threats become more and more diverse. Specific problems include the physical security of the cash inside the terminal, malware threats to software and the use of data-skimming devices. As banks expand their self-service networks, the challenge of delivering greater functionality and more complex transactions within an ever tighter regulatory environment for personally identifiable information is daunting.
The final trend is around management and overhead. As the traditional focus of IT support groups has shifted from PCs, firewalls, and routers toward the administration of an extensive network of remote self-service terminals, the scope of the required core competencies has changed tremendously. These teams must now deal with multi-vendor hardware, software, security, and services. To deal with these tectonic shifts, financial institutions are now looking to partner with technology services companies.
In this strategy shift, they are looking for a provider that brings broad multi-vendor management skills and analytics-based, proactive technical support. Additional criteria for selecting a multi-vendor management partner include:
  • Global presence with the ability to provide on-site engineering support to any ATM site;
  • Demonstrated continuity of support as exhibited by an ability to dispatch the same customer engineers on most occasions;
  • Customer engineers with proven and demonstrable experience with the same type of installation and configuration;
  • Support organizations with the breadth and depth of resources necessary to deliver high-quality support with minimal service disruption; and
  • A global logistics infrastructure capable of providing the timely delivery of parts from any vendor, if required.
IBM has proven to be a major player in this space. Their ATM and branch services support provides a predictive maintenance solution that uses advanced analytics to identify potential concerns. They then work with the financial organization’s IT teams to schedule proactive support services. This proactive approach ensures proper intervention before customer service is disrupted. As a proven, global provider of multi-vendor service support, IBM can be your single agnostic vendor supporting your multi-vendor ATM environment. If your team is in need of a multi-vendor support partner, consider IBM.

This post was brought to you by IBM Global Technology Services. For more content like this, visit Point B and Beyond.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)


Tuesday, August 29, 2017

Digital Transformation Asset Management



Today’s businesses run in the virtual world. From virtual machines to chatbots to Bitcoin, physical has become last century’s modus operandi.  Dealing with this type of change in business even has its own buzzword – Digital Transformation.  From an information technology operations point of view, this has been manifested by organizations increasingly placing applications, virtual servers, storage platforms, networks, managed services and other assets in multiple cloud environments.  Managing these virtual assets can be much more challenging than it was with traditional physical assets in your data center.  Cost management and control are also vastly different than the physical asset equivalent.  Challenges abound around tracking and evaluating cloud investments, managing their costs and increasing their efficiency.  Managers need to track cloud spending and usage, compare costs with budgets and obtain actionable insights that help set appropriate governance policies.

The cloud computing operational expenditure (OPEX) model demands a holistic management approach capable of monitoring and taking action across a heterogeneous environment.  This situation is bound to contain cloud services from multiple vendors and managed service providers.  Enterprises also need to manage services from a consumption point of view. This viewpoint looks at the service from the particular application down to the specific IT service resources involved, such as storage or a database. Key goals enterprises need to strive for to be successful in this new model include:


  • Obtaining ongoing visibility into true-life cloud inventory;
  • Viewing current and projected costs versus industry benchmarks;
  • Establishing and enforcing governance control points using financial and technical policies;
  • Receiving and proactively responding to cloud cost and operational variances and deviations;
  • Gaining operational advantages through advanced analytics and cognitive computing capabilities;
  • Simulating changes to inventory, spend goals and operational priorities before committing;
  • Managing policies through asset tagging across providers and provider services; and
  • Identifying and notifying senior managers about waste and opportunities for cost savings.
Accomplishing these goals across a hybrid IT environment will also require timely, accurate and consistent information delivery to the organization’s CIO, CFO, IT Financial Controller and IT Infrastructure and Operations Managers.  Ideally, this information would be delivered via a “single pane of glass” dashboard.

One path towards gaining these capabilities would be through the use of a cloud services brokerage platform like IBM® Cloud Brokerage Managed Services – Cost and Asset Management. This “plug and play” service can assist in the management of spending and assets across hybrid clouds by visualizing data that provides focus on asset performance.  Through the use of predictive analytics, it can also provide insight-based recommendations that help in the prioritization of changes according to their expected level of impact.  Analytics enables an ability to recalibrate cost by comparing planned versus actual operational expenditures.  The built-in cloud service provider catalog, pricing, and matching engines can also help organizations find alternative providers more easily.  Using IBM Watson® cognitive capabilities, IBM Cloud Brokerage Managed Services – Cost and Asset Management will also highlight cloud best practices and expected results based on IBM’s rich knowledge base of cross-industry cloud transition experience.

Operating a business from a virtual IT platform is different.  That is why advanced cost and asset management skills, capabilities and tools are needed.  According to Gartner, more than US$1 trillion in IT spending will be directly or indirectly affected by the shift to cloud during the next five years. This makes cloud computing one of the most disruptive forces of IT spending since the early days of the digital age.  You and your organization can be ready for these tectonic changes by implementing the straightforward five-step process supported by IBM Cloud Service Brokerage capabilities:


  1. Establish governance thresholds and policies for services;
  2. Connect the advanced management platform across all cloud service accounts;
  3. Track the costs of the services, including recurring and usage-based costs;
  4. Enforce compliance on the costs and asset usage using the purpose-built cost analytics engines; and
  5. Simulate and optimize the control and compliance actions and better control your costs.
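
As a rough illustration of steps 1 and 4, the sketch below defines a couple of governance thresholds and checks tracked usage against them. The policy fields, tags, and cost records are hypothetical; the IBM service surfaces this kind of check through its own dashboards and analytics engines rather than through code like this.

```python
# Hypothetical governance policies keyed by cost-center tag
policies = {
    "web-frontend": {"monthly_budget_usd": 12000, "allowed_regions": {"us-east", "eu-west"}},
    "analytics":    {"monthly_budget_usd": 30000, "allowed_regions": {"us-east"}},
}

# Hypothetical usage records pulled from provider billing feeds
usage = [
    {"tag": "web-frontend", "region": "us-east", "month_to_date_usd": 13100},
    {"tag": "analytics",    "region": "eu-west", "month_to_date_usd": 9800},
    {"tag": None,           "region": "us-east", "month_to_date_usd": 2200},
]

def check_compliance(policies, usage):
    """Return governance violations: unknown cost centers, budget overruns, disallowed regions."""
    violations = []
    for record in usage:
        policy = policies.get(record["tag"])
        if policy is None:
            violations.append((record["tag"], "untagged or unknown cost center"))
            continue
        if record["month_to_date_usd"] > policy["monthly_budget_usd"]:
            violations.append((record["tag"], "monthly budget exceeded"))
        if record["region"] not in policy["allowed_regions"]:
            violations.append((record["tag"], f"region {record['region']} not allowed"))
    return violations

print(check_compliance(policies, usage))
# -> a budget overrun, a disallowed region, and an untagged asset
```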



This post was brought to you by IBM Global Technology Services. For more content like this, visit IT Biz Advisor
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)

Sunday, July 30, 2017

The Game of Clouds 2017

The AWS Marketplace is growing at breakneck speed, with 40% more listings than last year! This and more insights were revealed when CloudEndure used their custom tool to quickly scan the over 6,000 products available on AWS Marketplace. The top offerings are highlighted in the image below, but additional detail is available on their blog.


"So whether you are a Stark, a Targaryen, or even a Lannister, the Game of Clouds map will help you attain the crown of AWS cloud computing perfection."




( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)




Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Friday, July 14, 2017

Managing Your Hybrid Cloud

Photo credit: Shutterstock

Runaway cloud computing costs may be causing an information technology industry crisis.  Expanding requirements, extended transition schedules and misleading marketplace hype have made “Transformation” a dirty word.  Questions abound about how to manage cost variances and deviations across assets and suppliers. A recent Cloud Tech article explained that while public cloud offers considerable cost savings in comparison to private or on-premises alternatives, there may also be significant hidden costs. Operational features like auto-scaling can cause costs to soar in line with demand for resources, making predicting costs difficult and budgeting even harder. There is also an acute need for a holistic and heterogeneous system that can track the costs of cloud services from the point of consumption (e.g., an application or business unit) down to the resources involved (e.g., storage or compute service).
Sitting at the apex of all of these issues is the CFO or corporate Vice President of Finance. As the key budget manager for most organizations, this is where many of the key financial decisions are made. This is also where the spectrum of IT cost responsibility extends from the pure financial analytics tasks of:
  • Optimization;
  • Forecasting and projection; and
  • Financial reporting
To the pedestrian but crucial accounting tasks like:
  • Show-backs and charge-backs;
  • Charge reconciliation; and
  • Budgeting policy management
The most prevalent cause of these financial problems is a failure to keep track of virtual assets in the cloud.  Many companies have lost complete visibility and control of cloud computing costs simply because they failed to tag and track these assets.  Unfortunately, this error is typically realized only after hundreds or even thousands of cloud-based assets have been instantiated.
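
A minimal sketch of the tag-then-track discipline described above: every asset gets an owner and cost-center tag at provisioning time, and spend is then aggregated by tag so nothing drifts out of sight. The record layout is hypothetical and provider-neutral; in practice the same idea is applied through each provider's tagging APIs and billing exports.

```python
from collections import defaultdict

# Hypothetical inventory: assets tagged (or not) at provisioning time
assets = [
    {"id": "vm-001", "tags": {"cost_center": "marketing", "owner": "web-team"}, "monthly_usd": 410.0},
    {"id": "vm-002", "tags": {"cost_center": "finance",   "owner": "erp-team"}, "monthly_usd": 275.0},
    {"id": "db-001", "tags": {},                                                "monthly_usd": 900.0},
]

def spend_by_cost_center(assets):
    """Aggregate monthly spend per cost-center tag; untagged assets are the blind spot."""
    totals = defaultdict(float)
    for asset in assets:
        center = asset["tags"].get("cost_center", "UNTAGGED")
        totals[center] += asset["monthly_usd"]
    return dict(totals)

print(spend_by_cost_center(assets))
# {'marketing': 410.0, 'finance': 275.0, 'UNTAGGED': 900.0}
```

The untagged bucket is exactly the visibility gap described above: once it grows to hundreds of assets, nobody can say who owns the spend or whether it is still needed.
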
Experts have also outlined a five-step process that helps enterprises bring control and governance to hybrid cloud IT costs.
Step 1: Establish governance thresholds and policies for services
Step 2: Access your cloud service provisioning accounts
Step 3: Track the costs of the services, including recurring and usage-based costs
Step 4: Enforce compliance on the costs and asset usage using the purpose-built cost analytics engines; initiate and track changes
Step 5: Simulate and optimize the control and compliance actions and better control your costs
Managing spend and assets across hybrid clouds also requires the availability of actionable data. This will help the CFO focus on which assets are performing as expected and which are not. Predictive analytics and insight-based recommendations can also help to drive the prioritization of the changes that will have the greatest impact.
These sorts of challenges can certainly be acute, but the solution for helping organizations gain control of these issues will typically include holistic hybrid cloud management. In fact, financial organizations are just now realizing their critical role in managing the operational expenditure model embraced by cloud computing. Services specifically designed to address the financial management aspects of cloud metering, billing, workload management and service provisioning policies are just now hitting the marketplace.
One of these leading financial management services is provided by IBM. Their newly launched Cost and Asset Management application helps companies address escalating cloud costs and complexity while offering guidance into the next steps of hybrid cloud transformation. Through the use of predictive analytics to monitor and provide recommendations on a single dashboard, this service can give finance and IT one system of reference for hybrid cloud governance. This particular service can establish and enforce governance control points using financial and technical policies. Its ability to easily combine asset tags with policies can help the CFO identify and respond to financial variances before they become problems. Through the innovative use of Watson Cognitive services, this particular application can tap into years of IBM experience to offer recommendations using built-in advanced analytics and cognitive capabilities. Acting on these suggestions can streamline cloud usage, predict future trends and identify waste.
If your company is currently experiencing these digital transformation challenges, learn more about managing hybrid IT finances at ibm.biz/ExploreCloudBrokerage. Establishing a focus on cloud governance, cost and asset management is a truly essential step towards expanding the operational benefits of hybrid cloud.


This post was brought to you by IBM Global Technology Services. For more content like this, visit IT Biz Advisor.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2016)



Thursday, June 29, 2017

American Airlines Adopts Public Cloud Computing


Did you know that the reservations systems of the biggest carriers mostly run on a specialized IBM operating system known as the Transaction Processing Facility (TPF)? Designed by IBM in the 1960s, it was built to process large numbers of transactions quickly. Although IBM is still updating the code, the last major rewrite was about ten years ago. With all the major technology changes since then, it’s clear that IBM has already accomplished a herculean task by keeping an application viable for over 50 years!

Just like America’s aging physical infrastructure, the airlines are suffering from years of minimal investment in their information technology. This critical failure has been highlighted by a number of newsworthy incidents, including:

  • Delta, April 4, 2017 - Following storms that affected its Atlanta hub, Delta's crew-scheduling systems failed, causing days of operational issues for the airline. BuzzFeed reports that flight staff were left stranded and unable to log in to internal systems. There were reportedly hours-long wait times on the crew-scheduling phone system.
  • United, April 3, 2017 - A problem with a system used by pilots for data reporting and takeoff planning forced United to ground all flights departing from George Bush Intercontinental Airport in Houston for two hours. This is the third time that this system has been blamed for causing operational problems at United. Around 150 flights operated by United or its regional partners out of IAH were delayed that day, and about 30 were canceled, according to flightaware.com.
  • ExpressJet, March 20, 2017 - A system-wide outage at ExpressJet delayed the flights it operates as Delta, United, and American Airlines for hours. The FAA issued a ground stop at the airline's request, preventing its planes from taking off. That day, it had 423 delays and 64 cancellations, about a third of its scheduled operations, according to flightaware.com.
  • JetBlue, Feb. 23, 2017 - An outage at JetBlue forced the airline to check in passengers manually in Ft. Lauderdale and Nassau. Passengers were unable to use mobile boarding passes and check-in kiosks.

While these incidents can be scary, American Airlines has recently taken a major step towards avoiding such events by migrating a portion of its critical applications to the cloud. In a recent announcement, the carrier said that it will be moving its customer-facing mobile app and its global network of check-in kiosks to the IBM Cloud. In addition, other workloads and tools, such as the company’s cargo customer website, will also be moved there. In a parallel effort, all of these applications will be rewritten so that they can leverage the IBM Cloud Platform as a Service (PaaS). This will be done using a microservices architecture, design thinking, agile methodology, DevOps, and lean development.

“In selecting the right cloud partner for American, we wanted to ensure the provider would be a champion of Cloud Foundry and open-source technologies so we don’t get locked down by proprietary solutions” said Daniel Henry, American’s Vice President Customer Technology and Enterprise Architecture. “We also wanted a partner that would offer us the agility to innovate at the organizational and process levels and have deep industry expertise with security at its core. We feel confident that IBM is the right long-term partner to not only provide the public cloud platform, but also enable our delivery transformation.”

This latest announcement demonstrates why cloud computing is the future of just about every industry.  The cost savings, operational improvements, data security and business agility delivered by cloud-based platforms are simply too compelling to ignore. According to Patrick Grubbs, IBM's vice president of travel and transportation, American Airlines will also be able to reduce cost by leveraging an inherent cloud computing ability of matching compute resources to the variable requirements that come from seasonal peaks.

This move by American Airlines is sure to spur others towards a quicker adoption of cloud computing.  I look forward to the stampede.

( This content is being syndicated through multiple channels. The opinions expressed are solely those of the author and do not represent the views of GovCloud Network, GovCloud Network Partners or any other corporation or organization.)
 



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)



Wednesday, May 31, 2017

Crisis Response Using Cloud Computing



Cloud computing is more than servers and storage. In a crisis situation, it can actually be a lifesaver. BlackBerry, in fact, has just become the first cloud-based crisis communication service provider to receive a Federal Risk and Authorization Management Program (FedRAMP) authorization from the United States Government for its AtHoc Alert and AtHoc Connect services. If you’re not familiar with FedRAMP, it is a US government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. The BlackBerry certification was sponsored by the US Federal Aviation Administration.

While you may not need a US Government certified solution in an emergency, your organization may really want to consider the benefits of cloud computing for crisis response. From a communications point of view, companies can use cloud-based services to quickly and reliably send secure messages to all members of staff, individual employees or specific target groups of people. Smartphone location-mapping functions can also be easily installed and used. One advantage of using application-based software installed on an employee’s smartphone is that it can be switched off when an employee is in a safe zone, providing a balance between staff privacy and protection. Location data can be invaluable and can result in better coordination, a more effective response and faster deployment of resources to those employees deemed to be at risk.


Using the cloud for secure two-way messaging enables simultaneous access to multiple contact paths, which include SMS messaging, emails, VOIP calls, voice-to-text alerts and app notifications. Cloud-based platforms have an advantage over other forms of crisis communication tools because emergency notifications are not only sent out across all available channels and contact paths, but continue to be sent out until an acknowledgement is received from the recipient. Being able to send out notifications and receive responses, all within a few minutes, means businesses can rapidly gain visibility of an incident and react more efficiently to an unfolding situation. Wi-Fi-enabled devices can also be used to keep the communications lines open when more traditional routes are unusable.
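
The send-until-acknowledged behavior described above boils down to a simple loop: push the notification across every available contact path, then keep re-sending on an interval until the recipient confirms receipt or the attempts are exhausted and the incident is escalated. The channel names and the send and acknowledgement functions below are placeholders for illustration, not the AtHoc (or any vendor's) API.

```python
import time

CHANNELS = ["sms", "email", "voip_call", "app_push"]  # hypothetical contact paths

def send(channel, recipient, message):
    """Placeholder for a real delivery integration (SMS gateway, mail relay, etc.)."""
    print(f"[{channel}] -> {recipient}: {message}")

def notify_until_acknowledged(recipient, message, has_acknowledged,
                              retry_seconds=60, max_attempts=5):
    """Send across all channels, then re-send until the recipient acknowledges."""
    for attempt in range(1, max_attempts + 1):
        for channel in CHANNELS:
            send(channel, recipient, message)
        if has_acknowledged(recipient):
            return True           # visibility achieved within minutes
        time.sleep(retry_seconds)
    return False                  # no acknowledgement: escalate to a human responder

# Example: acknowledgements recorded in whatever store captures recipient responses
acks = {"alice@example.com": False}
notify_until_acknowledged("alice@example.com", "Evacuate building 4",
                          has_acknowledged=lambda r: acks.get(r, False),
                          retry_seconds=0, max_attempts=2)
```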


While you’re thinking about your corporation’s crisis response plans, don’t forget about the data. Accessing data through cloud-based services can prevent a rescue effort from turning into a recovery operation. Sources for this life-saving resource include:
  • Data exhaust - information that is passively collected along with the use of digital technology
  • Online activity - encompasses all types of social activity on the Internet such as email, social media and internet search activity
  • Sensing technologies – used mostly to gather information about social behavior and environmental conditions
  •  “Small Data” - data that is 'small' enough for human comprehension and is presented in a volume and format that makes it accessible, informative and actionable
  • Public-related data - census data, birth and death certificates, and other types of personal and socio-economic data
  • Crowd-sourced data - applications that actively involve a wide user base in order to solicit their knowledge about particular topics or events

Can the cloud be of assistance when you’re in a crisis? A cloud-enabled crisis/incident management service from IBM may be just what you need to protect your business. IBM Resiliency Communications as a Service is a high-availability, cloud-enabled crisis/incident management service that protects your business by engaging the right people at the right time when an event occurs, through automated mission-critical communications. The service also integrates weather alerts powered by The Weather Company into incident management processes to provide the most accurate early warning of developing weather events and enable a proactive response.



This post was brought to you by IBM Global Technology Services. For more content like this, visit ITBizAdvisor.



Cloud Musings
( Thank you. If you enjoyed this article, get free updates by email or RSS - © Copyright Kevin L. Jackson 2017)