Wednesday, October 22, 2014

A Fresh Approach to Extending Enterprise HPC to Public Clouds

By Sam Mitchell, Senior Solution Architect
This article first appeared in HPC Wire. Read the original here.

Public cloud is an easy choice for enterprises looking to extend high performance workloads, reduce infrastructure costs and increase flexibility. The cloud offers the chance to reduce the capital cost of owning and managing excess compute capacity and storage. Enterprises can avoid the hidden costs of unused compute capacity by “cloud bursting”, shifting some peak demand to cloud-based HPC grid extensions. But how do you connect to existing grid resources and attest to security compliance?

For the security and network management needs of HPC users considering the cloud, the best solution is to connect to the existing grid with overlay networks. An overlay network simply creates a private, sealed network on top of any existing network. Layering overlay networks on top of public cloud resources adds the flexibility, high availability and robust security that HPC grid operators need to cope with unforeseen capacity demands.
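To make the idea concrete, here is a minimal, hypothetical sketch in Python of the encapsulation trick behind VXLAN-style overlay networks. This is an illustration of the concept, not how VNS3 itself is implemented; the function names and the 24-bit network ID layout follow the VXLAN header format.

```python
import struct

# Hypothetical sketch of VXLAN-style overlay encapsulation: each layer-2
# frame is wrapped with a small header carrying a virtual network ID (VNI),
# then carried as ordinary traffic over the underlying network.

VXLAN_FLAGS = 0x08000000  # "valid VNI" flag bit in the 8-byte VXLAN header

def encapsulate(vni, inner_frame):
    """Prefix a frame with an 8-byte header holding the 24-bit network ID."""
    header = struct.pack("!II", VXLAN_FLAGS, vni << 8)
    return header + inner_frame

def decapsulate(packet):
    """Strip the header, returning (vni, original_frame)."""
    _flags, vni_field = struct.unpack("!II", packet[:8])
    return vni_field >> 8, packet[8:]

wrapped = encapsulate(42, b"inner ethernet frame")
vni, frame = decapsulate(wrapped)
# The underlay only ever sees the outer packet; the private network inside
# (addressing, isolation, membership) is entirely under the overlay's control.
```

Because the inner frame is opaque to the provider's network, the overlay operator can also encrypt it before encapsulation, which is where the security properties discussed below come from.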

Best practices for HPC in the Public Cloud incorporate trust and security from the beginning. 
With the following tried-and-true best practices for high performance computing (HPC) in the public cloud, enterprises - even in regulated industries such as healthcare and financial services - can manage a secure cloud-based HPC environment and still benefit from the cloud’s economies of scale. Once HPC nodes are set up and secured in the cloud, connecting existing HPC grids to the new deployments can be complex. By using a manageable and compliant cloud network topology, enterprises can ease the transition into cloud-based HPC.

The path to HPC in the public cloud starts by selecting trustworthy cloud providers and creating secure cloud deployments.

Historically, HPC environments have been expensive to own, manage and operate as entirely on-premise compute capacity. One reason is that organisations often require extra compute resources for irregular one-off jobs involving sensitive data such as intellectual property. Cloud infrastructure is an excellent way to expand quickly for unexpected one-off projects.

HPC grid extensions can ensure one-off projects do not break the bank and, with added encryption from an on-premise grid to a cloud-based grid extension, that the projects comply with regulatory requirements. Ultimately, HPC cloud best practices can help an enterprise save capital costs, avoid vendor lock-in, conserve IT resources and spare organisations from having to change HPC vendors.

HPC Best Practices in Action: US Mutual Fund
A large mutual fund based in Boston uses the elasticity of public cloud to compute financial metrics that had never been possible on their internal infrastructure. The large public cloud they selected had the required elements of capacity, on-demand flexibility, and pay-as-you-go pricing. But they also wanted added security and the agility to prevent vendor lock-in.

The public cloud on its own could not provide the security and control this financial institution needed to extend its existing HPC grids onto the same datacentre-based network. The mutual fund required VLAN isolation to ensure customer traffic was separate from all other data traveling to and within the cloud. They also wanted to ensure resilient file storage and data validity beyond the cloud provider’s offerings.

Rather than rebuild their HPC grid, the mutual fund wanted to connect to and scale up into the public cloud rapidly, and determined that the most efficient strategy was an overlay network. Their solution also included the full end-to-end, data-in-motion encryption required to meet financial industry data protection regulations. The overlay network allowed the new HPC workloads to behave like part of the existing HPC grid network and pass internal and external security tests.

With an overlay network, the mutual fund securely burst into public cloud IaaS as a natural extension of their grid. The HPC grid extension also ensured all data-in-motion was encrypted from the on-premise grid to the cloud-based grid extension. The mutual fund could then incorporate their cloud HPC results into on-demand reports for their clients.

Public cloud saved expensive physical servers from sitting idle. Best practices prevented vendor lock-in and saved IT teams from re-architecting systems or changing HPC vendors. Now, the mutual fund uses public cloud infrastructure as a secure, automated HPC grid extension, flexing processing power up in seconds and back down when no longer needed.

Friday, October 17, 2014

Weekly news roundup for Cloud and Networking: October 13 - 17

Cloud and Networking news for the week of October 13th
  • Google researchers found a flaw in SSL 3.0 (CVE-2014-3566) that allows the POODLE attack (Padding Oracle On Downgraded Legacy Encryption). VNS3 customers do not need patches or new builds, but check for our latest security updates and best practices here
  • Andreessen Horowitz blog: What All the Recent Tech Company Splits Say about the Future of Cloud Computing
  • InfoWorld: You can rise with public clouds or sink with private >> David Linthicum argues that the benefits of public cloud are starting to outshine private cloud costs. He writes, "Owning a private cloud still means owning hardware and software."
  • Bloomberg:  EMC Acquiring Cloud-Computing Startup Cloudscaling 
  • CloudWedge: Docker Gets Windows Server Support
Catch up with the CohesiveFT team:
  • Oct 28 - 29 Chris Swan and Patrick Kerpan attending the Open Networking User Group (ONUG) in New York
  • Nov 4 - 6 sponsoring Web Summit in Dublin
  • Nov 6 - 7 Chris Swan presenting "Docker - A Lot Changed in a Year" at GOTO Berlin in the DevOps & Continuous Delivery track. Friday 11:30 - 12:20, Location: Hall 2
  • Nov 11 - 14 sponsoring AWS re:Invent in Las Vegas at stand 1033

Wednesday, October 15, 2014

How to: using network intrusion detection on VNS3 with Docker

In past posts, CohesiveFT CTO Chris Swan wrote about why we put Docker into VNS3, and how to use it for SSL termination, content caching and load balancing. Before he hit the road to speak at IPExpo in London, QCon Shanghai, and ONUG in New York, he made a video how-to guide for Suricata network intrusion detection (NIDS) on our cloud networking device, VNS3.

Containerized network functions: now with Suricata 
Suricata is an open source, community-run network IDS, IPS and network security monitoring engine. It is developed by the Open Information Security Foundation (OISF), and you can find out more about it here.

Now you can combine NIDS technologies such as Suricata with the end-to-end encryption available from VNS3 to detect network intrusions and help prevent data security breaches.
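For flavour, Suricata rules use the familiar Snort-style syntax. A minimal, hypothetical rule that alerts on a burst of inbound SSH connection attempts might look like this (the `$HOME_NET`/`$EXTERNAL_NET` variables are defined in `suricata.yaml`; the `sid` here is arbitrary):

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"Hypothetical rule - inbound SSH connection burst"; flags:S; threshold: type threshold, track by_src, count 5, seconds 60; classtype:attempted-recon; sid:1000001; rev:1;)
```

Rules like this are loaded from the rule files referenced in the Suricata configuration, so they can be iterated on inside the container without touching VNS3 itself.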

Friday, October 10, 2014

Weekly news roundup for Cloud and Networking: October 6 - 10

Cloud and Networking news for the week of October 6th
  • Business Insider: The US Government Is Going To Store Top Secret Documents In The Cloud
  • Network World: Cloud giants do battle backed by distinctly different partner networks >> Amazon, Google and Microsoft all offer partner solutions, but how easy is it to start using partner software directly from the provider?
  • High Scalability: That's Not My Problem - I'm Renting Them >> "That's not my problem, I'm renting them" is the perfect mantra when you find yourself worried about things you don't need to worry about when using cloud.
  • Monktoberfest was a hit - Steven O'Grady wraps up the conference in a blog post 
  • Talkin' Cloud released their Top 100 list >> according to the site, "78% of the ranked companies leverage IaaS platforms/service providers, and the most commonly used provider is Amazon Web Services (EC2) (33%), followed by Microsoft Windows Azure (23%)"
CohesiveFT in the news:
  • CTO Chris Swan spoke at IPExpo in London on Wednesday. His talk, "What is Devops and why should infrastructure ops care?", was in the DevOps stream at the conference.
Catch up with the CohesiveFT team:
  • Oct 15 - 17 Chris Swan presenting "Docker, DevOps and Security" at QCon Shanghai
  • Oct 28 - 29 Chris Swan attending the Open Networking User Group (ONUG) in New York
  • Nov 4 - 6 sponsoring Web Summit in Dublin
  • Nov 6 - 7 Chris Swan presenting "Docker - A Lot Changed in a Year" at GOTO Berlin in the DevOps & Continuous Delivery track. Friday 11:30 - 12:20, Location: Hall 2
  • Nov 11 - 14 sponsoring AWS re:Invent in Las Vegas

Wednesday, October 8, 2014

Koop's Hierarchy of Cloud

Most are familiar with Maslow's hierarchy of needs: the flow of human needs from the physiological basics of food and shelter up through the psychological needs of esteem and self-actualization.

When we were at lunch yesterday, we talked about how there are similar thresholds or priorities for cloud-based applications.

When we talk about VNS3, we try to highlight connectivity, integration and security. We list the three functions deliberately in that order: if you can't connect your cloud application, it doesn't really matter how your application deployment integrates with the underlying cloud infrastructure or what level of security that deployment employs. As in Maslow's hierarchy, a child won't focus on school work if he's too hungry to pay attention. Likewise, cloud application owners looking for strong security need to work from the bottom of the cloud hierarchy before making complex deployment decisions around access control and policy enforcement.

So I give you Koop's Hierarchy of Cloud:

First and most basically, you need a cloud in order to deploy a cloud application. I'm not going to play the "Is it a Cloud" game show here. There are plenty of opinions out there about who has the best IaaS cloud offering (see the Gartner Magic Quadrant or similar). My basic requirements for IaaS cloud are on-demand, flexible infrastructure you can quickly order up on a credit card. Or re-read the oldie but goodie from James Governor in 2008, or Patrick Kerpan's post on why your cloud should be fast, fat and flat.

This should go along with the basics of cloud, but it's worth emphasizing: APIs, software, and machines that are more like cattle than pets. This is the layer that lets application owners stop worrying about the underlying setup and just focus on the application being deployed. Similar to how Zipcar changed car ownership, cloud is removing the worries and complexities of data center ownership and management.

Once you have the cloud basics in place, you can think about the connectivity profile for your application. This has profound implications later in the process, but for now it's simple: what does my app need access to, and who needs to access my app? This is where failover, high availability, cloud bursting, and disaster recovery use cases shine in public clouds.

We talk about integrating both with the existing systems that might be running outside the cloud and with the service features and functions provided by the underlying cloud provider. With more and more resources connected, application owners can hook in more features as well. This is extremely important: it saves IT teams the time and energy of re-architecting systems to "fit" the cloud. Instead of rebuilding, integration should ease cloud migrations and promote multi-region, multi-cloud interoperability.

Once your cloud application is connected and integrated into the ideal environment, you can start applying the required security policies. Obviously, it all happens together in real time, but the priority should be on first building a connected and integrated deployment. Once that is done, the security implications and requirements of your topology will be obvious.

Security measures should always include encryption, and should be a combined effort between the cloud provider, vendors and the application owner. But on top of provider-owned and provider-controlled security, applications should have their own access controls, key management, application firewalls, and IPsec tunnels to be the most secure.

Just running in a VLAN is NOT secure. VLAN traffic still moves in plain text around an untrusted network (your cloud provider's) despite the layer 2 isolation. No computer should talk to any other server through a "naked" switch (a switch provided by your cloud provider). There must be an intervening security and mediation device in the path for every network interaction... that is, if you care at all about security in the cloud.

VNS3 is a software-only networking and security appliance that lets application owners use Network Function Virtualization (NFV) to build application-centric networks in the cloud. In the same way cloud users customize their instances to perform particular business functions, VNS3 users build custom networks as the foundational framework for their use case. VNS3 gets your cloud application secured.

Over 1,000 connected customers in more than 20 countries have been using enhanced network services on top of the clouds since 2008. Find out how VNS3 can connect, integrate and secure your cloud application. 
