Thursday, September 25, 2014

Jury Duty and Data Center Over Provisioning

This past Friday I had the distinct pleasure of waiting in the Chicago courthouse all day as a “standby juror.” I reported to the waiting area in the morning, was assigned a group number, and waited to see if I would be called to be interviewed and possibly selected for a jury. The process is slow, and the cases could be anything from a murder trial to injury claims. Each group might or might not get called to actually serve a purpose for our city, but in the meantime we all wait together in disgruntled silence.

Just waiting around... Image credit: Wikimedia
While spending 6 hours without wifi, I got to thinking about the jury selection process and traditional computing.

Jury selection is a lot like traditional data center-based computing. 
Wait for it…

The court only needs 12 people at a time for a jury. But there might be several trials going on that day. And the lawyers need the flexibility to select or not select jurors for each specific case. So to compensate for the unknown number of jurors needed in a day, the courts call in more than enough people to sit around and wait, just in case. People are sleeping, reading newspapers, and staring blankly out the window. Not exactly a good use of anyone’s time.

Traditional data center servers are just like me. Instead of being standby jurors, they are over-provisioned servers in a data center. Data center administrators and IT teams cannot possibly estimate their compute needs over the lifespan of a dedicated data center (upwards of 15 years, even in a well-managed data center with rapid refresh cycles). Instead of risking running out of server capacity, IT teams over-provision and keep servers on hand for added capacity and unique use cases.

What’s the solution? 
In no way could I claim to know the solution to the US justice system, much less propose something in a short blog post. But I can propose a solution to data center over-provisioning: cloud computing.

Cloud is the easy answer for use cases ranging from capacity expansion to new POCs and deployments outside the data center. Rather than build a physical facility with the proper HVAC and lighting, buy the very expensive servers and racks, and install all the requisite security and staffing measures, you can let public cloud take care of that for you. For a fee, of course.

Cloud pricing still claims to be “pay-per-use functionality, when really they’re offering pay-per-allocation,” as Chad Lawler points out in our ebook. But there are great improvements in a cloud-provisioned world compared to data centers. Autoscaling is a cloud-only feature that uses actual load data to scale capacity up or down as needed, for better resource utilization (read: not wasting money on over-provisioning, or losing customers by predicting incorrectly).

Cloud compute capacity is still slightly over-provisioned, but providers have made improvements by using real-time data to predict usage. Plus, cloud-based resources are OpEx rather than CapEx, and you get instant gratification for one-off high-availability projects. Cloud computing lets IT organizations spend more time building current projects rather than worrying about whether and when they should plan for future unknowns.

So while I sat there waiting for the possibility of serving on a jury, I at least wrote this blog post. I’ve been productive - can you say the same for the over-provisioned servers in a private data center? 


Good news. I didn't have to resort to this.
Image credit: Paigekellerman

VNS3 and the Shellshock bug: New images and live patching ready to roll.


Bash Command Interpreter Vulnerability (CVE-2014-6271): VNS3 supported versions 2.7, 3.0, 3.01, 3.03, 3.04, and 3.5 are potentially affected by the Bash command interpreter bug being labeled the “Shellshock” bug. See the NIST notification, or read more about Shellshock via The Register.

We've written a full security alert on our security news page: cohesiveft.com/support/security-responses
image credit: Wikimedia Commons

Is VNS3 vulnerable?
Attacks on VNS3 instances are theoretically possible, but standard VNS3 deployment practices make such attacks unlikely. Regardless of this limited attack vector, we are still making new VNS3 images available for all supported cloud environments. For the full security report and available patches, visit cohesiveft.com/support/security-responses

Live Patches: 
We are available to live patch any running Manager. Send a message to our support team to receive instructions on providing access via our multi-factor/multi-party remote support authentication system. The live patch takes less than a minute to apply but requires our involvement.

Monday, September 22, 2014

Secure container provisioning and the VNS3 API - part 3

In the first part of this series we created a simple Docker container and then a separate customisation Dockerfile. In the second part we gave a brief walkthrough of how to write a simple Java utility to interface with the VNS3 API. In this part we’ll wrap things up with a bit more code and a lot of exercises for the reader.
Image credit: Docker


At this point the code makes the connection and can handle GET requests but it doesn’t allow POSTs, so we need to extend our code a little. In particular, in the Vns3ApiConnection class we’ll add a post() function:

String post(String api_path, String query) throws Exception {
    String response = "";
    try {
        HttpsURLConnection httpsConnection = setConnection(api_path, "POST");
        httpsConnection.setDoOutput(true);
        // Content-Length must be the byte length of the payload, not the character count
        httpsConnection.setRequestProperty("Content-Length", String.valueOf(query.getBytes().length));

        OutputStream outputStream = httpsConnection.getOutputStream();
        outputStream.write(query.getBytes());
        outputStream.flush();

        if (httpsConnection.getResponseCode() != 200) {
            throw new RuntimeException("HTTP POST Request Failed with Error code : "
                    + httpsConnection.getResponseCode());
        }

        BufferedReader responseBuffer = new BufferedReader(new InputStreamReader(
                (httpsConnection.getInputStream())));

        String output;
        while ((output = responseBuffer.readLine()) != null) {
            response += output;
        }

        httpsConnection.disconnect();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return response;
}

This is a straightforward function: it makes the connection, sets the payload (JSON, in the case of the API calls we’ll be using) and returns a string with the result from the call.
So far, so good. We also need to add some image-specific functionality to our main class. We’ll create two functions:
private static void get_images(Vns3ApiConnection api) throws Exception {
    String imagesURL = "/api/container_system/images";
    String response = api.get(imagesURL);

    System.out.println("Output from Server:  \n" + response);
}

private static void create_image(Vns3ApiConnection api, String query) throws Exception {
    String imageURL = "/api/container_system/images";
    String response = api.post(imageURL, query);

    System.out.println("Output from Server:  \n" + response);
}
Finally, we modify the main() function to look like:
public static void main(String[] args) throws Exception {
    try {
        Vns3ApiConnection api = new Vns3ApiConnection(host, user, password);
        String secure_host = "http://172.16.174.160";

        String base_image_url = secure_host + "/base_images/base_ssh_image.tar.gz";
        String customise_image_url = secure_host + "/personalisation/customised.tar.gz";

        get_config(api);
        get_status(api);

        // Do this properly via a JSON object (a sketch follows this listing)
        String query = "{\"name\":\"ssh-server\",\"url\":\"" +
                       base_image_url + "\"}";
        create_image(api, query);

        // At this point we would query the images using get_images()
        // until we know that our latest image either succeeded or failed.
        // We'll fake it by sleeping for 20 seconds (a polling sketch appears below)
        Thread.sleep(20000);
        get_images(api);

        // Then we would create a new image from a Dockerfile referencing the first image
        // Note that "buildurl" is used here because we're using a Dockerfile.
        query = "{\"name\":\"customised\",\"buildurl\":\"" + customise_image_url + "\"}";
        create_image(api, query);

        Thread.sleep(20000);
        get_images(api);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
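The “Do this properly via a JSON object” comment above deserves a quick note. Below is a minimal sketch of building the payload without string concatenation; it is not part of the original walkthrough, it uses only the JDK (imports java.util.Map and java.util.LinkedHashMap), and the escaping is deliberately basic. In a real application a JSON library such as Jackson or Gson would be the better choice.

// Minimal JSON builder for flat string-to-string payloads (sketch only).
private static String toJson(Map<String, String> fields) {
    StringBuilder sb = new StringBuilder("{");
    boolean first = true;
    for (Map.Entry<String, String> entry : fields.entrySet()) {
        if (!first) {
            sb.append(",");
        }
        sb.append("\"").append(escape(entry.getKey())).append("\":\"")
          .append(escape(entry.getValue())).append("\"");
        first = false;
    }
    return sb.append("}").toString();
}

// Escape backslashes and double quotes so values cannot break out of the JSON string.
private static String escape(String value) {
    return value.replace("\\", "\\\\").replace("\"", "\\\"");
}

With that in place, the query in main() becomes something like:

Map<String, String> fields = new LinkedHashMap<>();
fields.put("name", "ssh-server");
fields.put("url", base_image_url);
create_image(api, toJson(fields));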

At this point we have enough to create an image and then create another one from it - in other words, to import a base image and then customise it.
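One other fix-up worth making before real use: the Thread.sleep(20000) calls above only stand in for proper status polling. Here is a rough sketch of a polling loop. The strings it matches against the response are guesses about what /api/container_system/images returns - check the actual JSON in the API documentation and parse it properly rather than relying on contains().

// Hypothetical polling helper: keeps asking the container system for its image
// list until our image shows up as ready or failed, or until we give up.
// The "ready" / "failed" markers are assumptions about the response format.
private static boolean wait_for_image(Vns3ApiConnection api, String imageName) throws Exception {
    String imagesURL = "/api/container_system/images";
    for (int attempt = 0; attempt < 30; attempt++) {
        String response = api.get(imagesURL);
        if (response.contains(imageName) && response.contains("ready")) {
            return true;
        }
        if (response.contains(imageName) && response.contains("failed")) {
            return false;
        }
        Thread.sleep(5000);  // wait five seconds between polls
    }
    return false;            // give up after roughly two and a half minutes
}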

The files for this post can be downloaded here.

Once we have this running we can start a container. This container will have an IP address tied to the VNS3 manager and will only be accessible if the firewall rules have been set. One of the exercises for the reader is to adapt the code so that firewall rules can be added. (The URL is /api/firewall/rules and the usual REST verbs of GET, POST and DELETE apply.)

The rules needed will look similar to:
-o eth0 -s 0.0.0.0/0 -j MASQUERADE
PREROUTING_CUST -i eth0 -p tcp -s 0.0.0.0/0 --dport 9022 -j DNAT --to 172.0.10.2:22

Here the first line enables masquerading, 9022 is the externally facing port (port 22 is already in use by default) and 172.0.10.2 is the IP address of the container.
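As a starting point for that firewall exercise, here is a sketch of a helper in the same style as create_image(). The {"rule": ...} payload shape is an assumption on my part - confirm the exact field names expected by POST /api/firewall/rules against the VNS3 API documentation.

// Hypothetical firewall helper; the payload format is assumed, not confirmed.
private static void create_firewall_rule(Vns3ApiConnection api, String rule) throws Exception {
    String firewallURL = "/api/firewall/rules";
    String query = "{\"rule\":\"" + rule + "\"}";
    String response = api.post(firewallURL, query);

    System.out.println("Output from Server:  \n" + response);
}

For example, the DNAT rule above would be added with:

create_firewall_rule(api,
    "PREROUTING_CUST -i eth0 -p tcp -s 0.0.0.0/0 --dport 9022 -j DNAT --to 172.0.10.2:22");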

In this series of posts we’ve looked at creating a simple method of customising Docker containers within VNS3 and using the API rather than the GUI. I’ve deliberately left out a lot of steps and the code is sparse by design. Take the code, fix it up, flesh it out and put it to use in a real library or application. Let us know your results.

The VNS3 Manager API documentation can be found here: http://www.cohesiveft.com/dnld/CohesiveFT-VNS3-3.5-API.pdf

Friday, September 19, 2014

Weekly news roundup for Cloud and Networking: September 15 - 19

Cloud and Networking news for the week of Sept 15th
  • InfoWorld: 4 ways Docker fundamentally changes application development
  • From Tim Fernholz on Quartz: How two “holy grails” of cryptography can make the cloud safe everywhere
  • The Programmable Web: The Naked Truth about Internet Security >> recent attacks were just another successful hack in a long line of them. The news is proof of "(1) the lengths to which hackers with nefarious intent will go to achieve their objectives, (2) the fundamental problems with the way the Internet is secured, and (3) how APIs are increasing the Internet's vulnerable surface area and what API providers must do about it"
  • Talkin' Cloud: The 3 Most Dangerous Misconceptions About the Cloud
  • Venture Beat: Docker, the hottest cloud startup in the West, adds $40M
This is not what SSL keys look like, but you get the idea. Reuters/Akhtar Soomro via Quartz



Monday, September 15, 2014

To NAT or not to NAT: that is the question...or is there a better way?

By Senior Cloud Solutions Architect Sam Mitchell

In our journey through public cloud vendors, and in our mission to give public cloud users back attestable control and security of public cloud networks, we have worked with more or less all of them - from the very big to the more “boutique.” When it comes to publicly addressable hosts, we have noticed two main approaches, with variants on both depending on the cloud vendor.
Image credit: Yellowstone, US Department of the Interior

The first is to assign a public IP address directly to an interface of a VM. The second is to use Network Address Translation (NAT): traffic routed to the public IP is translated via 1:1 NAT and forwarded on to the private IP address of a VM.
Both approaches have their advantages and disadvantages.

Approach 1 - Assigning a public IP directly to an interface.
This has the advantage that your traffic does not travel through any NATing, which some protocols and applications can have trouble with - SNMP jumps to mind. If you have lots of overlapping private IPs and you’re using NAT to make them all unique, it is often hard to identify the actual source host when you get a trap or notification, because the private (non-NATed) IP address is listed as the source IP.

This approach has the disadvantage of limited elasticity: if you assign a public IP to a VM and at a later stage need to move that IP to another VM, you usually need to shut the VM down in order to move the IP address off its interface.

We’ve also seen some clouds that can only assign one interface per VM. If you assign a public IP address to that single interface, routing can become tricky when you want to pass inbound public traffic onwards to other VMs that sit in a private IP space or private VLAN / subnet inside the same cloud.

Approach 2 - Use NAT.
This has the major advantage of being more “cloud friendly” and elastic, with the ability to move the public IP from VM to VM without the need to shut down VMs.

It does have the disadvantage of putting some sort of NAT device “in between,” and sometimes not all ports and protocols are universally open, which can limit what you can actually do with your public IP. IPsec comes to mind: not all clouds allow native IPsec and/or NAT-T IPsec through their NATing device.

NAT also comes with the usual caveats: SNMP, for example, does not work so well when a layer of NAT sits in between (see the example in Approach 1).

So both approaches have pros and cons. But in general NAT seems to be the better approach, and if it is implemented well by the cloud vendor you shouldn’t face any major issues. The old adage “not all clouds are created equal” comes to mind...

Amazon Web Services (AWS) has done a pretty good job with its Elastic IPs (EIPs), which use Approach 2.

How does an Elastic IP address work?
Elastic IP addresses are public IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account, not a particular instance (VM), and you control that address until you choose to release it. Elastic IP addresses allow you to mask instance or availability zone failures by remapping your public IP addresses to any instance associated with your account. EIPs let you engineer around problems with an instance or its software by remapping your Elastic IP address to a replacement instance.
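To make the remapping concrete, here is roughly what moving an EIP to a replacement instance looks like with the AWS SDK for Java. This is an illustration rather than anything from the original post, and the allocation and instance IDs are placeholders.

import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.AssociateAddressRequest;

public class RemapElasticIp {
    public static void main(String[] args) {
        // Uses the default credential chain (environment variables, profile, or instance role).
        AmazonEC2Client ec2 = new AmazonEC2Client();

        // Placeholder IDs: the EIP allocation ID and the replacement instance ID.
        AssociateAddressRequest request = new AssociateAddressRequest()
                .withAllocationId("eipalloc-12345678")
                .withInstanceId("i-1a2b3c4d")
                .withAllowReassociation(true);  // allow moving an EIP that is already associated

        // The remap takes effect within seconds; any previous association is replaced.
        ec2.associateAddress(request);
    }
}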


Did you know that EIPs can be used for VPN IPsec endpoint IPs?
Yup - this has the major advantage of the EIP address becoming the IPsec VPN endpoint IP. EIPs are just IPs, and it’s very rare, if ever, that they are taken down for maintenance.

For example, if you use AWS’s native VPN hardware gateway for VPN connectivity to your instances, you have no choice but to use a shared endpoint IP on a shared device. The device is inside Amazon’s control, meaning AWS decides:

  • your pre-shared keys,
  • when to take the service down for maintenance, and
  • the level and type of encryption you use.
Ultimately you can't attest to the security of your data in AWS. 

Although the VPN connection is configured and consumed by you, the customer, it’s still partly controlled by AWS (a third-party network). When you use an EIP as the endpoint IP address for your IPsec connection, it’s not shared with any other AWS accounts or customers: the EIP endpoint belongs to your AWS account and is inside your control. VNS3 allows you to use both native IPsec and NAT-T IPsec.

Enter VNS3

We developed VNS3 back in the days before the AWS VPN gateway existed. We noticed that AWS users needed a way to securely talk to their applications in the cloud.

VNS3 is a virtual machine appliance which you can start as an AWS instance from an AMI. Once it’s started, you assign an EIP to the VNS3 instance, which allows you (the user) to create IPsec tunnels from that instance to your data centre / firewall / IPsec concentrator without the need to use the AWS VPN gateway.

Once the IPsec tunnel is established, you can easily integrate the VPN’s remote network with the rest of your AWS private instances using the VNS3 secure overlay network clients, which switch traffic from your private instances to the VNS3 appliance over secure encrypted tunnels and route it on to the VPN’s remote network.

For less security-intensive users out there, you can also configure an AWS VPC to route traffic to the VPN’s remote network via the VNS3 appliance without using the VNS3 secure overlay network on your private IP instances. But using the VNS3 secure overlay network in conjunction with the VNS3 IPsec VPN gateway means that all your data in motion into the cloud, inside the cloud, and back out again is encrypted. Plus, you can attest to your data’s security from end to end, including inside the AWS network.
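For that VPC-routing approach, the setup boils down to two EC2 API calls (or the equivalent console clicks): turn off the source/destination check on the VNS3 instance so it may forward traffic, and add a route for the remote network that points at it. A rough sketch with the AWS SDK for Java follows, with placeholder IDs and CIDR:

import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.CreateRouteRequest;
import com.amazonaws.services.ec2.model.ModifyInstanceAttributeRequest;

public class RouteViaVns3 {
    public static void main(String[] args) {
        AmazonEC2Client ec2 = new AmazonEC2Client();

        // The VNS3 appliance forwards traffic it did not originate, so EC2's
        // source/destination check must be disabled on that instance.
        ec2.modifyInstanceAttribute(new ModifyInstanceAttributeRequest()
                .withInstanceId("i-1a2b3c4d")              // placeholder VNS3 instance ID
                .withSourceDestCheck(false));

        // Send traffic destined for the remote (on-premise) network to the VNS3 instance.
        ec2.createRoute(new CreateRouteRequest()
                .withRouteTableId("rtb-12345678")          // placeholder VPC route table ID
                .withDestinationCidrBlock("10.10.0.0/16")  // placeholder remote network CIDR
                .withInstanceId("i-1a2b3c4d"));
    }
}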

VNS3 also supports a much higher level of encryption, such as 256-bit AES, compared to AWS, which provides only 128-bit AES encryption. 256-bit AES is required for most PCI- and HIPAA-compliant solutions; many people don’t know that the AWS VPN gateway is not PCI or HIPAA compliant due to its maximum of 128-bit encryption.
More on what VNS3 is and how it works with AWS VPC and EC2.




FIN.
--

As Senior Cloud Solutions Architect, Sam Mitchell leads all technical elements of the global sales cycle. Mitchell runs demos, technical qualification, technical account management, proof of concepts, technical and competitive positioning, RFI/RFP responses and proposals.
Before CohesiveFT, Mitchell was a Cloud Solution Architect at Platform Computing, which was recently acquired by IBM. He was also a Lead Architect at SITA, where he headed up OSS BSS Architecture, Design and Deployment activities on SITA's cloud offerings.
Mr. Mitchell recently presented at CloudCamp London “It's All About the App!” in January 2014, the IASA UK Application Defined Network (ADN) Summit in October 2013, and AWS UG UK #7: Let's Talk Security in October 2013. He is a regular blog contributor and hosts the CohesiveFT webinars (available here: http://www.cohesiveft.com/about/news/webinars/). He has contributed to several technical white papers at CohesiveFT, Platform, and SITA.
