
container, devops, docker, Java, openshift, PaaS, Red Hat OpenJDK 8, S2I

Deploy a Java Spring Boot artifact to OpenShift Container Platform

What is OpenShift?

OpenShift is Red Hat’s platform-as-a-service (PaaS) product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux.

Why PaaS?

PaaS automates the hosting, configuration, deployment, and administration of application stacks in an elastic cloud environment. It gives app developers self-service access so they can easily deploy applications on demand.

Getting Started

Sign up for OpenShift and install the CLI.

The example below shows how to deploy a Spring Boot Java artifact to OpenShift via S2I (Source-to-Image).

Step 1: If a VPN (Virtual Private Network) is required to reach OpenShift, log into the VPN; otherwise, skip this step.

Step 2: Sign in to the OpenShift console URL.

Step 3: Connect to the OpenShift CLI.

Note: If a proxy is involved, NO_PROXY must be set in order to connect from the OpenShift CLI. Windows users can set it in .bashrc and macOS users in ~/.profile.

Example:

export NO_PROXY=openshift-starter-us-west-1.openshift.com

The OpenShift CLI config file resides at ~/.kube/config (e.g., /Users/<user>/.kube/config on macOS).

Step 4: Copy the login command from the OpenShift console and paste it into the CLI.

[Screenshot]

Example:

oc login https://api.starter.us-west-1.openshift.com --token=bkG0ec5vp1234556778ffss

Step 5: Create a new project using the command below:

oc new-project curtis-tech

Step 6: From the OpenShift console, browse the catalog and select “Red Hat OpenJDK 8” to build and deploy Java from a Git repository. This follows the S2I (Source-to-Image) approach.

[Screenshot]

[Screenshot]

When you click the Create button, a build is triggered under Builds -> Builds.

[Screenshot]

When the build completes successfully, check Applications -> Pods (open the Logs tab to confirm that the Spring Boot application has started).
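You can also follow the build and pod status from the CLI. This is a minimal sketch; the build config and pod names depend on what you entered in the catalog form, so treat catalog below as a hypothetical example:

oc get builds

oc logs -f bc/catalog

oc get pods

oc logs <pod-name>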

Step 7: Click Applications -> Routes and open the URL. You will see the page below.

[Screenshot]
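The route URL can also be retrieved from the CLI (the route name is whatever the catalog generated for your application):

oc get routes -n curtis-tech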

Now you have successfully deployed your application on OpenShift!

AWS, CloudFormation, devops, Elastic BeanStalk, H2 in-memory DB, Java, json, mac, S3, Spring Boot

Deploy a Java Spring Boot Application in AWS Elastic Beanstalk using AWS CloudFormation Scripts

In this blog we are going to explore how to deploy a Java Spring Boot application to AWS Elastic Beanstalk using AWS CloudFormation scripts.

To start, we need an S3 bucket containing the JAR file.

Step 1: Create an S3 bucket called “catalog-springboot” using the AWS S3 console or the CloudFormation script from my previous blog. (Note that S3 bucket names cannot contain underscores, so the name uses hyphens.)

Step 2: Download the catalog-springboot project from GitHub. catalog-spring-boot-0.0.1-SNAPSHOT.jar is found under /src/main/resources/jar.
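If you prefer to build the artifact yourself, and assuming the project follows a standard Maven layout, the same JAR can be produced locally; this is optional for the walkthrough:

mvn clean package

The JAR is then written to the target/ directory.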

Step 3: Upload catalog-spring-boot-0.0.1-SNAPSHOT.jar to the “catalog-springboot” S3 bucket. [This can be done via the AWS console or the AWS CLI, as below.]

aws s3 cp /Users/home/catalog-spring-boot-0.0.1-SNAPSHOT.jar s3://catalog-springboot/catalog-spring-boot-0.0.1-SNAPSHOT.jar
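To confirm the upload (optional), list the bucket contents:

aws s3 ls s3://catalog-springboot/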

Step 4: Create beanstalk-catalog-springboot-application.json. It defines the Beanstalk application and environment, sets the SolutionStackName to “64bit Amazon Linux 2017.09 v2.6.8 running Java 8”, and specifies autoscaling and load-balancing settings.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "S3BucketName": {
      "Description": "S3 BucketName",
      "Type": "String"
    },
    "S3FileName": {
      "Description": "Name of the jar/war file",
      "Type": "String"
    }
  },
  "Resources": {
    "sampleApplication": {
      "Type": "AWS::ElasticBeanstalk::Application",
      "Properties": {
        "Description": "AWS Elastic Beanstalk Sample Java SpringBoot Application"
      }
    },
    "sampleApplicationVersion": {
      "Type": "AWS::ElasticBeanstalk::ApplicationVersion",
      "Properties": {
        "ApplicationName": { "Ref": "sampleApplication" },
        "Description": "AWS ElasticBeanstalk Sample Java SpringBoot Application Version",
        "SourceBundle": {
          "S3Bucket": { "Ref" : "S3BucketName"},
          "S3Key": {"Ref" : "S3FileName"}
        }
      }
    },
    "sampleConfigurationTemplate": {
      "Type": "AWS::ElasticBeanstalk::ConfigurationTemplate",
      "Properties": {
        "ApplicationName": { "Ref": "sampleApplication" },
        "Description": "AWS ElasticBeanstalk Sample Java SpringBoot Configuration Template",
        "OptionSettings": [
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MinSize",
            "Value": "2"
          },
          {
            "Namespace": "aws:autoscaling:asg",
            "OptionName": "MaxSize",
            "Value": "6"
          },
          {
            "Namespace": "aws:elasticbeanstalk:environment",
            "OptionName": "EnvironmentType",
            "Value": "LoadBalanced"
          }
        ],
        "SolutionStackName": "64bit Amazon Linux 2017.09 v2.6.8 running Java 8"
      }
    },
    "sampleEnvironment": {
      "Type": "AWS::ElasticBeanstalk::Environment",
      "Properties": {
        "ApplicationName": { "Ref": "sampleApplication" },
        "Description": "AWS ElasticBeanstalk Sample Java SpringBoot Environment",
        "TemplateName": { "Ref": "sampleConfigurationTemplate" },
        "VersionLabel": { "Ref": "sampleApplicationVersion" }
      }
    }
  },
  "Outputs": {
    "DevURL": {
      "Description": "The URL of the DEV Elastic Beanstalk environment",
      "Value": {
        "Fn::Join": [
          "",
          [
            {
              "Fn::GetAtt": [
                "sampleEnvironment",
                "EndpointURL"
              ]
            }
          ]
        ]
      },
      "Export": {
        "Name": {
          "Fn::Sub": "${AWS::StackName}-EndpointURL"
        }
      }
    }
  }
}
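Optionally, the template can be syntax-checked before creating the stack:

aws cloudformation validate-template --template-body file://beanstalk-catalog-springboot-application.json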

Step 5: Create beanstalk-catalog-parameters.json, which specifies the S3 bucket and the JAR file name.

[
  {
    "ParameterKey": "S3BucketName",
    "ParameterValue": "catalog-springboot"
  },
  {
    "ParameterKey": "S3FileName",
    "ParameterValue": "catalog-spring-boot-0.0.1-SNAPSHOT.jar"
  }
]

 

Step 6: Create beanstalk_creation_tags.json. It is a best practice to tag AWS resources for billing purposes.

[
  {
    "Key": "owner",
    "Value": "xxxxx"
  },
  {
    "Key": "contact-email",
    "Value": "xxx.yyy@zzz.com"
  }
]

 

Step 7: Now run the following command from the AWS CLI:

aws cloudformation create-stack --stack-name catalog-beanstalk --template-body file://beanstalk-catalog-springboot-application.json --parameters file://beanstalk-catalog-parameters.json --tags file://beanstalk_creation_tags.json

[Screenshot]
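If you prefer to wait from the CLI instead of the console, these standard AWS CLI commands block until stack creation finishes and then print the status:

aws cloudformation wait stack-create-complete --stack-name catalog-beanstalk

aws cloudformation describe-stacks --stack-name catalog-beanstalk --query "Stacks[0].StackStatus" --output text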

The AWS CloudFormation console shows the stack as follows:

[Screenshot]

We can see the Elastic Beanstalk application and environment being created:

[Screenshot]

[Screenshot]

Step 8: When the stack is complete, the Elastic Beanstalk console shows the URL to access the application.

[Screenshot]
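The endpoint URL exported in the template's Outputs section can also be fetched from the CLI:

aws cloudformation describe-stacks --stack-name catalog-beanstalk --query "Stacks[0].Outputs" --output table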

Step 9: Hit the health-check URL [ URL/api/catalog/health ] from any browser.

[Screenshot]

Step 10: Hit the catalog URL [ URL/api/catalog ] to get the results stored in the H2 in-memory database.

[Screenshot]
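Both endpoints can also be checked from a terminal with curl; <environment-url> is a placeholder for the URL shown in Step 8:

curl http://<environment-url>/api/catalog/health

curl http://<environment-url>/api/catalog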

Step 11: To terminate the Beanstalk stack, run the command below from the AWS CLI:

aws cloudformation delete-stack --stack-name catalog-beanstalk
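To block until the teardown finishes (optional):

aws cloudformation wait stack-delete-complete --stack-name catalog-beanstalk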

 

AWS, CloudFormation, devops, json, mac, S3

AWS CloudFormation Script for S3 Bucket Creation in JSON Format

Step 1: Set up the AWS CLI

https://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html
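After installation, give the CLI credentials; aws configure prompts for the access key, secret key, default region, and output format:

aws configure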

Read CloudFormation Template Basics

Step 2: Template Creation

Create s3_bucket_creation.json as follows:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Template to create a S3 bucket",
  "Parameters": {
    "S3BucketName": {
      "Description": "S3 BucketName",
      "Type": "String"
    },
    "S3BucketUsers": {
      "Description": "Comma separated names of S3 Bucket Users",
      "Type": "CommaDelimitedList"
    }
  },
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "Private",
        "BucketName": {
          "Ref": "S3BucketName"
        },
        "VersioningConfiguration": {
          "Status" : "Enabled"
        }
      }
    },
    "S3BucketPolicy": {
      "Type": "AWS::S3::BucketPolicy",
      "Properties": {
        "Bucket" : { "Ref": "S3Bucket" },
        "PolicyDocument" : {
          "Version":"2012-10-17",
          "Statement":[
            {
              "Sid":"BucketPolicy",
              "Effect":"Allow",
              "Principal": {
                "AWS" : {"Ref": "S3BucketUsers"}
              },
              "Action":"*",
              "Resource": { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "S3Bucket" } , "/*" ]]}
            }
          ]
        }
      }
    }
  },
  "Outputs": {
    "S3BucketNameUsed":{
      "Description": "S3 bucket name",
      "Value" : { "Ref" : "S3BucketName"}
    },
    "S3BucketArn" :{
      "Description" : "S3 Bucket Arn",
      "Value" : {
        "Fn::GetAtt": [
          "S3Bucket",
          "Arn"
        ]
      }
    }
  }
}

Create s3_bucket_creation_parameters.json file as follows:

[
  {
    "ParameterKey": "S3BucketName",
    "ParameterValue": "s3-my-bucket"
  },
  {
    "ParameterKey": "S3BucketUsers",
    "ParameterValue": "arn:aws:iam::xxxxxxxxxxxx:user/xxx,arn:aws:iam::xxxxxxxxxxxx:user/yyy"
  }
]

Create s3_bucket_creation_tags.json file as follows:

[
  {
    "Key": "owner",
    "Value": "xxxxx"
  },
  {
    "Key": "contact-email",
    "Value": "xxx.yyy@zzz.com"
  }
]

Note: You need to know the AWS account ID and the IAM user ARNs in order to restrict access to the S3 bucket.

Step 3: Run the CloudFormation template from the AWS CLI:

aws cloudformation create-stack --stack-name s3-bucket-creation --template-body file://s3_bucket_creation.json --parameters file://s3_bucket_creation_parameters.json --tags file://s3_bucket_creation_tags.json --capabilities CAPABILITY_IAM

(The stack name uses hyphens because CloudFormation stack names may only contain letters, numbers, and hyphens.)

Step 4: Log in to the AWS Console. On the CloudFormation screen, you can see the status of the stack.

[Screenshot]

Step 5: If the stack creation is successful, the S3 bucket can be seen in the AWS S3 console.

[Screenshot]
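The stack outputs (bucket name and ARN) declared in the Outputs section can also be read back from the CLI, assuming the stack name used above:

aws cloudformation describe-stacks --stack-name s3-bucket-creation --query "Stacks[0].Outputs"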

 

CaaS, container, devops, docker, mac, redis

Docker

Docker is a container platform for packaging and running applications across the hybrid cloud. It provides independence between applications and infrastructure, giving developers and IT ops a model for better collaboration and innovation.

Get Started

Once Docker is installed, we'll start by creating a Dockerfile, then build a Docker image from it and run it as a containerized service. Finally, we'll use Docker Hub to share the image.

Step 1: Install Docker CE

Step 2: Open ‘Docker QuickStart Terminal’ (on macOS, search for ‘Docker QuickStart Terminal’ in Spotlight).

Scenario: Say, for example, we are creating a new application and need a Redis service. In a distributed environment, we want everyone to use the same set of configured services to prevent surprises during deployment. A simple solution is to create a Redis Docker image and distribute it across the teams.

Step 3: Create a Docker File

Create a new folder called “docker_image”. After creating the folder, create the empty Dockerfile:

$ mkdir docker_image

$ cd docker_image

$ sudo touch Dockerfile

Now that the Dockerfile is created, open it in a text editor and set the base image and maintainer information. Next, update the package repository list and install the Redis server. Finally, expose Redis's default port using the EXPOSE instruction and set the entrypoint for the image.

You can now save the file. Here’s how it should look:

# Set the base image
FROM ubuntu

# Dockerfile author / maintainer
MAINTAINER Name <email.id@here>

# Update application repository list and install the Redis server.
RUN apt-get update && apt-get install -y redis-server

# Expose default port
EXPOSE 6379

# Set the default command
ENTRYPOINT ["/usr/bin/redis-server"]

Step 4: Build the Docker Image

Now that the Dockerfile is ready, let's build the image. Run this command:

$ docker build -t redis-server .

The path is a mandatory argument for the build command; we use . because we're building from the current directory. The -t flag tags the image.

Step 5: Run a Redis-Server Instance

With the image we just created, we can now create a container running a Redis server instance inside. Run this command:

$ docker run --name redis_instance -t redis-server

[Screenshot]

This creates a container named redis_instance. It is good practice to name your containers; otherwise you will have to work with auto-generated alphanumeric container IDs.
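If you want to reach Redis from the host, a detached container with a published port is a useful variation on the command above; redis_detached is just an example name, and redis-cli is installed alongside redis-server on Ubuntu:

$ docker run --name redis_detached -d -p 6379:6379 redis-server

$ docker exec -it redis_detached redis-cli ping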

Useful Docker Commands:

Command to list Docker images

$ docker image ls

Command to list running Docker containers

$ docker ps

Command to kill the container

$ docker kill redis_instance

Command to save a Docker image to a tar archive

$ docker save redis-server > redis-server-image.tar
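The saved archive can be restored on another machine with docker load:

$ docker load < redis-server-image.tar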

Step 6: Upload to Docker Hub

Now that we’ve successfully built an image and created a container, let’s share the images using Docker Hub.

Create login credentials on Docker Hub, then log in, tag the image with your repository name, push it, and pull it from any machine:

$ docker login

$ docker tag b538350ff2a3 curtistechnologies/redis-server:version1.0

$ docker image push curtistechnologies/redis-server:version1.0

$ docker image pull curtistechnologies/redis-server:version1.0
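Anyone who pulls the image can then start the same Redis service; the container name here is arbitrary:

$ docker run --name redis_from_hub -d curtistechnologies/redis-server:version1.0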