
WebPerf Docker Tutorial – Getting Started with Containers

Discussion in 'All Internet & Web Performance News' started by eva2000, Jul 20, 2017.

    Working on web projects with a team often presents technical challenges, especially when team members use their own personal computers that run different hardware and operating systems. If you’re constantly testing an app on different devices, it’s difficult to ensure that everyone is having a consistent experience. Virtual machines can help us get around these issues, but there is a lightweight solution that most developers prefer: Docker containers.


    What Is Docker?


    Docker lets you pack, ship and run applications in lightweight containers that are totally hardware independent. The open source software container platform makes it much easier for developers who use different machines to collaborate on code. Docker also allows operators to simultaneously run and manage apps side-by-side in multiple isolated containers, and it can be used to set up agile software delivery pipelines for faster and more secure shipping. This Docker tutorial will cover everything you need to know to start creating and using containers.

    What Is a Container?


    Containers bundle everything an application needs (e.g., libraries and dependencies) into a single package. They are similar to virtual machines; however, unlike VMs, containers do not fully emulate an operating system, and a container is also much easier to set up than a VM. As their name implies, containers are self-contained systems that include the libraries and other basic elements required to make software function optimally on any machine running any OS. Containers are lightweight and extremely useful for large-scale collaboration since they ensure that an application will perform the same for everyone.

    You need a platform like Docker to fully take advantage of containers. Containers have been around for a long time, but Docker made them mainstream by providing standard APIs and creating a community for developers to share container libraries. Today, tech companies of all sizes use containers in some capacity. In 2014, The Register reported that Google was running over two billion containers in any given week. Containers have allowed the tech giant to streamline its development process and provide us with a better web at a faster pace.

    Thanks to containers, teams in different departments can cooperate seamlessly without worrying about operating systems or other technical limitations. Docker containers can be deployed on any device, any VM or even on the cloud. Because they are so lightweight, containers allow for easy scalability.

    You can even separate parts of your system into multiple containers. For example, you could designate individual containers for Nginx, MongoDB and Redis respectively. The aforementioned programs all have free Docker images that can be installed with a shell command.
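    For instance, assuming Docker is installed locally, each of those official images can be pulled from Docker Hub with a single command (nginx, mongo and redis are the official repository names):

    ```shell
    # Pull the official images for each part of the stack
    docker pull nginx
    docker pull mongo
    docker pull redis

    # Each service then runs in its own isolated container, for example:
    docker run -d --name my-redis redis
    ```

    Each container can then be started, stopped and replaced independently of the others.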


    Docker Terminology


    Before we go any further, here is a list of terms you should be familiar with when talking about Docker:

    • Images: Docker images are like blueprints for containers. They are made up of intermediate layers designed to increase reusability, decrease disk usage and speed up the building process by caching each step along the way.
    • Docker Hub: The hub is a massive registry of Docker images that anyone can use. You can also host your own Docker registries to pull images from.
    • Docker Daemon: The Docker Daemon is a background service that runs on the host responsible for building, running and distributing containers.
    • Docker Client: This is the command line tool used to interact with the Docker Daemon. You can also use tools like Kitematic to get a graphical UI that makes managing containers a little easier.
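    A quick way to see the client/daemon split in practice is the docker version command, which reports the version of the client binary and, separately, the version of the daemon it is connected to:

    ```shell
    # Prints a "Client:" section and a "Server:" section; if the
    # daemon is not running, only the client section appears.
    docker version
    ```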
    Installation


    While running applications that have been packaged into containers requires no setup, actually using the Docker platform takes some effort to configure. There are two editions of Docker: a free Community Edition and a premium Enterprise Edition.

    For the purposes of this tutorial, visit the Docker Community Edition webpage and follow the instructions for setting up Docker on your operating system of choice.

    Introduction to Images


    Every Docker container is based on an image. Think of a Docker image as a snapshot, or an instance, of a running application. Images in Docker are sort of like Git repositories in that they can exist in multiple versions with committed changes.

    You can obtain images from the Docker Hub, or you can create your own. For an in-depth explanation on how to make images from scratch, you can read the Docker docs on the subject; however, there are plenty of available images that you can build upon, and this tutorial will guide you through creating a container using an existing Docker image. The docker search command allows you to search for images from the command line, and you can use the docker images command to view a list of locally available images at any time.
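    For example, assuming the Docker client is installed (and, for the search, that Docker Hub is reachable):

    ```shell
    # Search Docker Hub for MongoDB-related images
    docker search mongo

    # List the images stored locally, with repository, tag, ID and size
    docker images
    ```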

    There are two main types of images:

    1. Base images: sometimes called parent images, these simply provide an OS user space
    2. Child images: these build upon base images to provide additional functionality

    There are also official images that are maintained by the Docker team and user images made by community members. When using Docker, you’ll typically use base images built by others to create your own child images for your containers.

    Dockerfiles


    Before you can use an image, you need to make a Dockerfile. A Dockerfile is a script with the instructions for creating a new child image from an existing base image. These files simplify the deployment process by keeping everything organized. The commands used in Docker should look familiar to any Linux user, so there’s not much of a learning curve involved in creating your own Dockerfiles.

    How to Create a Container Using Docker


    Speaking of which, we’ll now explore how you can use Docker to package an application into a container. For the sake of this example, our goal will be to create a container that runs MongoDB in the latest version of Ubuntu. Digital Ocean has a full guide to Docker’s syntax and commands for reference, but you should be able to intuitively follow the directions in this guide.

    Step 1 – Create a Dockerfile


    You can start creating your Dockerfile in the text editor of your choice, so open a new file and begin by specifying a base image. To do this, use the FROM keyword followed by our chosen base image:

    FROM ubuntu
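    Pulling without a tag gives you whatever latest currently points to, which can change between builds. If repeatability matters, it is often safer to pin an explicit tag; a minimal sketch, assuming the ubuntu:16.04 tag is available on Docker Hub:

    ```dockerfile
    # Pin an explicit Ubuntu release instead of the implicit "latest" tag
    FROM ubuntu:16.04
    ```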

    If you try to pull an image without providing a version number, the client defaults to the latest tag. Next, you must declare the maintainer, or author, of the Dockerfile:

    MAINTAINER Darth Vader

    After that, it’s a good idea to update the application repository list. This step is not always necessary, but you should make it a habit while you’re learning:

    RUN apt-get update

    Next, you will set the arguments and commands for downloading MongoDB. You can check the MongoDB docs for a full explanation of the installation process, but the code is pretty straightforward. First, add the package verification key:

    RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10

    Then, add MongoDB to the repository sources list:

    RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list

    Update the repository sources list one more time:

    RUN apt-get update

    Install the MongoDB package and create the default data directory:

    RUN apt-get install -y mongodb-10gen
    RUN mkdir -p /data/db

    The next step is setting the default port for MongoDB:

    EXPOSE 27017
    CMD ["--port", "27017"]

    Finally, set a default container command:

    ENTRYPOINT ["/usr/bin/mongod"]

    You can now save the file. Excluding any documentation, your Dockerfile should look as follows:

    FROM ubuntu
    MAINTAINER Darth Vader
    RUN apt-get update
    RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
    RUN apt-get update
    RUN apt-get install -y mongodb-10gen
    RUN mkdir -p /data/db
    EXPOSE 27017
    CMD ["--port", "27017"]
    ENTRYPOINT ["/usr/bin/mongod"]
    Step 2 – Build an Image


    Now that you’ve got the hard part out of the way, you can create an image by typing the following into the Docker command line:

    docker build -t my_mongodb .

    The -t flag tags the image. You can see all of your options for images by running:

    docker build --help
    Step 3 – Running MongoDB


    Finally, we can create a container that runs an instance of MongoDB in Ubuntu. Be sure to give it a name, or else Docker will assign a random alphanumeric ID:

    sudo docker run --name my_first_mdb_instance -i -t my_mongodb

    The application should now run consistently on any machine with Docker installed. To see your containers and their IDs, run the following command (the -l flag limits the list to the most recently created container):

    sudo docker ps -l
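    Note that EXPOSE in the Dockerfile only documents the port; to reach MongoDB from the host, the port still has to be published at run time. A sketch reusing the my_mongodb image (the container name here is just an example):

    ```shell
    # -d runs the container in the background; -p maps host port 27017
    # to the container's exposed port 27017
    sudo docker run -d --name mdb_published -p 27017:27017 my_mongodb
    ```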
    Tips for Using Docker Containers


    If you’re new to Docker, there are a few things you need to know. Below are some tips to keep in mind when using Docker containers.

    1. Know Your Docker Commands


    There are dozens of helpful commands that you should take advantage of to get the most out of Docker. For example, by using docker exec, you can run commands inside a running Docker container. You can stop a running container at any time by using docker stop [name] and restart it using docker start [name]. Study the Docker docs to get a full sense of everything you can do with Docker.
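    As a short illustration, using the container created earlier (commands assume the Docker daemon is running):

    ```shell
    # Run a command inside the running container (here: an interactive shell)
    docker exec -it my_first_mdb_instance /bin/bash

    # Stop the container, then bring the same container back up later
    docker stop my_first_mdb_instance
    docker start my_first_mdb_instance
    ```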

    2. Keep Your Containers Ephemeral


    Ephemeral means short-lived, so you should create containers that are easy to stop, destroy and rebuild with minimal configuration. Ideally, you’ll be reusing a lot of the same code. The Linux website has an insightful article about managing persistent data storage while keeping your containers ephemeral.
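    One common way to keep a container disposable while its data survives is to mount the data directory from the host with the -v flag; a sketch using the my_mongodb image built earlier (the host path is just an example):

    ```shell
    # Keep MongoDB's data on the host so the container stays throwaway
    docker run -d --name mdb -v /srv/mongo-data:/data/db my_mongodb

    # The container can be destroyed and recreated without losing data
    docker rm -f mdb
    docker run -d --name mdb -v /srv/mongo-data:/data/db my_mongodb
    ```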

    3. Add a .dockerignore File


    You should generally put each Dockerfile in an empty directory and include only the files that are necessary for your build. However, if you want an extra performance boost, you can add a .dockerignore file to exclude certain larger files from going to the Docker Daemon. These files work similarly to .gitignore files, and you can read about how to use them in the Docker Docs.
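    A .dockerignore file lives next to the Dockerfile and uses glob-style patterns much like .gitignore; the entries below are purely illustrative:

    ```
    # Exclude version-control data, logs and local artifacts
    # from the build context sent to the Docker Daemon
    .git
    *.log
    tmp/
    ```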

    4. Don’t Install Unnecessary Packages


    If you install something and never use it, it’s only slowing you down. For example, your database image doesn’t need a text editor. Avoid the instinct to install extra packages that you don’t really need.

    5. Limit Your Layers


    When making Dockerfiles, strive to maintain a balance between readability and complexity while keeping layers to a minimum. To take advantage of layer caching, keep the code that is least likely to change at the top of your Dockerfiles.
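    One common way to limit layers is to chain related shell commands into a single RUN instruction, so that they produce one cached layer instead of several; a sketch based on the MongoDB example above:

    ```dockerfile
    # One layer for the whole update-and-install step instead of three
    RUN apt-get update && \
        apt-get install -y mongodb-10gen && \
        rm -rf /var/lib/apt/lists/*
    ```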

    6. Sort Multi-line Arguments Alphanumerically


    Sorting your multi-line arguments this way helps you avoid duplicate packages, and it makes future alterations much easier.
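    For example, an install instruction with its packages sorted one per line (the package names here are illustrative):

    ```dockerfile
    # Alphabetical order makes duplicates obvious and future diffs small
    RUN apt-get update && apt-get install -y \
        bzip2 \
        curl \
        git \
        wget
    ```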

    7. One Concern Per Container


    While you don’t always have to dedicate an individual container to each operating system process, you should try to decouple applications as much as possible to keep your containers scalable and reusable. For example, you could have an application stack consisting of separate containers and images for your database and in-memory cache. If you have multiple containers that depend on one another, use Docker container networks to facilitate communication between them.
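    As a sketch of wiring two such containers together with a user-defined network (the container and network names are illustrative):

    ```shell
    # Containers on the same user-defined bridge network can reach
    # each other by container name
    docker network create app-net
    docker run -d --name db --network app-net mongo
    docker run -d --name cache --network app-net redis
    ```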

    Docker Tutorial – In Summary


    Learning how to use Docker will certainly come in handy if you find yourself working with a large team of developers. This Docker tutorial has only covered the basics of containers. If you’re truly captivated by containers, you can join the Open Container Initiative community, which is working toward establishing industry standards for container formats.

    The post Docker Tutorial – Getting Started with Containers appeared first on KeyCDN Blog.

    Last edited: Jul 21, 2017