Copy Pastes from Docker’s Get Started doc

Create an image, manage containers, Docker Cloud

https://docker.github.io/get-started/part2/#log-in-with-your-docker-id


Dockerfile

Create an empty directory. Change directories (cd) into the new directory, create a file called Dockerfile, copy-and-paste the following content into that file, and save it. Take note of the comments that explain each statement in your new Dockerfile.

# Use an official Python runtime as a parent image

FROM python:2.7-slim

 

# Set the working directory to /app

WORKDIR /app

 

# Copy the current directory contents into the container at /app

ADD . /app

 

# Install any needed packages specified in requirements.txt

RUN pip install -r requirements.txt

 

# Make port 80 available to the world outside this container

EXPOSE 80

 

# Define environment variable

ENV NAME World

 

# Run app.py when the container launches

CMD ["python", "app.py"]


requirements.txt

Flask

Redis

app.py

from flask import Flask

from redis import Redis, RedisError

import os

import socket

 

# Connect to Redis

redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

 

app = Flask(__name__)

 

@app.route("/")

def hello():

    try:

        visits = redis.incr("counter")

    except RedisError:

        visits = "<i>cannot connect to Redis, counter disabled</i>"

 

    html = "<h3>Hello {name}!</h3>" \

           "<b>Hostname:</b> {hostname}<br/>" \

           "<b>Visits:</b> {visits}"

    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname(), visits=visits)

 

if __name__ == "__main__":

    app.run(host='0.0.0.0', port=80)


Create image

$ ls

Dockerfile app.py requirements.txt


Now run the build command. This creates a Docker image, which we’re going to tag using -t so it has a friendly name. 

$ docker build -t friendlyhello .

Where is your built image? It’s in your machine’s local Docker image registry:

$ docker images

REPOSITORY            TAG                 IMAGE ID

friendlyhello         latest              326387cea398


Figure 1: Each container image seems to be stored in a dedicated repository, in the local Docker Registry.


Run container from image

Run the app, mapping your machine’s port 4000 to the container’s published port 80 using -p:

$ docker run -p 4000:80 friendlyhello

Now let’s run the app in the background, in detached mode:

docker run -d -p 4000:80 friendlyhello
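
Either way, you can verify the app responds on the mapped port (assuming port 4000, as above):

$ curl http://localhost:4000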

List running containers:

docker container ls

Stop a container, using its ID as shown by docker container ls:

docker stop <container ID>


Upload your image to a repo in Docker Cloud

Figure 2: cloud.docker.com hosts a cloud Docker registry for registered users; it is linked with hub.docker.com.

Log in to cloud.docker.com:

docker login


Tag a local image with your repository name (this adds a new name pointing to the same image; it does not copy it):

docker tag <image> username/repository:tag
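
For example, using the image built above and a hypothetical Docker ID johndoe:

docker tag friendlyhello johndoe/get-started:part2   # placeholder Docker ID and repository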


Push the tagged image to the remote repository:

docker push username/repository:tag


If you are logged in to Docker Cloud but don't have the image locally, it is pulled from the remote repository when you run it.
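
For example, to run the published image by its remote name (same placeholders as above):

docker run -p 4000:80 username/repository:tag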


Part 2: Compose

 

Figure 3: Docker Compose (the octopus) arranges multiple containers on a swarm (i.e. a cluster of machines, each running a Docker Engine). You can scatter those containers on different machines.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.
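
As an aside, a Compose file like the one below can usually also be brought up on a single machine with the docker-compose tool (a sketch only; note that the deploy: settings below are honored by docker stack deploy, not by docker-compose):

docker-compose up -d   # start all services defined in docker-compose.yml
docker-compose down    # stop and remove them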

New load-balanced app

docker-compose.yml

version: "3"

services:

  web:

    # replace username/repo:tag with your name and image details

    image: suanik/get-shuifty:morty

    deploy:

      replicas: 5

      resources:

        limits:

          cpus: "0.1"

          memory: 50M

      restart_policy:

        condition: on-failure

    ports:

      - "80:80"

    networks:

      - webnet

networks:

  webnet:




Figure 4: Docker Swarm. The picture is misleading, because a container cannot be run by several machines at once.

Before we can use the docker stack deploy command, we first run:

docker swarm init

We now have a one-node swarm running.

Now let's deploy our stack. You have to give your app a name. Here, it is set to getstartedlab:

docker stack deploy -c docker-compose.yml getstartedlab

Our single-service stack is running 5 container instances of our deployed image on one host. Let's investigate.

Get the service ID for the one service in our application:

docker service ls

Docker swarms run tasks that spawn containers. Tasks have state and their own IDs:

docker service ps <serviceID>

Let's inspect one task and limit the output to the container ID:

docker inspect --format='{{.Status.ContainerStatus.ContainerID}}' <task>

Vice versa, inspect the container ID, and extract the task ID:

docker inspect --format="{{index .Config.Labels \"com.docker.swarm.task.id\"}}" <container>

Now list all 5 containers:

docker container ls -q
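
As a sketch, the two lookups can be chained to jump straight from the service to one of its containers:

docker inspect --format='{{.Status.ContainerStatus.ContainerID}}' $(docker service ps -q <serviceID> | head -n 1)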

Scale the app

You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:

docker stack deploy -c docker-compose.yml getstartedlab

Docker performs an in-place update; there is no need to tear the stack down first or kill any containers.
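
After redeploying, you can confirm the new replica count by listing the container IDs again:

docker container ls -q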

Take the app down with docker stack rm:

docker stack rm getstartedlab

This removes the app, but our one-node swarm is still up and running (as shown by docker node ls). Take down the swarm with docker swarm leave --force.


Cheatsheet:

docker stack ls                                            # List stacks or apps

docker stack deploy -c <composefile> <appname>  # Run the specified Compose file

docker service ls                 # List running services associated with an app

docker service ps <service>                  # List tasks associated with an app

docker inspect <task or container>                   # Inspect task or container

docker container ls -q                                      # List container IDs

docker stack rm <appname>                             # Tear down an application


Part 4: Swarms

https://docker.github.io/get-started/part4/


Run docker swarm init to enable swarm mode and make your current machine a swarm manager, then run docker swarm join on other machines to have them join the swarm as workers.


Create a cluster

Now create a couple of VMs with docker-machine, using the VirtualBox driver:

$ docker-machine create --driver virtualbox myvm1

$ docker-machine create --driver virtualbox myvm2

You can send commands to your VMs using docker-machine ssh. Instruct myvm1 to become a swarm manager with docker swarm init and you’ll see output like this:

$ docker-machine ssh myvm1 "docker swarm init"

As you can see, the response to docker swarm init contains a pre-configured docker swarm join command for you to run on any nodes you want to add. Copy this command, and send it to myvm2 via docker-machine ssh to have myvm2 join your new swarm as a worker:

$ docker-machine ssh myvm2 "docker swarm join \

--token <token> \

<ip>:<port>"

Use ssh to connect to the swarm manager (docker-machine ssh myvm1), and run docker node ls to view the nodes in this swarm:
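
For example, straight from your host via docker-machine:

$ docker-machine ssh myvm1 "docker node ls"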

Deploy your app on a cluster

Copy the docker-compose.yml file you created earlier to the swarm manager myvm1's home directory:

docker-machine scp docker-compose.yml myvm1:~


docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"


$ docker-machine ssh myvm1 "docker stack ps getstartedlab"

Accessing your cluster

You can access your app from the IP address of either myvm1 or myvm2. The network you created is shared between them and load-balanced. Run docker-machine ls to get your VMs' IP addresses and visit either of them in a browser, hitting refresh (or just curl them). You'll see five possible container IDs all cycling by randomly, demonstrating the load balancing.
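
A quick sketch for hitting the app repeatedly from your host and watching the hostname change (using docker-machine ip to resolve the VM's address):

$ for i in 1 2 3 4 5; do curl -s http://$(docker-machine ip myvm1); echo; done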

The reason both IP addresses work is that nodes in a swarm participate in an ingress routing mesh. This ensures that a service deployed at a certain port within your swarm always has that port reserved to itself, no matter what node is actually running the container. Here’s a diagram of how a routing mesh for a service called my-web published at port 8080 on a three-node swarm would look:

routing mesh diagram
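
For reference, a standalone service like the one in the diagram could be published on the mesh roughly like this (illustrative only; this guide publishes ports via the Compose file instead):

docker service create --name my-web --replicas 3 -p 8080:80 nginx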


Cleanup

You can tear down the stack with docker stack rm. For example:

docker-machine ssh myvm1 "docker stack rm getstartedlab"

At some point later, you can remove this swarm if you want to with docker-machine ssh myvm2 "docker swarm leave" on the worker and docker-machine ssh myvm1 "docker swarm leave --force" on the manager.

 

 

Cheatsheet

docker-machine create --driver virtualbox myvm1 # Create a VM (Mac, Win7, Linux)

docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1 # Win10

docker-machine env myvm1                # View basic information about your node

docker-machine ssh myvm1 "docker node ls"         # List the nodes in your swarm

docker-machine ssh myvm1 "docker node inspect <node ID>"        # Inspect a node

docker-machine ssh myvm1 "docker swarm join-token -q worker"   # View join token

docker-machine ssh myvm1   # Open an SSH session with the VM; type "exit" to end

docker-machine ssh myvm2 "docker swarm leave"  # Make the worker leave the swarm

docker-machine ssh myvm1 "docker swarm leave -f" # Make master leave, kill swarm

docker-machine start myvm1            # Start a VM that is currently not running

docker-machine stop $(docker-machine ls -q)               # Stop all running VMs

docker-machine rm $(docker-machine ls -q) # Delete all VMs and their disk images

docker-machine scp docker-compose.yml myvm1:~     # Copy file to node's home dir

docker-machine ssh myvm1 "docker stack deploy -c <file> <app>"   # Deploy an app



Part 5: Stacks

https://docker.github.io/get-started/part5/

Add a new service and redeploy

It’s easy to add services to our docker-compose.yml file. First, let’s add a free visualizer service that lets us look at how our swarm is scheduling containers.

  1. Open up docker-compose.yml in an editor and replace its contents with the following. Be sure to replace username/repo:tag with your image details.

version: "3"

services:

  web:

    # replace username/repo:tag with your name and image details

    image: username/repo:tag

    deploy:

      replicas: 5

      restart_policy:

        condition: on-failure

      resources:

        limits:

          cpus: "0.1"

          memory: 50M

    ports:

      - "80:80"

    networks:

      - webnet

  visualizer:

    image: dockersamples/visualizer:stable

    ports:

      - "8080:8080"

    volumes:

      - "/var/run/docker.sock:/var/run/docker.sock"

    deploy:

      placement:

        constraints: [node.role == manager]

    networks:

      - webnet

networks:

  webnet:


  2. Re-run the docker stack deploy command on the manager, and whatever services need updating will be updated:

     $ docker stack deploy -c docker-compose.yml getstartedlab


Now you can visit the visualizer service in a browser at port 8080 on either node's IP address.

Figure 5: The single copy of visualizer is running on the manager as you expect, and the 5 instances of web are spread out across the swarm.

The visualizer is a standalone service that can run in any app that includes it in the stack. It doesn’t depend on anything else.

You can corroborate this visualization by running docker stack ps <stack>:
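
For example, against the stack deployed above:

docker-machine ssh myvm1 "docker stack ps getstartedlab"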

Persist the data

Save this new docker-compose.yml file, which finally adds a Redis service.

version: "3"

services:

  web:

    # replace username/repo:tag with your name and image details

    image: suanik/get-shuifty:morty

    deploy:

      replicas: 5

      restart_policy:

        condition: on-failure

      resources:

        limits:

          cpus: "0.1"

          memory: 50M

    ports:

      - "80:80"

    networks:

      - webnet

  visualizer:

    image: dockersamples/visualizer:stable

    ports:

      - "8080:8080"

    volumes:

      - "/var/run/docker.sock:/var/run/docker.sock"

    deploy:

      placement:

        constraints: [node.role == manager]

    networks:

      - webnet

  redis:

    image: redis

    ports:

      - "6379:6379"

    volumes:

      - /home/docker/data:/data

    deploy:

      placement:

        constraints: [node.role == manager]

    command: redis-server --appendonly yes

    networks:

      - webnet

networks:

  webnet:


  1. Create a ./data directory on the manager:

docker-machine ssh myvm1 "mkdir ./data"

  2. Run docker stack deploy one more time.

$ docker stack deploy -c docker-compose.yml getstartedlab

  3. Run docker service ls to verify that the three services are running as expected; see the example below.
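
A sketch of that check from the host, via docker-machine (you should see the web, visualizer, and redis services listed):

docker-machine ssh myvm1 "docker service ls"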

