Setting up HBase with Docker

Standalone HBase setup

###Download the image###
I have prepared an image with standalone HBase already integrated; run the following command to pull it locally:

docker pull bolingcavalry/centos7-hbase126-standalone:0.0.1

###Run the container###

Run the following command to create a container named hbase001 from the image just pulled, with port 60010 mapped to the host:

docker run --name=hbase001 -p 60010:60010 -idt bolingcavalry/centos7-hbase126-standalone:0.0.1

###Enter the container###
Run the following command to get a shell inside the hbase001 container:

docker exec -it hbase001 /bin/bash

###Start the HBase service###
Once inside the hbase001 container, run the following command to start the HBase service:

/usr/local/work/hbase/bin/start-hbase.sh
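To confirm the service actually came up, one quick check (assuming the JDK's jps tool is available inside this image, which is an assumption about the image rather than something stated above) is:

jps
# in standalone mode a single HMaster process is expected to appear in the jps output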

###Enter the HBase shell###
Run the following command to enter the HBase shell:

hbase shell
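Once in the shell, a few basic commands can confirm that HBase is working. This is only a minimal sketch; the table name test_table and column family cf are made-up examples, not taken from the notes above:

create 'test_table', 'cf'                      # create a table with one column family
put 'test_table', 'row1', 'cf:a', 'value1'     # write a single cell
get 'test_table', 'row1'                       # read that row back
scan 'test_table'                              # scan the whole table
list                                           # list all tables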

Notes from today's work

Starting Spring Boot + NGINX + MySQL with Docker

Directory to run from:
/Users/squareface/IdeaProjects/dockercompose-springboot-mysql-nginx

Contents of docker-compose.yaml:


version: '3'
services:
  hk-nginx:
    container_name: hk-nginx
    image: nginx:1.13
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - app

  hk-mysql:
    container_name: hk-mysql
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: test
      MYSQL_ROOT_PASSWORD: hellokoding
      MYSQL_ROOT_HOST: '%'
    ports:
      - "3306:3306"
    restart: always

  app:
    restart: always
    build: ./app
    working_dir: /app
    volumes:
      - ./app:/app
      - ~/.m2:/root/.m2
    expose:
      - "8080"
    command: mvn clean spring-boot:run
    depends_on:
      - hk-mysql

Run docker-compose up and the project starts automatically. Four new images appear:
mysql
maven
nginx
dockercompose-springboot-mysql-nginx_app

Where in the project are the Spring Boot and NGINX configurations defined?

NGINX acts as a reverse proxy: it listens on a port and forwards requests to the application.
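As a tentative answer to the question above: judging from the compose file, the NGINX configuration lives in ./nginx/conf.d (mounted into the nginx container), and the Spring Boot application lives under ./app (built and run by Maven). A minimal reverse-proxy sketch for a conf.d file, assuming the upstream is the app service on port 8080 as declared in the compose file (the file name and header values are illustrative, not taken from the project):

# ./nginx/conf.d/app.conf (hypothetical file name)
server {
    listen 80;

    location / {
        # forward every request to the Spring Boot container named "app" in docker-compose
        proxy_pass http://app:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}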

Setting up fully distributed HBase with Docker

Image used: bolingcavalry/centos6.7-jdk1.8-ssh:0.0.1

1. Run the following docker-compose.yaml file; it creates three containers named master, slave1, and slave2.

version: '2'
services:
  master:
    image: bolingcavalry/centos6.7-jdk1.8-ssh:0.0.1
    container_name: master
    ports:
      - "19010:22"
      - "50070:50070"
      - "8088:8088"
      - "16010:16010"
    restart: always
  slave1:
    image: bolingcavalry/centos6.7-jdk1.8-ssh:0.0.1
    container_name: slave1
    depends_on:
      - master
    ports:
      - "19011:22"
    restart: always
  slave2:
    image: bolingcavalry/centos6.7-jdk1.8-ssh:0.0.1
    container_name: slave2
    depends_on:
      - slave1
    ports:
      - "19012:22"
    restart: always

Query the containers' IP addresses (the docker inspect sketch below the list is one way to get them):
master 172.19.0.2
slave1 172.19.0.3
slave2 172.19.0.4
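One way to obtain these addresses is docker inspect with a Go template (a sketch; the container names come from the compose file above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' master
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' slave1
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' slave2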

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
32481e89f8f8 bolingcavalry/centos6.7-jdk1.8-ssh:0.0.1 "/bin/sh -c 'service…" 12 minutes ago Up 12 minutes 8088/tcp, 16010/tcp, 50070/tcp, 0.0.0.0:19012->22/tcp slave2
2c3c9dd989ad bolingcavalry/centos6.7-jdk1.8-ssh:0.0.1 "/bin/sh -c 'service…" 12 minutes ago Up 12 minutes 8088/tcp, 16010/tcp, 50070/tcp, 0.0.0.0:19011->22/tcp slave1
bcd27e44e3d6 bolingcavalry/centos6.7-jdk1.8-ssh:0.0.1 "/bin/sh -c 'service…" 12 minutes ago Up 12 minutes 0.0.0.0:8088->8088/tcp, 0.0.0.0:16010->16010/tcp, 0.0.0.0:50070->50070/tcp, 0.0.0.0:19010->22/tcp master

2. ###Configure hostname and hosts###

Edit master's /etc/sysconfig/network file, changing HOSTNAME=localhost.localdomain to HOSTNAME=master; make the same change on slave1 and slave2, setting HOSTNAME to slave1 and slave2 respectively.
Then edit /etc/hosts on master, slave1, and slave2, adding the same entries to each (a command sketch for this follows the list):
172.19.0.2 master
172.19.0.3 slave1
172.19.0.4 slave2
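A sketch of applying the hosts entries from the Docker host rather than editing each file by hand (the loop is illustrative, not from the original notes; the IPs are the ones listed above):

for c in master slave1 slave2; do
  docker exec "$c" sh -c 'printf "172.19.0.2 master\n172.19.0.3 slave1\n172.19.0.4 slave2\n" >> /etc/hosts'
done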

3. ###Configure passwordless SSH between master, slave1, and slave2###

- On master, slave1, and slave2, edit /etc/ssh/sshd_config, find the following lines, and remove the leading "#" comment marker from each:
RSAAuthentication yes
PubkeyAuthentication yes

- On master, slave1, and slave2, run ssh-keygen -t rsa and press Enter through every prompt; this generates the RSA key files under /root/.ssh.

- On master, run the following three commands; after they complete, the RSA public keys of all three containers are collected in /root/.ssh/authorized_keys:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh root@slave1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh root@slave2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

The first command writes master's own public key into authorized_keys; the second and third SSH into slave1 and slave2 respectively and append their public keys to master's authorized_keys file. Because these are SSH logins, a password is required; here it is "password".

- On slave1 and slave2, run the following command to copy the authorized_keys file over from master:

ssh root@master cat ~/.ssh/authorized_keys >> ~/.ssh/authorized_keys

Because master's authorized_keys already contains slave1's and slave2's public keys, running this command on slave1 and slave2 does not prompt for a password;

Now all three public keys are present on every container, so they can SSH into one another without a password; for example, running ssh root@slave2 on slave1 logs in to slave2 with no password prompt.

4. ###Create the required directories in the containers###
Create the following directories on master, slave1, and slave2 (a command sketch follows the list):

/usr/local/work
/opt/hbase
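A minimal way to create both directories on all three containers from the host (a sketch; assumes the containers from the compose file above are running):

for c in master slave1 slave2; do
  docker exec "$c" mkdir -p /usr/local/work /opt/hbase
done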

###Install the zookeeper-3.4.6 cluster###

Download zookeeper-3.4.6.tar.gz from the ZooKeeper website and extract it on the local machine;
then create a zoo.cfg file under zookeeper-3.4.6/conf/ with the following content:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/work/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=master:2887:3887
server.2=slave1:2888:3888
server.3=slave2:2889:3889

In effect, only the dataDir value was changed and the last three server.* lines were added; everything else is copied verbatim from zoo_sample.cfg.

  1. On the local machine, use scp (over SSH) to run the following three commands, copying the extracted zookeeper-3.4.6 directory (with the modified zoo.cfg) to master, slave1, and slave2:

scp -P 19010 -r ./zookeeper-3.4.6 root@localhost:/usr/local/work
scp -P 19011 -r ./zookeeper-3.4.6 root@localhost:/usr/local/work
scp -P 19012 -r ./zookeeper-3.4.6 root@localhost:/usr/local/work

Each command prompts for the password, which is "password".

  2. On master, slave1, and slave2, create the directory /usr/local/work/zkdata and, inside it, a file named myid whose content is "1", "2", and "3" respectively;
  3. On master, slave1, and slave2, start ZooKeeper with /usr/local/work/zookeeper-3.4.6/bin/zkServer.sh start;
  4. On each container, run the following command to check the status of the ZooKeeper cluster:
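The command itself is missing from the original notes; presumably it is the standard status check that ships with ZooKeeper, which on a healthy ensemble should report one node as Mode: leader and the other two as Mode: follower:

/usr/local/work/zookeeper-3.4.6/bin/zkServer.sh status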

###Starting zookeeper: error###
Ran into all sorts of strange problems!!!

Deleted /usr/local/work and /opt/hbase and started over.