ZooKeeper, Big Data

zookeeper-04 ZooKeeper in Practice

Distributed Installation and Deployment

Cluster planning:

Deploy ZooKeeper on three nodes: hadoop001, hadoop002, and hadoop003.

Extract and install:

tar -zxvf apache-zookeeper-3.5.7-bin.tar.gz -C /opt/module
mv apache-zookeeper-3.5.7-bin zookeeper-3.5.7

Distribute it to the other nodes (xsync here is a custom cluster distribution script; plain scp or rsync works as well):

xsync zookeeper-3.5.7

Configure the server ID

  • Create a zkData directory under zookeeper-3.5.7
mkdir zookeeper-3.5.7/zkData

Create a myid file under zkData and write the server's unique ID into it (1, 2, or 3). On hadoop001:

touch myid
echo 1 >> myid
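
On hadoop002 and hadoop003 the same file is created with 2 and 3 respectively, matching the cluster plan above:

echo 2 >> myid    # on hadoop002
echo 3 >> myid    # on hadoop003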

Configure zoo.cfg

  • Rename the conf/zoo_sample.cfg file to zoo.cfg
  • Change the data storage path
dataDir=/opt/module/zookeeper-3.5.7/zkData

Add the cluster server entries:

server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888

Finally, distribute the updated zoo.cfg to the other nodes.

server.A=B:C:D
  • Parameter breakdown
    • A is a number identifying which server this is; it must match the value written in that server's myid file
    • B is the server's hostname or IP address
    • C is the port this server uses to exchange information with the cluster's Leader (2888 above)
    • D is the port used for leader election: if the Leader goes down, the servers communicate over this port to elect a new one (3888 above)
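
Putting the pieces together, a complete zoo.cfg for this cluster might look like the following; the tickTime, initLimit, syncLimit, and clientPort values are the defaults carried over from zoo_sample.cfg, not settings from this guide:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/module/zookeeper-3.5.7/zkData
clientPort=2181
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888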

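With the configuration in place, each node's server would typically be started before using the client; bin/zkServer.sh start and bin/zkServer.sh status are the standard scripts shipped with ZooKeeper (running them on all three hosts is assumed here):

bin/zkServer.sh start
bin/zkServer.sh status
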
Client Command-Line Operations

  • ls path
    • List the children of the given znode
    • -s: also show the node's stat data (version counters, update counts, etc.)
  • create
    • Plain create
    • -s: sequential node (an increasing sequence number is appended to the name)
    • -e: ephemeral node (removed when the client session ends or times out)
  • get path
    • Get a node's value
  • set
    • Set a node's value
  • stat
    • Show a node's status
  • delete
    • Delete a node
  • deleteall
    • Delete a node and all of its children recursively

Start the client:

bin/zkCli.sh
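
Once connected, a short session exercising the commands above might look like this; the /sanguo znode and its values are purely illustrative:

create /sanguo "diaochan"
ls /
get -s /sanguo
set /sanguo "xiaoqiao"
stat /sanguo
create -e /sanguo/wuguo "zhouyu"
create -s /sanguo/weiguo "caocao"
delete /sanguo/wuguo
deleteall /sanguo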

Using the API

Create a Maven project

  • pom.xml
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.13</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.zookeeper</groupId>
        <artifactId>zookeeper</artifactId>
        <version>3.5.7</version>
    </dependency>
</dependencies>

Copy a log4j.properties file into the project's resources directory:

log4j.rootLogger=INFO,stdout,logfile
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

Code

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;
import java.util.List;

public class TestZooKeeper {
    private String connectString = "hadoop001:2181,hadoop002:2181,hadoop003:2181";
    private int sessionTimeout = 2000;
    private ZooKeeper zkClient;

    // Create the ZooKeeper client connection
    // @Before: runs before each test
    @Before
    public void init() throws IOException {
        zkClient = new ZooKeeper(connectString, sessionTimeout, new Watcher() {
            public void process(WatchedEvent event) {
//                try {
//                    List<String> children = zkClient.getChildren("/", true);
//                    for (String child : children) {
//                        System.out.println(child);
//                    }
//                    System.out.println("------------");
//                } catch (KeeperException e) {
//                    e.printStackTrace();
//                } catch (InterruptedException e) {
//                    e.printStackTrace();
//                }
            }
        });
    }

    // Create a znode
    @Test
    public void createNode() throws KeeperException, InterruptedException {
        String path = zkClient.create("/sanguo", "sanguo".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        System.out.println(path);
    }

    // Get child nodes and register a watch for changes
    @Test
    public void getChildren() throws KeeperException, InterruptedException {
        List<String> children = zkClient.getChildren("/", true);
        for (String child : children) {
            System.out.println(child);
        }
        // Block so the process stays alive and watch events can be delivered
        Thread.sleep(Long.MAX_VALUE);
    }

    // Check whether a znode exists
    @Test
    public void exists() throws KeeperException, InterruptedException {
        Stat stat = zkClient.exists("/consumers", false);
        System.out.println(stat == null ? "not exists" : "exists");
    }
}
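
The tests above cover create, getChildren, and exists. As a hedged sketch (not part of the original code), reading a node's data, deleting a node, and closing the session could look like the following; these methods would go inside TestZooKeeper and additionally need import org.junit.After:

    // Read a node's data; assumes /sanguo was created by createNode above
    @Test
    public void getData() throws KeeperException, InterruptedException {
        byte[] data = zkClient.getData("/sanguo", false, new Stat());
        System.out.println(new String(data));
    }

    // Delete a node; version -1 means "match any version"
    @Test
    public void deleteNode() throws KeeperException, InterruptedException {
        zkClient.delete("/sanguo", -1);
    }

    // Close the client connection after each test
    @After
    public void close() throws InterruptedException {
        zkClient.close();
    }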