1 Installation
This guide installs OpenTSDB by downloading the source code from GitHub and compiling it.
2 Runtime Environment
- A Linux system (CentOS 6.5 is used here)
- JDK 1.6 or later (JDK 1.7 is used here)
- HBase 0.92 or later (HBase 1.0.0 is used here)
- GnuPlot 4.2 or later
3 Building from Source
3.1 Requirements
- A Linux system
- Java Development Kit 1.6 or later
- GnuPlot 4.2 or later
- Autotools (autoconf, automake, and libtool)
- Make
- Python
- Git
- An Internet connection
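The requirements above can be checked quickly with a shell loop before starting the build; the tool names are taken from the list (install any that are missing, e.g. via yum on CentOS):

```shell
# Sketch: check whether each build prerequisite is on the PATH.
# Tool names come from the requirements list above; adjust as needed.
for tool in autoconf automake libtool make python git gnuplot; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```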
3.2 Downloading the Source
There are several ways to get the source: you can run git clone directly on the Linux machine, or download the source from GitHub and upload it to the machine yourself.
The following demonstrates using git clone:
git clone https://github.com/OpenTSDB/opentsdb.git
cd opentsdb
./build.sh
When the build finishes, you will find a build directory under the opentsdb directory.
3.3 Creating the Tables
If this is your first time using OpenTSDB, run the following commands to create the required tables (you can also create them yourself in the hbase shell):
cd /opentsdb/src
env COMPRESSION=NONE HBASE_HOME=<path to your HBase installation> ./create_table.sh
e.g.: env COMPRESSION=NONE HBASE_HOME=/home/hadoop/app/hbase/ ./create_table.sh
Running the command above creates four tables in HBase: tsdb, tsdb-uid, tsdb-tree, and tsdb-meta.
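If you prefer to create the tables yourself in the hbase shell, a minimal sketch looks like this (create_table.sh also sets compression and bloom-filter options, omitted here; check the script shipped with your version for the exact DDL):

```
hbase shell
create 'tsdb-uid',  {NAME => 'id'}, {NAME => 'name'}
create 'tsdb',      {NAME => 't'}
create 'tsdb-tree', {NAME => 't'}
create 'tsdb-meta', {NAME => 'name'}
```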
3.4 Editing the Configuration File
- Go into the src directory and copy its opentsdb.conf file into the build directory
cd src
cp opentsdb.conf /home/hadoop/app/opentsdb/build/
- Go into the build directory and edit opentsdb.conf
Notes:
- Every setting marked REQUIRED in the file below must be configured, otherwise the TSD will fail to start
- You need to create the cachedir directory for cache files, and it is best to clean it with a scheduled job so the disk does not fill up and cause avoidable problems
Pay particular attention to the property tsd.storage.hbase.zk_quorum = master:2181,slave2:2181,slave3:2181
Because HBase in my cluster uses an external ZooKeeper ensemble, this property must be set to the same ZooKeeper quorum that HBase uses.
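As a sketch of the scheduled cleanup mentioned above, a find-based purge can remove cache files older than one day; the path is this guide's cachedir and the one-day cutoff is an assumption — tune both to your setup:

```shell
# Assumed cachedir path from this guide; override CACHEDIR for your setup.
CACHEDIR="${CACHEDIR:-/home/hadoop/app/opentsdb/build/cachedir}"
# Delete cache files last modified more than 24 hours ago (ignore a missing dir).
find "$CACHEDIR" -type f -mtime +1 -delete 2>/dev/null || true
# A crontab entry to run this nightly at 03:00 could look like:
# 0 3 * * * find /home/hadoop/app/opentsdb/build/cachedir -type f -mtime +1 -delete
```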
# --------- NETWORK ----------
# The TCP port TSD should use for communications
# *** REQUIRED ***
tsd.network.port = 4242
# The IPv4 network address to bind to, defaults to all addresses
# tsd.network.bind = 0.0.0.0
# Disable Nagle's algorithm, default is True
#tsd.network.tcp_no_delay = true
# Determines whether or not to send keepalive packets to peers, default
# is True
tsd.network.keep_alive = true
# Determines if the same socket should be used for new connections, default
# is True
#tsd.network.reuse_address = true
# Number of worker threads dedicated to Netty, defaults to # of CPUs * 2
#tsd.network.worker_threads = 8
# Whether or not to use NIO or traditional blocking IO, defaults to True
#tsd.network.async_io = true
# ----------- HTTP -----------
# The location of static files for the HTTP GUI interface.
# *** REQUIRED ***
tsd.http.staticroot = /home/hadoop/app/opentsdb/build/staticroot
# Where TSD should write its cache files to
# *** REQUIRED ***
tsd.http.cachedir = /home/hadoop/app/opentsdb/build/cachedir
# --------- CORE ----------
# Whether or not to automatically create UIDs for new metric types, default
# is False
tsd.core.auto_create_metrics = true
# Whether or not to enable the built-in UI Rpc Plugins, default
# is True
#tsd.core.enable_ui = true
# Whether or not to enable the built-in API Rpc Plugins, default
# is True
#tsd.core.enable_api = true
# --------- STORAGE ----------
# Whether or not to enable data compaction in HBase, default is True
#tsd.storage.enable_compaction = true
# How often, in milliseconds, to flush the data point queue to storage,
# default is 1,000
tsd.storage.flush_interval = 1000
# Max number of rows to be returned per Scanner round trip
# tsd.storage.hbase.scanner.maxNumRows = 128
# Name of the HBase table where data points are stored, default is "tsdb"
#tsd.storage.hbase.data_table = tsdb
# Name of the HBase table where UID information is stored, default is "tsdb-uid"
#tsd.storage.hbase.uid_table = tsdb-uid
# Path under which the znode for the -ROOT- region is located, default is "/hbase"
#tsd.storage.hbase.zk_basedir = /hbase
# A comma separated list of Zookeeper hosts to connect to, with or without
# port specifiers, default is "localhost"
tsd.storage.hbase.zk_quorum = master:2181,slave2:2181,slave3:2181
# --------- COMPACTIONS ---------------------------------
# Frequency at which compaction thread wakes up to flush stuff in seconds, default 10
tsd.storage.compaction.flush_interval = 1000
# Minimum rows attempted to compact at once, default 100
# tsd.storage.compaction.min_flush_threshold = 100
# Maximum number of rows, compacted concurrently, default 10000
# tsd.storage.compaction.max_concurrent_flushes = 10000
# Compaction flush speed multiplier, default 2
# tsd.storage.compaction.flush_speed = 2
3.5 Starting the TSD
Go into the opentsdb/build directory:
cd build
./tsdb tsd
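Once the TSD is up, you can verify it accepts writes with the telnet-style put command (metric name, Unix timestamp, value, tags). The metric sys.cpu.user, the tag host=web01, and the address below are illustrative assumptions:

```shell
# Build a telnet-style "put" line: put <metric> <timestamp> <value> <tag=value>
ts=$(date +%s)
line="put sys.cpu.user ${ts} 42.5 host=web01"
echo "$line"
# Send it to a running TSD (uncomment once the daemon is up):
# printf '%s\n' "$line" | nc -w 1 192.168.80.175 4242
```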
3.6 Viewing the Web UI
Open OpenTSDB's built-in web interface (admittedly ugly and not very practical) to view time-series charts of your metrics. This demo uses 192.168.80.175:4242; the port is the one configured in opentsdb.conf.
That concludes the OpenTSDB deployment guide. If you need to deploy more nodes, simply copy the compiled opentsdb directory to the other nodes and run it there.