Does Power BI support InfluxDB clusters?

Basho has open-sourced its time-series database product, Riak TS 1.3.
The code lives on the riak-ts branch of the riak repository on GitHub.
Riak KV, built on the Riak core, is a highly resilient, highly available key-value database. It is under continuous improvement, with a focus on data correctness and on preventing data loss and corruption.
Riak TS grew out of Riak KV and is purpose-built for time-series workloads. It inherits all of Riak KV's capabilities and applies them to the problems users face when handling time-series data. Which features did we actually implement? Here is a partial list:
A fast write path for data;
Schemas for data buckets;
A query planner and query subsystem;
Parallel data extraction across virtual nodes;
Flexible composite keys.
We also surveyed the market for time-series databases and found only a handful of solutions, none of them robust enough for enterprise production workloads. Existing options either lacked scalable clustering and resilience or were cumbersome to manage and operate, which made all of them poor choices.
To brainstorm solutions we held an architecture meeting, where one of our engineers proposed an interesting idea: distribute data around the hash ring by quantum (time range). A proof of concept built on this idea appeared to work well, and from there we began developing Riak TS, aiming to solve many of the harder problems in time-series data processing.
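The quantum idea described above can be sketched in a few lines: bucket each timestamp into a fixed time range (the quantum), then hash the (series, quantum) pair onto the ring so that points from the same range co-locate. This is an illustrative sketch only – the function names, the 15-minute quantum, and the ring size are assumptions, not Riak TS internals.

```python
import hashlib

QUANTUM_MS = 15 * 60 * 1000  # assumed 15-minute quantum
NUM_PARTITIONS = 64          # assumed size of the hash ring

def quantum_of(timestamp_ms):
    """Round a timestamp down to the start of its time-range quantum."""
    return timestamp_ms - (timestamp_ms % QUANTUM_MS)

def partition_for(series, timestamp_ms):
    """Hash (series, quantum) onto the ring, so points in the same time
    range of the same series land on the same partition."""
    key = "%s:%d" % (series, quantum_of(timestamp_ms))
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# Two points one second apart fall in the same quantum, so they are stored
# together and a range query over that window can hit a single partition.
t = 1600000000000
assert partition_for("sensor.42.temp", t) == partition_for("sensor.42.temp", t + 1000)
```

The payoff of hashing the quantum rather than each raw point is that range scans over a time window touch only the partitions owning those quanta, instead of the whole ring.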
/t/which-database-for-time-series-data/715/6
/en/system/Graphite%3BInfluxDB%3BRiak+TS
IoT databases should be as flexible as required by the application. NoSQL databases -- especially key-value, document and column family databases -- easily accommodate different data types and structures without the need for predefined, fixed schemas. NoSQL databases are good options when an organization has multiple data types and those data types will likely change over time. In other cases, applications that collect a fixed set of data -- such as data on weather conditions -- may benefit from a relational model. In-memory SQL databases, such as MemSQL, offer this benefit.
Managing a database for IoT applications in-house
For those organizations choosing to manage their own databases, HBase is a highly scalable distributed database that supports a flexible big table schema, fast writes, and scaling to large volumes of data. Riak TS is a distributed, highly scalable key-value data store that integrates with Apache Spark, a big data analytics platform that enables stream analytic processing. Cassandra also integrates with Spark, as well as with other big data analytics platforms such as Hadoop.
OpenTSDB is an open source database capable of running on Hadoop and HBase. The database is made up of command-line interfaces and a Time Series Daemon (TSD). TSDs, which are responsible for processing all database requests, run independently of one another. Even though TSDs use HBase to store time-series data, TSD users have little to no contact with HBase itself.
MemSQL is a relational database tuned for real-time data streaming. With MemSQL, streamed data, transactions and historical data can be kept within the same database. The database also works well with geospatial data out of the box, which can be useful for location-based IoT applications. MemSQL supports integration with Apache Spark, as well as with other data warehousing solutions.
Source: /feature/Find-the-IoT-database-that-best-fits-your-enterprises-needs
You’ve heard the hype: the Internet of Things (IoT) is going to connect more people to devices, more devices to the Internet, and generate more data than any major IT shift in history. IoT is going to be bigger than the web, mobile and the cloud, right? It’s still too early to tell for sure, but at InfluxData we are helping startups and enterprises every day to bring an interconnected world closer to reality.
What does time-series have to do with IoT? Everything, actually. Sensors and devices used in IoT architectures emit time-series data, and a lot of it.
Why are companies building IoT and sensor data solutions?
Whether it’s pH and humidity readings from an agri-sensor, depth and fluid readings from a geo-sensor or voltage and temperature from a power control sensor, these metrics are forming the basis of intelligent businesses. Common use cases we run across are:
Agro industries are monitoring and trying to control environmental conditions for optimal plant growth.
Power and utility companies are building smart solutions to reduce resource wastage for residential and commercial customers.
Research labs and heavy industries are tracking the resources, usage and health of millions of tiny valves and instruments that go into their massive production plants, factories and manufacturing facilities.
Smart cars are now powerful computers making runtime decisions based on data collected by 100s of sensors on every vehicle.
Challenges in building IoT and sensor data solutions
The key challenges organizations face while building an IoT solution are:
Bandwidth – Sensors are generally deployed on premises and need to communicate over wireless networks, so bandwidth constraints prevent sending large packets of data in real time.
Horsepower – Compute power on sensors is generally limited, so analytics software – programs, databases, even processing logic – needs to have a tiny footprint.
Concurrency – In industrial IoT, the number of sensors can easily run into the hundreds of thousands, each transmitting metrics every minute or so. Anticipating the backend database’s concurrency limits is crucial when designing such solutions.
Protocol – Because this space is evolving rapidly, there are no definitive standards for communication protocols; MQTT, AMQP, CoAP and others are used depending on the use case. IoT analytics solutions therefore need to support many communication protocols.
Scale – Data retention, compression and visualization have their own challenges at this data footprint. Businesses want to plot trends (WoW, MoM, YoY), and aggregating massive data sets can be very compute-heavy.
Source: /use-cases/iot-and-sensor-data/
NoSQL Database: The NoSQL database is typically used to address the fast data-ingest problem for device data. In some cases there may be a stream processor – e.g. Storm, Samza, Kinesis, etc. – addressing data filtering, routing and some lightweight processing, such as counts. However, the NoSQL database is typically used because, unlike most SQL databases, which top out at about 5,000 inserts/second, you can get up to 50,000 inserts/second from NoSQL databases. However, NoSQL databases are not designed to handle analytic processing of the data, or joins, which are common requirements for Internet of Things applications. NoSQL effectively provides a real-time data-ingest engine for data that is then moved to Hadoop using an extract, transform and load (ETL) process. ——In short: NoSQL writes fast, but data analysis and joins are inconvenient!
Building a Dashboard with Grafana, InfluxDB, and PowerCLI
There’s something fun about building snazzy graphs and charts in which the data points are arbitrary and ultimately decided upon by myself. This is why I’ve been having a blast building a few lab graphs using the recently released Grafana, which is “an open source, feature rich metrics dashboard and graph editor.” It’s certainly much simpler than others I’ve stood up, including some really slick tools that I have a lot of respect for. And it includes multiple methods by which you can share your work, such as local snapshots and published snapshots.
“Kicked the Grafana tires with some simple data feeds from the lab” – hoping this turns into a blogpost!
— Shane Schnell (@shaneschnell)
Because Shane asked for it, I’ve written down much of what I’ve been doing with Grafana in this post and tried to explain how I stood up various graphs. If I glossed over something significant, drop a comment and let me know.
Why Not Graphite?
I had originally planned to try out Graphite as the back-end data source, but ended up pivoting over to InfluxDB instead. I think Robin Moffatt over at Rittman Mead has the best reason:
Whilst I have spent many a happy hour using Graphite I’ve spent many a frustrating day and night trying to install the damn thing – every time I want to use it on a new installation. It’s fundamentally long in the tooth, and whilst good for its time is now legacy in my mind. Graphite is also several things – a data store (whisper), a web application (graphite-web), and a data collector (carbon). Since we’re using Grafana, the web front end that Graphite provides is redundant, and is where a lot of the installation problems come from.
So, I went with InfluxDB. Up to you, really.
Deploying Grafana and InfluxDB
These deploys are elegantly simple. I default to CentOS 6.6 in the lab, so you can follow the official install guides if you’re in the same boat. You could also drop Grafana onto other platforms, and configuration-management modules are available for it.
Note that my template image includes the Extra Packages for Enterprise Linux (EPEL) repo because it crops up as a requirement so often. I don’t recall if it is required for these packages, but I’m just throwing that out there.
Here are the two installation links I used for CentOS:
I suppose you could deploy both packages to the same server, but I ended up cloning a pair of servers in the lab and deploying each package separately. Both the Grafana VM and the InfluxDB VM have 1 vCPU, 1 GB of RAM, and 20 GB of thinly provisioned disk space. For a lab environment, this seemed more than adequate.
Assuming you’ve stood up the servers per the instructions, you’re almost done.
Configure InfluxDB
Browse to the IP or DNS name of the InfluxDB server using port 8083 and a login of root/root. Head to Databases and create a database with whatever name strikes your fancy. I went with spongebob because that’s how I roll. The details and shard space information can be left at defaults.
If you use the Explore Data link, there’s currently nothing in the database to explore. You could manually enter some data just for fun – in fact, I suggest tinkering a bit to understand how to use the query language, and read up on the documentation for the JSON payload. The query format is quite simple and likely something you won’t be using much in this walkthrough – we’re going to focus mainly on Grafana as a front end. However, knowing how to construct the payload is important.
In my lab, I took the lazy-man approach and created an admin user and password of grafana/grafana. I then baked that information into the URL of the POST. Alternatively, use basic authentication. Here’s an example URL:
$url = "http://172.16.20.236:8086/db/spongebob/series?u=grafana&p=grafana"
Note the following:
The API port is 8086
The database name, spongebob , is included in the URL
The POST body uses this structure:
Notice that no work was done up front to set up the series: the very act of posting to the API adds data points to the series name specified. Also notice that the points key:value pair uses a nested array, because you can batch data points and send multiple arrays at one time (using your own timestamp value). If you want to rely on the InfluxDB timestamp, send one array at a time, and the point in time at which the server receives the data will be used.
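As a sketch of that payload structure – using the 0.8-era JSON series format this walkthrough targets; the series name, column names, and values below are illustrative, not prescriptive:

```python
import json

# Build the JSON body for a POST to /db/<database>/series (InfluxDB 0.8 API).
def build_payload(series, rows, columns):
    return json.dumps([{
        "name": series,      # the series is created on first write
        "columns": columns,
        "points": rows,      # nested arrays: batch several points per POST
    }])

# Batched points with our own epoch timestamps ("time" column first):
body = build_payload(
    "esx1.glacier.local",
    [[1421587226, 41.5], [1421587236, 38.2]],
    ["time", "cpu"],
)
# POSTing `body` to the example $url would insert both points at once.
```

Omit the time column (and send one inner array at a time) if you prefer the server-assigned timestamp described above.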
That’s pretty much it for InfluxDB. You now have the back end stood up and ready to receive data. I’ve written scripts to collect data from vSphere hosts, VMs, SQL Server, and a NAS share used for Veeam backups. You can view those in my GitHub project to get started with data collection, use the ps1 scripts as examples, or even improve upon the repo and send me a pull request. I don’t think the project will become anything super polished, but I wanted to share what I’ve written thus far.
Configure Grafana
The Grafana web interface&is available by browsing the IP or DNS name of your Grafana server using port 3000. The default login is admin/admin. There are no dashboards out-of-the-box, so the first screen you see will be rather barren.
Let’s add InfluxDB to the configuration so that Grafana can display some data. Perform the following:
Start by selecting the Grafana logo on the top left corner to expand the menu
Choose Data Sources .
Select Add New .
Enter the information for your InfluxDB server, including the database name (mine is spongebob). Don’t forget that the API uses port 8086; don’t use port 8083 (the web interface).
I’d recommend making this data source default , as otherwise Grafana defaults to itself as the data source.
Building a Grafana Dashboard
It’s time to build a dashboard!
Select the Home button.
Choose +New to build a new dashboard, which I will walk through a bit below, or …
Choose Import to load a dashboard from a JSON file. You can load my sample dashboard from my GitHub repo.
Once you have a dashboard created, it’s time to make some graphs.
Select the green menu button on the left side to edit a Row.
Choose Add Panel .
Choose Graph .
If you want more rows, use the purple Add Row button.
To save your work, press the Save button.
A new graph will appear with a name of “no title (click here).” Do what it says:
Click on the Title (that says no title ).
Select Edit .
There’s a lot you can do here, so I’ll focus on a single use case to get your noodle juice flowing.
Building an ESXi CPU Utilization Graph
Graphs use data sources to create visuals. Because my InfluxDB source was added, and is being updated with live data from my various scripts, it’s really just a matter of finding the data and displaying it with Grafana. Each series you enter in Grafana will pull one or more series of data from InfluxDB (or another back-end data source) and use select statements, time groupings, and other query delineations to dynamically build a graph.
Here’s the data structure I’m using for the host performance points:
Because I use a set naming structure for my hosts (the $vmhost var), it seemed easiest to use a regular expression (regex) for the two metric series necessary to pull data as opposed to creating a query for each host individually. The nice thing about a regex is that it will automatically add new hosts to the chart without any help, so long as I continue to use the same name format.
Enclosing a string in forward slashes builds a regex against an InfluxDB back-end data source. The bracketed [0-9] portion allows the metric to pull data from any match that includes a single number in the name, such as esx1 and esx3. The hosts with a “d” in them are marked for dev work.
/esx[0-9].glacier.local/
/esxd[0-9].glacier.local/
For the alias, I’m using a variable and a static string: $0 cpu. The series name can be referenced as a series of strings split by periods. Thus esx1.glacier.local can be referenced by these variables:
$0 = esx1
$1 = glacier
$2 = local
And so on for series names with more periods.
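The same matching and splitting can be sketched in plain Python: re stands in for InfluxDB’s series regex (dots escaped here for strictness, which the looser pattern above also tolerates), and str.split shows where the $0/$1/$2 alias variables come from.

```python
import re

# Stand-in for the InfluxDB series regex /esx[0-9].glacier.local/.
pattern = re.compile(r"esx[0-9]\.glacier\.local")

series = ["esx1.glacier.local", "esx3.glacier.local", "esxd2.glacier.local"]
matched = [s for s in series if pattern.fullmatch(s)]
# esxd2 contains a "d" (a dev host), so the non-dev pattern skips it.
assert matched == ["esx1.glacier.local", "esx3.glacier.local"]

# Alias variables are just the series name split on periods:
parts = "esx1.glacier.local".split(".")
assert parts[0] == "esx1"     # $0 -> the alias "$0 cpu" renders as "esx1 cpu"
assert parts[1] == "glacier"  # $1
assert parts[2] == "local"    # $2
```

This is why a new host such as esx4 appears on the chart automatically: it matches the pattern and brings its own $0 with it.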
Finally, update the select box with the data point for this chart. Because it’s a CPU Utilization chart, I’ve chosen the CPU data point. The result is that each metric pulled by the regex will be esx# cpu. The remaining values can be left default for this example, as Grafana is smart enough to determine time groupings based on the data it receives. The chart now looks like this:
Make sure to save the dashboard when changes are made, or just browse away from the dashboard and discard changes if you don’t like how it looks and want to revert to the last save.
With a little time and work, you can have some pretty amazing graphs built into the dashboard.
What does the renaissance of cloud data management look like?
As data grows ever larger and more applications move to the cloud, database technology is heading in a new direction: the spread of big-data analytics and cloud applications is accelerating an enlightenment in data management. Compared with the MySQL era, the current landscape – the first appearance of cloud strategies, and the emergence of massive numbers of data objects and of distributed data management – genuinely feels like a renaissance.
[Author's note] On a recent business trip to the Netherlands, I found myself reflecting on the lasting influence of Baruch Spinoza. Spinoza was a great philosopher who, unlike his contemporaries, strongly opposed traditional theology. One of his sayings: "Do not be astonished at new ideas; for it is well known to you that a thing does not cease to be true because it is not accepted by many."
He held that, outside of particular contexts, things are not inherently good or bad. That brought to mind the profusion of databases that have evolved out of the dark "pre-cloud era": despite the variety on the market today, each database has its own purpose. We are free to choose among multiple databases, select layers on many fronts, decompose monoliths into microservices, and innovate in building modern cloud applications by leveraging a range of cloud data-management tools and techniques.
Before the cloud (BC)
How should we view the database's historical legacy? In the early days of computerized information processing, the strengths of SQL databases let them unify the data-management domain. Back then, a database was considered quite large if its data grew to a few gigabytes.
Then came the medieval period: in 1995 MySQL introduced its open-source licensing model, setting off the first chain reaction in data management.
As data grew larger and more applications moved to the cloud, database technology took a new turn: the spread of big-data analytics and cloud applications accelerated an enlightenment in data management. Compared with the MySQL era, the current landscape – the rise of cloud strategies and the emergence of massive data objects and distributed data management – genuinely feels like a renaissance.
The cloud era (CE)
In today's cloud era, data management is a complex web of databases, data stores and stacks. With the popularity of MongoDB, HBase, Cassandra, CouchDB, DynamoDB and others, Google, Facebook, Amazon and other companies have put large numbers of NoSQL databases into front-line use. Grasping every database, and why each one is used, is a huge challenge. To understand the underlying technology and gain a broad sense of which NoSQL database to use, the CAP theorem is a handy tool.
Modern cloud management strategy
Today, data management needs to be broken down along many different dimensions. Before picking a shiny new NoSQL database, we should carefully consider proven SQL databases. Before committing to a specific technology, it is essential to understand the short- and long-term business strategy for data management and to weigh competing priorities.
Whenever I evaluate a data-management strategy, I work through a checklist to help me decide:
What are the security and compliance considerations for the data?
What is the short- and long-term scalability of the data?
What are the data's types and uses?
How frequently does the schema change?
What causes latency in data retrieval?
What is the rate (velocity) of the data?
What is the variety of the data?
What are the availability requirements for the data?
What are the search requirements of the data store?
How is the data processed into information and insight?
How is the data analyzed and reported?
Is the data stored in a multi-tenant environment?
What is the optimal cost of data management?
What are the tiers of data management?
What are the lifecycle requirements for data management (backup/restore)?
Technologists are now comfortable using multiple databases within a cloud application, a trend accelerated by microservices and containerization. Most cloud application vendors also recognize the need to separate the data-management tiers: a UI caching tier, a CDN tier, a graph-analytics tier, a business tier, a business-analytics tier, a security tier, a reporting tier, an IoT device tier, and many more. Each tier can have its own data-management strategy, as long as the data is protected and accessed through interfaces such as REST APIs.
Database as a service (DBaaS)
This is an exciting time: a mature set of DBaaS options has emerged for both SQL and NoSQL databases. For example, Amazon Aurora offers MySQL- and PostgreSQL-compatible databases, while Instaclustr offers Cassandra as a managed service hosted on AWS.
Analytics as a service (AaaS)
All three major cloud providers offer analytics services. The biggest obstacle to adopting cloud analytics platforms is concern over data security; AWS and Azure provide a robust suite of data-analytics services to allay it. Azure's analytics services target SQL and offer powerful visualization through Power BI.
Graph databases
The shift to graph databases lets solutions benefit from faster graph-matching queries while accelerating adjacency-based search in network security, recommendation engines, IT operations, networking and more. For example, in a customer's IoT network-security product, we used Apache Spark and Cassandra as the analytics tier and orchestrated network-security data around MongoDB, but the resulting data was organized in a Neo4j graph database for further analysis of security threats. It is an excellent example of separating data management into tiers, with the best database applied to each of the very complex problems in a network-security product.
IoT databases
With the arrival of viable IoT applications, the large volumes of data collected and processed from the device tier need to be handled in a specialized way. We have successfully used InfluxDB, a relatively young and exciting open-source database, to handle time-series data efficiently. Suitable applications can therefore use InfluxDB and the associated TICK stack for data management:
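For illustration, a reading from the device tier can be encoded as an InfluxDB line-protocol string (the write format used by the newer InfluxDB releases the TICK stack is built around). The measurement, tag, and field names here are made up for the example:

```python
def to_line(measurement, tags, fields, ts_ns):
    """Format one point as InfluxDB line protocol:
    measurement,tag=val field=val timestamp."""
    tag_str = "".join(",%s=%s" % kv for kv in sorted(tags.items()))
    field_str = ",".join("%s=%g" % kv for kv in sorted(fields.items()))
    return "%s%s %s %d" % (measurement, tag_str, field_str, ts_ns)

line = to_line(
    "weather",                          # measurement (illustrative)
    {"station": "s1"},                  # tags, indexed
    {"humidity": 0.63, "temp": 21.5},   # numeric fields
    1600000000000000000,                # nanosecond timestamp
)
# -> "weather,station=s1 humidity=0.63,temp=21.5 1600000000000000000"
```

One such line per reading, batched and POSTed to the database's write endpoint, is all the ingest plumbing a small sensor fleet needs.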
[Figure: TICK stack diagram]
A full-coverage monitoring system built on InfluxDB + Graphite + Grafana + StatsD + collectd + Elasticsearch + Zabbix. The screenshot shows JVM, API, socket and other metrics from a kamon + StatsD + Graphite + Grafana setup.
--------------------------

In this article I’m going to look at collecting time-series metrics into the InfluxDB database and visualising them in snazzy Grafana dashboards. The datasets I’m going to use are OS metrics (CPU, disk, etc.) and the DMS metrics from OBIEE, both of which are collected using the support for a Carbon/Graphite listener in InfluxDB.

The Dynamic Monitoring Service (DMS) in OBIEE is one of the best ways of being able to peer into the internals of the product and find out quite what’s going on. Whether performing diagnostics on a specific issue or just generally monitoring to make sure things are ticking over nicely, using the DMS metrics you can level up your OBIEE sysadmin skills beyond what you’d get with Fusion Middleware Control out of the box. In fact, the DMS metrics are what you can get access to with Cloud Control 12c (EM12c) – but for that you need additional licensing. In this article we’re going to see how to easily set up our own DMS dashboard. N.B. if you’ve read my previous articles, what I write here (use InfluxDB/Grafana) supersedes what I wrote in those (use Graphite) as my recommended approach to working with arbitrary time-series metrics.

Overview

To get the DMS data out of OBIEE we’re going to use the obi-metrics-agent tool that Rittman Mead open-sourced last year. This connects to OPMN and pulls the data out. We’ll store the data in InfluxDB, and then visualise it in Grafana. Whilst not mandatory for the DMS stats, we’ll also set up collectl so that we can show OS stats alongside the DMS ones.

InfluxDB is a database, but unlike an RDBMS such as Oracle – good for generally everything – it is what’s called a time-series database (TSDB). This category of database focuses on storing data for a series, holding a given value for a point in time. Generally they’re optimised for handling large quantities of inbound metrics (think Internet of Things), rather than necessarily excelling at handling changes to the data (update/delete) – but that’s fine here, since metric events in the past don’t generally change.

I’m using InfluxDB here for a few reasons:

Grafana supports it as a source, with lots of active development for its specific features.

It’s not Graphite. Whilst I have spent many a happy hour using Graphite I’ve spent many a frustrating day and night trying to install the damn thing – every time I want to use it on a new installation. It’s fundamentally long in the tooth, and whilst good for its time is now legacy in my mind. Graphite is also several things – a data store (whisper), a web application (graphite-web), and a data collector (carbon). Since we’re using Grafana, the web front end that Graphite provides is redundant, and is where a lot of the installation problems come from.

It does one thing well. Yes, I could store time-series data in Oracle/MySQL/DB2/yada yada, but InfluxDB does one thing (storing time-series metrics) and one thing only, very well and very easily, with almost no setup.

For an eloquent discussion of time-series databases, read the excellent articles by Baron Schwartz.

Grafana

On the front end we have Grafana, a web application that is rapidly becoming accepted as one of the best time-series metric visualisation tools available. It is a fork of Kibana, and can work with data held in a variety of sources including Graphite and InfluxDB. To run Grafana you need to have a web server in place – I’m using Apache just because it’s familiar, but Grafana probably works with whatever your favourite is too.

OS

This article is based around the SampleApp v406 image, but should work without modification on any OL/CentOS/RHEL 6 environment. InfluxDB and Grafana run on both RHEL- and Debian-based Linux distros, as well as Mac OS. The specific setup steps detailed here might need some changes according to the OS.

Getting Started with InfluxDB

InfluxDB Installation and Configuration as a Graphite/Carbon Endpoint

InfluxDB is a doddle to install. Simply download
the rpm, unzip it, and run. BOOM. Compared to Graphite, this makes it a massive winner already.

wget <influxdb-rpm-url>
sudo rpm -ivh influxdb-latest-1.x86_64.rpm

This downloads and installs InfluxDB into /opt/influxdb and configures it as a service that will start at boot time.

Before we go ahead and start it, let’s configure it to work with existing applications that are sending data to Graphite using the Carbon protocol. InfluxDB can support this, and it enables you to literally switch Graphite out in favour of InfluxDB with no changes required on the source.

Edit the configuration file that you’ll find at /opt/influxdb/shared/config.toml and locate the line that reads:

[input_plugins.graphite]

In v0.8.8 this is at line 41. In the following stanza set the plugin to enabled, specify the listener port, and give the name of the database that you want to store data in, so that it looks like this:

# Configure the graphite api
[input_plugins.graphite]
enabled = true
# address = "0.0.0.0" # If not set, is actually set to bind-address.
port = 2003
database = "carbon"  # store graphite data in this database
# udp_enabled = true # enable udp interface on the same port as the tcp interface

Note that the file is owned by a user created at installation time, influxdb, so you’ll need to use sudo to edit it.

Now start up InfluxDB:

sudo service influxdb start

You should see it start up successfully:

[oracle@demo influxdb]$ sudo service influxdb start
Setting ulimit -n 65536
Starting the process influxdb   [ OK ]
influxdb process was started    [ OK ]

You can tail the InfluxDB log file and confirm that the Graphite/Carbon listener has started:

[oracle@demo shared]$ tail -f /opt/influxdb/shared/log.txt
...
[INFO] Starting admin interface on port 8083
[INFO] Starting Graphite Listener on 0.0.0.0:2003
[INFO] Starting Http Api server on port 8086
...

At this point, if you’re using the stock SampleApp v406 image, or indeed any machine with a firewall configured, you need to open up ports 8083 and 8086 for InfluxDB. Edit /etc/sysconfig/iptables (using sudo) and add:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 8083 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8086 -j ACCEPT

immediately after the existing ACCEPT rules. Restart iptables to pick up the change:

sudo service iptables restart

If you now go to http://localhost:8083
(replace localhost with the hostname of the server on which you’ve installed InfluxDB), you’ll get the InfluxDB web interface. It’s fairly rudimentary, but suffices just fine.

Log in as root/root, and you’ll see a list of nothing much, since we’ve not got any databases yet. You can create a database from here but, for repeatability and a general preference for the command line, here is how to create a database called carbon with the HTTP API, called from curl (changing localhost if appropriate):

curl -X POST 'http://localhost:8086/db?u=root&p=root' -d '{"name": "carbon"}'

Simple, huh? Now hit refresh on the web UI and, after logging back in again, you’ll see the new database. You can call the database anything you want; just make sure what you create in InfluxDB matches what you put in the configuration file for the Graphite/Carbon listener.

Now we’ll create a second database that we’ll need later on to hold the internal dashboard definitions from Grafana:

curl -X POST 'http://localhost:8086/db?u=root&p=root' -d '{"name": "grafana"}'

You should now have two InfluxDB databases, primed and ready for data.

Validating the InfluxDB Carbon Listener

To make sure that InfluxDB is accepting data on the Carbon listener, use the netcat
(nc) utility to send some dummy data to it:

echo "example.foo.bar 3 `date +%s`" | nc localhost 2003

Now go to the InfluxDB web interface and click Explore Data >>. In the query field enter:

list series

To see the first five rows of the data itself, use the query:

select * from /.*/ limit 5

InfluxDB Queries

You’ll notice that what we’re doing here (“SELECT … FROM …”) looks pretty SQL-like. Indeed, InfluxDB supports a SQL-like query language, which will feel familiar if you’re coming from an RDBMS background. One thing I would point out is that the apparently odd /.*/ constructor for the “table” is in fact a regular expression (regex) matching the series for which to return values. We could have written select * from example.foo.bar, but the .* wildcard enclosed in the / / regex delimiters is a quick way to check all the series we’ve got.

Going off on a bit of a tangent (but hey, why not), let’s write a quick Python script to stick some randomised data into InfluxDB. Paste the following into a terminal window to create the script and make it executable:

cat > ~/test_carbon.py <<EOF
#!/usr/bin/env python
import socket
import time
import random
import sys

CARBON_SERVER = sys.argv[1]
CARBON_PORT = int(sys.argv[2])

while True:
    message = 'test.data.foo.bar %d %d\n' % (random.randint(1, 20), int(time.time()))
    print 'sending message:\n%s' % message
    sock = socket.socket()
    sock.connect((CARBON_SERVER, CARBON_PORT))
    sock.sendall(message)
    time.sleep(1)
    sock.close()
EOF
chmod u+x ~/test_carbon.py

And run it (hit Ctrl-C when you’ve had enough):

$ ~/test_carbon.py localhost 2003
sending message:
test.data.foo.bar ...
sending message:
test.data.foo.bar ...
[...]

Now we’ve got two series in InfluxDB:

example.foo.bar – sent using nc
test.data.foo.bar – sent using the Python script

Let’s go back to the InfluxDB web UI and have a look at the new data, using the literal series name in the query:

select * from test.data.foo.bar

Well, fancy that – InfluxDB has done us a nice little graph of the data. But more to the point, we can see all the values in the series. And a regex shows us both series, matching on the ‘foo’ part of the name:

select * from /foo/ limit 3

Let’s take it a step further. InfluxDB supports aggregate functions, such as max, min, and so on:

select count(value), max(value), mean(value), min(value) from test.data.foo.bar

Whilst we’re at it, let’s bring in another way to get data out – the HTTP API, just like we used for creating the database above. Given a query, it returns the data in JSON format. There’s a nice little utility called jq which we can use to pretty-print the JSON, so let’s install that first:

sudo yum install -y jq

and then call the InfluxDB API, piping the return into jq:

curl --silent --get 'http://localhost:8086/db/carbon/series?u=root&p=root' --data-urlencode "q=select count(value), max(value), mean(value), min(value) from test.data.foo.bar" | jq '.'

The result should look something like this:

[
  {
    "name": "test.data.foo.bar",
    "columns": [...],
    "points": [...]
  }
]

We could have used the web UI for this but, to be honest, the inclusion of the graphs just confuses things, because there’s nothing to graph and the table of data that we want gets hidden lower down the page.

Setting up obi-metrics-agent to Send OBIEE DMS Metrics to InfluxDB

obi-metrics-agent is an open-source tool from Rittman Mead that polls your OBIEE system to pull out all the lovely juicy DMS
metrics from it. It can write them to file, insert them into an RDBMS or, as we’re using it here, send them to a Carbon-compatible endpoint (such as Graphite or, in our case, InfluxDB).

To install it, simply clone the git repository (I’m putting it in /opt, but you can put it where you want):

# Install pre-requisites
sudo yum install -y libxml2-devel python-devel libxslt-devel python-pip
sudo pip install lxml
# Clone the git repository
git clone /RittmanMead/obi-metrics-agent.git ~/obi-metrics-agent
# Move it to the /opt folder
sudo mv ~/obi-metrics-agent /opt

and then run it:

cd /opt/obi-metrics-agent

./obi-metrics-agent.py \
  --opmnbin /app/oracle/biee/instances/instance1/bin/opmnctl \
  --output carbon \
  --carbon-server localhost

I’ve used the line-continuation character \ here to make the statement clearer. Make sure you update opmnbin with the correct path to your OPMN binary as necessary, and localhost if your InfluxDB server is not local to where you are running obi-metrics-agent.

After running this you should be able to see the metrics in InfluxDB. For example:

select * from /Oracle_BI_DB_Connection_Pool\..+\.*Busy/ limit 5

Setting up collectl to Send OS Metrics to InfluxDB

collectl is an excellent tool written by Mark Seger that reports on all sorts of OS-level metrics. It can run interactively, write metrics to file, and/or send them on to a Carbon endpoint such as InfluxDB.

Installation is a piece of cake, using the EPEL yum repository:

# Install the EPEL yum repository
sudo rpm -Uvh <epel-release-rpm-url>
# Install collectl
sudo yum install -y collectl
# Set it to start at boot
sudo chkconfig --level 35 collectl on

Configuration to enable logging to InfluxDB is a simple matter of modifying the /etc/collectl.conf configuration file, either by hand or using this pair of sed statements to do it automagically. The localhost in the second sed command is the hostname of the server on which InfluxDB is running:

sudo sed -i.bak -e 's/^DaemonCommands/#DaemonCommands/g' /etc/collectl.conf
sudo sed -i -e '/^#DaemonCommands/a DaemonCommands = -f \/var\/log\/collectl -P -m -scdmnCDZ --export graphite,localhost:2003,p=.os,s=cdmnCDZ' /etc/collectl.conf

If you want to log more frequently than every ten seconds, make this change (for 5-second intervals here):

sudo sed -i -e '/#Interval =     10/a Interval = 5' /etc/collectl.conf

Restart collectl for the changes to take effect:

sudo service collectl restart

As above, a quick check through the web UI should confirm we’re getting data through into InfluxDB. Note that the very handy regex lets us be lazy with the series naming: we know there is a metric with ‘cputotal’ in its name, so /cputotal/ matches anything containing it.

Installing and Configuring Grafana

Like InfluxDB, Grafana is easy to install, although it does require a bit of setting up. It needs to be hooked into a web server, as well as configured to connect to a source for metrics and a store for dashboard definitions.

First, download the binary (this is based on v1.9.1, but releases are frequent, so check the downloads page
for the latest):

cd ~
wget <grafana-1.9.1-zip-url>

Unzip it and move it to /opt:

unzip grafana-1.9.1.zip
sudo mv grafana-1.9.1 /opt

Configuring Grafana to Connect to InfluxDB

We need to do a bit of configuration, so first create the configuration file based on the template provided:

cd /opt/grafana-1.9.1
cp config.sample.js config.js

Now open config.js in your favourite text editor. Grafana supports various sources for metrics data, as well as various targets to which it can save the dashboard definitions. The configuration file helpfully comes with configuration elements for many of these, all commented out. Uncomment the InfluxDB stanzas and amend them as follows:

datasources: {
influxdb:{
type:'influxdb',
url:"http://sampleapp:8086/db/carbon",
username:'root',
password:'root',
type:'influxdb',
url:"http://sampleapp:8086/db/grafana",
username:'root',
password:'root',
grafanaDB:true
},},Points to note:The servername is the server host as you will be accessing it from your web browser. So whilst the configuration we did earlier was all based around ‘localhost’, since it was just communication within components on the same server, the Grafana configuration is what the web application from your web browser uses. So unless you are using a web browser on the same machine as where InfluxDB is running, you must put in the server address of your InfluxDB machine here.The default InfluxDB username/password is root/root, not admin/adminEdit the database names in the url, either as shown if you’ve followed the same names used earlier in the article or your own versions of them if not.Setting Grafana up in ApacheGrafana runs within a web server, such as Apache or nginx. Here I’m using Apache, so first off install it: 1sudo yum install-yhttpdAnd then set up an entry for Grafana in the configuration folder by pasting the following to the command line: 123456789101112cat&/tmp/grafana.conf&&EOFAlias/grafana/opt/grafana-1.9.1 &Location/grafana&Order deny,allowAllow from127.0.0.1Allow from::1Allow from all&/Location&EOF sudo mv/tmp/grafana.conf/etc/httpd/conf.d/grafana.confNow restart Apache: 1sudo service httpd restartAnd if the gods of bits and bytes are smiling on you, when you go to
you should see Grafana's home page.

Note that as with InfluxDB, you may well need to open your firewall for Apache, which is on port 80 by default. Follow the same iptables instructions as above to do this.

Building Grafana Dashboards on Metrics Held in InfluxDB

So now we've set up our metric collectors, sending data into InfluxDB. Let's see how to produce some swanky dashboards in Grafana.

Grafana has a concept of Dashboards, which are made up of Rows, and within those, Panels. A Panel can hold a metric graph, but also static text or a single-figure metric. To create a new dashboard click the folder icon and select New. You get a fairly minimal blank dashboard. On the left you'll notice a little green tab: hover over it and it pops out to form a menu box, from where you can choose the option to add a graph panel.

Grafana Graph Basics

On the blank graph that's created, click on the title (with the accurate text "click here") and select edit from the options that appear. This takes you to the graph editing page, which looks equally blank, but from here we can start adding metrics. In the box labelled series start typing Active_Sessions and notice that Grafana autocompletes it to any available metrics matching this. Select Oracle_BI_PS_Sessions.Active_Sessions and your graph should now display the metric.

To change the time period shown in the graph, use the time picker at the top of the screen. You can also click and drag ("brushing") on any graph to select a particular slice of time. So, set the time filter to 15 minutes ago, and from the Auto-refresh submenu set it to refresh every 5 seconds.
Now log in to your OBIEE instance, and you should see the Active Sessions value increase (one per session login).

To add another metric to the graph you can click on Add query at the bottom right of the page, or if it's closely related to one you've already defined, click on the cog next to it and select duplicate. In the second query add Oracle_BI_General.Total_sessions (remember, you can just type part of the string and Grafana autocompletes based on the metric series stored in InfluxDB). Run a query in OBIEE to cause sessions to be created on the BI Server, and you should now see the Total sessions value increase.

To save the graph, and the dashboard, click the Save icon. To return to the dashboard to see how your graph looks alongside others, or to add a new graph, click on Back to dashboard.

Grafana Graph Formatting

Let's now take a look at the options for modifying the styling of the graph. There are several tabs/sections to the graph editor – General, Metrics (the default), Axes & Grid, and Display Styles. The first obvious thing to change is the graph title, which can be done on the General tab. From here you can also change how the graph is sized on the dashboard, using the Span and Height options. A new feature in recent versions of Grafana is the ability to link dashboards to help with analysis paths – guided navigation, as we'd call it in OBIEE – and it's from the General tab that you define this.

On the Metrics tab you can specify what text to use in the legend. By default you get the full series name, which is usually too big to be useful, as well as containing a lot of redundant, repeating text. You can either specify literal text in the alias field, or you can use segments of the series name, identified by $x where x is the zero-based segment number.
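To see which segment number picks out which part of a series name, here's a quick sketch. The series name is one used earlier in the article (the obi11-01.OBI prefix is an assumed host/component prefix from the DMS hierarchy); note that awk fields are 1-based while Grafana's segments are zero-based, hence the i-1:

```shell
# Map each dot-separated segment of a series name to Grafana's zero-based $x
echo "obi11-01.OBI.Oracle_BI_PS_Sessions.Active_Sessions" |
  awk -F'.' '{ for (i = 1; i <= NF; i++) printf "Grafana $%d = %s\n", i-1, $i }'
```

So an alias of $3 here would label the series simply as Active_Sessions.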
In the example I've hardcoded the literal value for the second metric query, and used a dynamic segment name for the first.

On the Axes & Grid tab you can specify the obvious stuff, like min/max scales for the axes and the scale to use (bits, bytes, etc). To put metrics on the right axis (and to change the colour of the metric line too) click on the legend line, and from there select the axis/colour as required. You can also set thresholds to overlay on the graph (to highlight warning/critical values, for example), as well as customise the legend to show an aggregate value for each metric, show it in a table, or not at all.

The last tab, Display Styles, has even more goodies. One of my favourite new additions to Grafana is the Tooltip: enabling this gives you a tooltip when you hover over the graph, displaying the value of all the series at that point in time. You can also change the presentation of the graph, which by default is a line, adding bars and/or points, as well as changing the line width and fill: solid fill, bars only, points with a translucent fill, and so on.

Advanced InfluxDB Query Building in Grafana

Identifying Metric Series with RegEx

In the example above there were two fairly specific metrics that we wanted to report against. What you will find is much more common is wanting to graph a whole set of metrics from the same 'family'. For example, OBIEE DMS metrics include a great deal of information about each Connection Pool that's defined. They're all in a hierarchy that looks like this:

```
obi11-01.OBI.Oracle_BI_DB_Connection_Pool.Star_01_-_Sample_App_Data_ORCL_Sample_Relational_Connection
```

Under which you've got:

```
Capacity
Current Connection Count
Current Queued Requests
```

and so on. So rather than creating an individual metric query for each of these (as we did for the two session metrics previously), we'll use InfluxDB's rather smart regex method for identifying metric series in a query.
And because Grafana is awesome, writing the regex isn't as painful as it could be, because the autocomplete validates your expression in realtime. Let's get started.

First up, let's work out the root of the metric series that we want. In this case it's the orcl connection pool, so in the series box enter /orcl/. The / delimiters indicate that it is a regex query, and as soon as you enter the second / you'll get the autocomplete showing you the matching series:

```
/orcl/
```

If you scroll down the list you'll notice there are other metrics in there besides Connection Pool ones, so let's refine our query a bit:

```
/orcl_Connection_Pool/
```

That's better, but we've now got all the Connection Pool metrics, which – whilst fascinating to study (no, really) – complicate our view of the data a bit. So let's pick out just the ones we want. First up we'll put in the dot that precedes each of the final identifiers of the series (.Capacity, .Current Connection Count, etc). A dot is a special character in regex, so we need to escape it:

```
/orcl_Connection_Pool\./
```

Now let's check we're on the right lines by matching just Capacity:

```
/orcl_Connection_Pool\.Capacity/
```

Excellent. So we can now add in more permutations, with a bracketed list of options separated by the pipe (regex OR) character:

```
/orcl_Connection_Pool\.(Capacity|Current)/
```

We can use a wildcard .* for expressions that are not directly after the dot that we specified in the match pattern.
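As a sanity check, the same narrowing logic can be tested outside Grafana with grep – the series names below are made-up stand-ins modelled on the DMS metrics discussed here:

```shell
# Hypothetical series names standing in for the real DMS metric list
series='obi11-01.OBI.Oracle_BI_DB_Connection_Pool.Star_01_orcl_Connection_Pool.Capacity
obi11-01.OBI.Oracle_BI_DB_Connection_Pool.Star_01_orcl_Connection_Pool.Current_Connection_Count
obi11-01.OBI.Oracle_BI_DB_Connection_Pool.Star_01_orcl_Connection_Pool.Peak_Queued_Requests
obi11-01.OBI.Oracle_BI_General.Total_sessions'

# Same pattern as in the series box: escaped dot, then alternated suffixes
printf '%s\n' "$series" | grep -E 'orcl_Connection_Pool\.(Capacity|Current)'
```

Only the Capacity and Current_Connection_Count lines come back – exactly the filtering Grafana's autocomplete shows you live as you type.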
For example, let's add any metric that includes Queued:

```
/orcl_Connection_Pool\.(Capacity|Current|.*Queued)/
```

But now we've got a rather long list of matches, so let's refine the regex to narrow it down:

```
/orcl_Connection_Pool\.(Capacity|Current|Peak.*Queued).+(Requests|Connection)/
```

(Something else I tried before this was a regex negative look-behind, but it looks like Go – which InfluxDB is written in – doesn't support it.)

Setting the alias to $4, and the legend to include values in a table format, gives us a nice compact result.

Now, to be honest, in this specific example I could have created four separate metric queries in a fraction of the time it took to construct that regex. That doesn't detract from the usefulness and power of regex, though; it simply illustrates the point of using the right tool for the right job – where there are a few easily identified and static metrics, a manual selection may be quicker.

Aggregates

By default Grafana will request the mean of a series at the defined grain of time from InfluxDB. The grain of time is calculated automatically based on the time window shown in your graph. If you're collecting data every five seconds, and build a graph to show a week's worth of data, showing all 120,960 data points will end up in a very indistinct line. So instead Grafana generates an InfluxDB query that rolls the data up to a more sensible interval – in the case of a week's worth of data, every 10 minutes.

You can see, and override, the time grouping in the metric panel. By default it's dynamic, and you can see the current value in use shown in lighter text. You can also set an optional minimum time grouping in the second "group by time" box (beneath the first). This is a time grouping under which Grafana will never go, so if you always want to roll up to at least, say, a minute (but higher if the duration of the graph requires it), you'd set that here.

So I've said that InfluxDB can roll up the figures – but how does it roll up multiple values into one?
By default, it takes the mean of all the values. Depending on what you're looking at this can be less than desirable, because you may miss important spikes and troughs in your data. So you can change the aggregate rule to look at the maximum value, minimum, and so on, by clicking on the aggregation in the metric panel. Comparing the same series of data shown as 5-second samples rolled up to a minute using the mean, max, and min aggregate rules makes the difference clear. For a look at how all three series can be better rendered together, see the discussion of Series Specific Overrides later in this article.

You can also use aggregate functions with measures that are not simple point-in-time values. For example, with an incrementing/accumulating measure (a counter such as "number of requests since launch") you actually want to graph the rate of change – the delta between each point. To do this, use the derivative function. In my comparison graph the default aggregation (mean, in green) is shown against derivative, in yellow: one is in effect the "actual" value of the measure, the other is the rate of change, which is much more useful to see in a time series. Note that if you are using derivative you may need to fix the group-by time to the grain at which you are storing data. In my example I am storing data every 5 seconds, but if the default time grain on the graph is 1s then it won't show the derivative data.

See the InfluxDB documentation for more details on the available aggregate functions. If you want to use an aggregation (or any query) that isn't supported in the Grafana interface, simply click on the cog icon and select Raw query mode, from where you can customise the query to your heart's content.

Drawing inverse graphs

As mentioned just above, you can customise the query sent to InfluxDB, which means you can do this neat trick to render multiple related series that would otherwise overlap, by inverting one of them.
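As an aside on the derivative function just mentioned: it is essentially the difference between consecutive samples of a cumulative counter. A quick sketch with made-up numbers:

```shell
# A hypothetical "requests since launch" counter, sampled every 5 seconds;
# derivative = the delta between each sample and the previous one
printf '%s\n' 100 140 150 230 |
  awk 'NR > 1 { print $1 - prev } { prev = $1 }'
```

The counter only ever climbs, but the deltas (40, 10, 80) show the workload varying – which is the shape you actually want on a time-series graph.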
In this example I've got the network I/O drawn conventionally. But since metrics like network I/O and disk I/O have a concept of give and take, it feels much more natural to see the input as 'positive' and the output as 'negative' – which, certainly for my money, makes it easier to see at a glance whether we've got data coming or going, and at what volume. To implement this, set up your series as usual, then for the series you want to invert click on the cog icon and select Raw query mode. Then in place of

```
mean(value)
```

put

```
mean(value*-1)
```

Series Specific Overrides

The presentation options that you specify for a graph will by default apply to all series shown in the graph. As we saw previously, you can change the colour, width, fill etc of a line, or render the graph as bars and/or points instead. This is all good stuff, but it presumes that all measures are created equal – that every piece of data on the graph has the same meaning and importance. Often we'll want to change how a particular set of data is displayed, and we can use Series Specific Overrides in Grafana to do that.

For example, in a graph showing the number of busy connections and the available capacity, the actual (Busy Connections) is the piece of data we want to see at a glance, against the context of the available Capacity. So by setting up a Series Specific Override we can change the formatting of each line individually – calling out the actual (thick green) and making the threshold more muted (purple).

To configure a Series Specific Override go to the Display Styles panel and click Add series override rule. Pick the specific series, or use a regex to identify it, and then use the + button to add formatting options. A very useful formatting option is Z-index, which enables you to define the layering on the graph so that a given series is rendered on top of (or below) another. To bring something to the very front use a Z-index of 3; for the very back use -3.
Series Specific Overrides are also a good way of dynamically assigning multiple Y axes.

Another great use of Series Specific Overrides is to show the min/max range for data as a shaded area behind the main line, thus providing more context for aggregated data. I discussed above how Grafana can get InfluxDB to roll up (aggregate) values across time periods to make graphs more readable when shown for long time frames – and how this can mask data exceptions. If you only show the mean, you miss small spikes and troughs; if you only show the max or min, you over- or under-count the actual impact of the measure. But we can have the best of all worlds! The starting points are unappealing: showing just the mean misses the subtleties of the data series, while showing all three versions of the measure at once is ugly and unusable. Instead, let's bring out the mean, but still show it in the context of the range of values within the aggregate.

I hope you'd agree that this is a much cleaner and clearer way of presenting the data. To do it we need two steps:

- Make sure that each metric has an alias. This is used in the label, but importantly it is also used in the next step to identify each data series. You can skip this bit if you really want, and regex the series to match directly in the next step, but setting an alias is much easier.
- On the Display Styles tab click Add series override rule at the bottom of the page. In the alias or regex box you should see your aliases listed. Select the one which is the maximum series, then choose the formatting option Fill below to and select the minimum series.

You'll notice that Grafana automagically adds a second rule to disable lines for the minimum series, alongside the existing maximum-series rule. Optionally, add another rule for your mean series, setting the Z-index to 3 to bring it right to the front. All pretty simple really, and a nice result.

Variables in Grafana (a.k.a.
Templating)

In lots of metric series there are often going to be groups of measures associated with recurring instances of a parent. For example, CPU details for multiple servers, or – in the OBIEE world – connection pool details for multiple connection pools:

```
centos-base.os.cputotals.user
db12c-01.os.cputotals.user
gitserver.os.cputotals.user
media02.os.cputotals.user
monitoring-01.os.cputotals.user
```

and so on. Instead of creating a graph for each permutation, or modifying the graph each time you want to see a different instance, you can use Templating – basically, creating a variable that can be incorporated into query definitions.

To create a template you first need to enable templating for the dashboard, using the cog icon in the top-right of the dashboard. Then open the Templating option from the menu opened by clicking on the cog on the left side of the screen. Now set up the name of the variable, and specify a full (not partial, as you would in the graph panel) InfluxDB query that will return all the values for the variable – or rather, the list of all series from which you're going to take the variable values.

Let's have a look at an example. Within the OBIEE DMS metrics you have details about the thread pools within the BI Server. There are different thread pool types, and it is that type that I want to store in the variable.
Here's a snippet of the series:

```
[...]
obi11-01.OBI.Oracle_BI_Thread_Pool.DB_Gateway.Peak_Queued_Requests
obi11-01.OBI.Oracle_BI_Thread_Pool.DB_Gateway.Peak_Queued_Time_milliseconds
obi11-01.OBI.Oracle_BI_Thread_Pool.DB_Gateway.Peak_Thread_Count
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Accumulated_Requests
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Average_Execution_Time_milliseconds
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Average_Queued_Requests
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Average_Queued_Time_milliseconds
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Avg_Request_per_sec
[...]
```

Looking down the list, it's the DB_Gateway and Server values that I want to extract. First up is some regex to return the series with the thread pool name in:

```
/.*Oracle_BI_Thread_Pool.*/
```

And now build it as part of an InfluxDB query:

```
list series /.*Oracle_BI_Thread_Pool.*/
```

You can validate this against InfluxDB directly using the web UI for InfluxDB, or curl, as described much earlier in this article. Put the query into the Grafana template definition and hit the green play button; you'll get a list back of all series returned by the query. Now we want to extract just the thread pool names, and we do this using a regex capture group ( ):

```
/.*Oracle_BI_Thread_Pool\.(.*)\./
```

Hit play again and the results from the first query are parsed through the regex, and you should have just the values you need. If the values are likely to change (for example, Connection Pool names will change in OBIEE depending on the RPD), make sure you select Refresh on load. Click Add and you're done.

You can also define variables with fixed values, which is good if they're never going to change – or if they might, but you've not got your head around regex yet.
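You can check what a capture group like this will pull out before putting it in Grafana. Here's a sketch using sed over two of the series above; I capture [^.]* ("everything up to the next dot"), a slightly stricter equivalent of the (.*)\. used in the template:

```shell
# Extract the thread-pool name: the segment after Oracle_BI_Thread_Pool.
printf '%s\n' \
  'obi11-01.OBI.Oracle_BI_Thread_Pool.DB_Gateway.Peak_Thread_Count' \
  'obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Accumulated_Requests' |
  sed -E 's/.*Oracle_BI_Thread_Pool\.([^.]*)\..*/\1/' | sort -u
```

The output is just DB_Gateway and Server – the distinct values the template variable should end up holding.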
For fixed values, simply change the Type to Custom and enter comma-separated values. To use the variable, reference it prefixed with a dollar sign in the metric definition, or in the title. To change the value selected, just use the dropdown at the top of the screen.

Annotations

Another very nice feature of Grafana is Annotations. These are overlays on each graph at a given point in time to provide additional context to the data. How I use them, when analysing test data, is to be able to see which script I ran when. There are two elements to Annotations – setting them up in Grafana, and getting the data into the backend (InfluxDB in this case, but they work with other data sources such as Graphite too).

Storing an Annotation

An annotation is nothing more than some time series data – typically a string at a given point in time, rather than a continually changing value (measure) over time. To store it, just chuck the data at InfluxDB and it creates the necessary series. In this example I'm using a series called events, but it could be called foobar for all it matters. Choose whichever write method is most suitable for the event that you want to record and display as an annotation. I'm running some bash-based testing, so curl fits well here, but if you were using a Python program you could use a client library, and so on.

Sending data with curl is easy, and looks like this:

```shell
curl -X POST -d '[{"name":"events","columns":["id","action"],"points":[["big load test","start"]]}]' 'http://monitoring-:8086/db/carbon/series?u=root&p=root'
```

The main bit of interest, other than the obvious server name and credentials, is the JSON payload that we're sending. Pulling it out and formatting it a bit more nicely:

```json
[
  {
    "name": "events",
    "columns": [
      "test-id",
      "action"
    ],
    "points": [
      [
        "big load test",
        "start"
      ]
    ]
  }
]
```

So the series ("table") we're loading is called events, and we're going to store an entry for this point in time with two columns, test-id and action, holding the values big load test and start respectively. Interestingly – and this is something very powerful – InfluxDB's schema can evolve in a way that no traditional RDBMS could. Never mind that we've not had to define events before loading it; we could even load it at subsequent time points with more columns if we want to, simply by sending them in the data payload.

Coming back to real-world usage, we want to make the load as dynamic as possible, so with a few variables and a bit of bash magic we have something like this, which will automatically load to InfluxDB the start and end time of every load test that gets run, along with the name of the script that ran it and the host on which it ran:

```shell
INFLUXDB_HOST=monitoring-
INFLUXDB_PORT=8086
INFLUXDB_USER=root
INFLUXDB_PW=root
HOSTNAME=$(hostname)
SCRIPT=`basename $0`

curl -X POST -d '[{"name":"events","columns":["host","id","action"],"points":[["'"$HOSTNAME"'","'"$SCRIPT"'","start"]]}]' "http://$INFLUXDB_HOST:$INFLUXDB_PORT/db/carbon/series?u=$INFLUXDB_USER&p=$INFLUXDB_PW"

echo 'Load testing bash code goes here. For now let us just go to sleep'
sleep 60

curl -X POST -d '[{"name":"events","columns":["host","id","action"],"points":[["'"$HOSTNAME"'","'"$SCRIPT"'","end"]]}]' "http://$INFLUXDB_HOST:$INFLUXDB_PORT/db/carbon/series?u=$INFLUXDB_USER&p=$INFLUXDB_PW"
```

Displaying annotations in Grafana

Once we've got a series ("table") in InfluxDB with our events in, pulling them through into Grafana is pretty simple.
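Going back to the payload for a moment: the nested single/double quoting in those curl commands is easy to get wrong, so it's worth validating the assembled JSON before sending it. A sketch, with stand-in host and script names:

```shell
# Stand-in values; in the real script these come from hostname / basename $0
HOST=obi11-01
SCRIPT=loadtest.sh

# Same quoting pattern as the curl payload above
PAYLOAD='[{"name":"events","columns":["host","id","action"],"points":[["'"$HOST"'","'"$SCRIPT"'","start"]]}]'

# Confirm the shell quoting produced well-formed JSON before curl-ing it
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"
```

If json.tool complains, the quoting is broken and InfluxDB would have rejected the write anyway.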
Let's first check the data we've got, by going to the InfluxDB web UI (http://influxdb:8083) and, from the Explore Data page, running a query against the series we've loaded:

```sql
select * from events
```

The time value is in epoch format, and the remaining values are whatever you sent.

Now in Grafana enable Annotations for the dashboard (via the cog in the top-right corner). Once enabled, use the cog in the top-left corner to open the menu, and from there select the Annotations dialog. Click on the Add tab. Give the event group a name, and then the InfluxDB query that pulls back the relevant data. All you need to do is take the query you used above to test the data and append a $timeFilter predicate, so that only events for the time window currently being shown are returned:

```sql
select * from events where $timeFilter
```

Click Add, and then set your time window to include a period when an event was recorded. You should see a nice clear vertical line, and a marker on the x-axis that, when you hover over it, gives you some more information. You can use the Column Mapping options in the Annotations window to bring additional information into the tooltip. For example, in my event series I have the id of the test, the action (start/end), and the hostname; I can get these overlaid onto the tooltip by mapping the columns accordingly.

N.B. currently (Grafana v1.9.1), when making changes to an Annotation definition you need to refresh the graph views after clicking Update on the annotation definition, otherwise you won't see the change reflected in the annotations on the graphs.

Sparklines

Everything I've written about Grafana so far has revolved around the graphs that it creates – unsurprisingly, because this is the core feature of the tool. But there are other visualisation options available – "Singlestat" and "Text". The latter is pretty obvious and I'm not going to discuss it here, but Singlestat, a.k.a.
Sparkline and/or Performance Tiles, is awesome and well worth a look. First, an illustration of what I'm blathering about: a nice headline figure of the current number of active sessions, along with a sparkline to show the trend of the metric.

To add one of these to your dashboard, go to the green row menu icon on the left (it's mostly hidden and will pop out when you hover over it) and select Add Panel -> singlestat. On the panel that appears, go to the edit screen.

In the Metrics panel specify the series as you would with a graph, but remember you need to pull back just a single series – there's no point writing a regex to match multiple ones. Here I'm going to show the number of queued requests on a connection pool. Note that because I want to show the latest value, I change the aggregation to last.

In the General tab set a title for the panel, as well as its width – unlike graphs, you typically want these panels to be fairly narrow, since the point is to show a figure, not lots of detail. You'll notice that I've also defined a Drilldown / detail link, so that a user can click on the summary figure and go to another dashboard to see more detail.

The Options tab lets you set the font size and prefixes/suffixes, and is also where you set up the sparkline and conditional formatting. Tick the Spark line box to draw a sparkline within the panel – if you've not seen them before, sparklines are great visualisations for showing the trend of a metric without fussing with axes and specific values. Tick the Background mode box to use the entire height of the panel for the graph and overlay the summary figure on top.

Now for the bit I think is particularly nice – conditional formatting of the singlestat panel. It's dead easy, and not a new concept, but it's a really great way to let a user see at a glance if there's something that needs their attention. In the case of this example, queueing connections, any queueing is dodgy and more than a few is bad (m'kay).
So let's colour-code it. You can even substitute values for words – maybe the difference between 61 queued sessions and 65 is fairly irrelevant; it's the fact that there is that magnitude of queued sessions that is the problem. Note that the values are absolutes, not ranges; there is an open issue for this, so hopefully that will change. The effect is nice though.

Conclusion

Hopefully this article has given you a good idea of what is possible with data stored in InfluxDB and visualised in Grafana, and how to go about doing it.

If you're interested in OBIEE monitoring you might also be interested in the ELK suite of tools, which complements what I have described here well. Get in touch if you'd like to learn more, or would like help with your OBIEE monitoring and diagnostics.