Commit c94331cd by Torkel Ödegaard

Merge branch 'develop' into develop-color-tweaks

parents 56118b9f b7a8db49
......@@ -4,8 +4,6 @@ Read before posting:
- Check out the FAQ: https://community.grafana.com/c/howto/faq
- Check out "How to troubleshoot metric query issues": https://community.grafana.com/t/how-to-troubleshoot-metric-query-issues/50
Please prefix your title with [Bug] or [Feature request].
Please include this information:
- What Grafana version are you using?
- What datasource are you using?
......
......@@ -38,12 +38,14 @@ public/css/*.min.css
conf/custom.ini
fig.yml
docker-compose.yml
docker-compose.yaml
profile.cov
/grafana
.notouch
/pkg/cmd/grafana-cli/grafana-cli
/pkg/cmd/grafana-server/grafana-server
/pkg/cmd/grafana-server/debug
debug.test
/examples/*/dist
/packaging/**/*.rpm
/packaging/**/*.deb
......
......@@ -11,7 +11,19 @@
## New Features
* **Data Source Proxy**: Add support for whitelisting specified cookies that will be passed through to the data source when proxying data source requests [#5457](https://github.com/grafana/grafana/issues/5457), thanks [@robingustafsson](https://github.com/robingustafsson)
* **Postgres/MySQL**: Add `__timeGroup` macro for MySQL [#9596](https://github.com/grafana/grafana/pull/9596), thanks [@svenklemm](https://github.com/svenklemm)
* **Text**: The Text panel is now edited in the Ace editor. [#9698](https://github.com/grafana/grafana/pull/9698), thx [@mtanda](https://github.com/mtanda)
* **Teams**: Add Microsoft Teams notifier [#8523](https://github.com/grafana/grafana/issues/8523), thx [@anthu](https://github.com/anthu)
* **Datasources**: It's now possible to configure datasources with config files [#1789](https://github.com/grafana/grafana/issues/1789)
* **Graphite**: Query editor updated to support new query by tag features [#9230](https://github.com/grafana/grafana/issues/9230)
* **Dashboard history**: New config file option `versions_to_keep` sets how many versions per dashboard to store [#9671](https://github.com/grafana/grafana/issues/9671)
## Minor
* **Alert panel**: Adds placeholder text when no alerts are within the time range [#9624](https://github.com/grafana/grafana/issues/9624), thx [@straend](https://github.com/straend)
* **MySQL**: Enable MaxOpenCon and MaxIdleCon regardless of how the connection string is configured [#9784](https://github.com/grafana/grafana/issues/9784), thx [@dfredell](https://github.com/dfredell)
* **Cloudwatch**: Fixes broken query inspector for CloudWatch [#9661](https://github.com/grafana/grafana/issues/9661), thx [@mtanda](https://github.com/mtanda)
* **Dashboard**: Make it possible to star dashboards from search and dashboard list panel [#1871](https://github.com/grafana/grafana/issues/1871)
* **Annotations**: Posting annotations now returns the id of the annotation [#9798](https://github.com/grafana/grafana/issues/9798)
## Tech
* **RabbitMq**: Remove support for publishing events to RabbitMQ [#9645](https://github.com/grafana/grafana/issues/9645)
......@@ -21,6 +33,32 @@
* **Singlestat**: suppress error when result contains no datapoints [#9636](https://github.com/grafana/grafana/issues/9636), thx [@utkarshcmu](https://github.com/utkarshcmu)
* **Postgres/MySQL**: Control quoting in SQL-queries when using template variables [#9030](https://github.com/grafana/grafana/issues/9030), thanks [@svenklemm](https://github.com/svenklemm)
# 4.6.3 (unreleased)
## Fixes
* **Gzip**: Fixes bug with Gravatar images when gzip was enabled [#5952](https://github.com/grafana/grafana/issues/5952)
* **Alert list**: Now shows alert state changes even after adding manual annotations on dashboard [#9951](https://github.com/grafana/grafana/issues/9951)
# 4.6.2 (2017-11-16)
## Important
* **Prometheus**: Fixes bug with new prometheus alerts in Grafana. Make sure to download this version if you're using Prometheus for alerting. More details in the issue. [#9777](https://github.com/grafana/grafana/issues/9777)
## Fixes
* **Color picker**: Bug after using textbox input field to change/paste color string [#9769](https://github.com/grafana/grafana/issues/9769)
* **Cloudwatch**: Fix for cloudwatch templating query `ec2_instance_attribute` [#9667](https://github.com/grafana/grafana/issues/9667), thanks [@mtanda](https://github.com/mtanda)
* **Heatmap**: Fixed tooltip for "time series buckets" mode [#9332](https://github.com/grafana/grafana/issues/9332)
* **InfluxDB**: Fixed query editor issue when using `>` or `<` operators in WHERE clause [#9871](https://github.com/grafana/grafana/issues/9871)
# 4.6.1 (2017-11-01)
* **Singlestat**: Fixed lost thresholds when using "Save dashboard as" [#9681](https://github.com/grafana/grafana/issues/9681)
* **Graph**: Fix for series override color picker [#9715](https://github.com/grafana/grafana/issues/9715)
* **Go**: build using golang 1.9.2 [#9713](https://github.com/grafana/grafana/issues/9713)
* **Plugins**: Fixed problem with loading plugin js files behind auth proxy [#9509](https://github.com/grafana/grafana/issues/9509)
* **Graphite**: Annotation tooltip should render empty string when undefined [#9707](https://github.com/grafana/grafana/issues/9707)
# 4.6.0 (2017-10-26)
## Fixes
......
FROM phusion/baseimage:0.9.22
MAINTAINER Denys Zhdanov <denis.zhdanov@gmail.com>
RUN apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y install vim \
nginx \
python-dev \
python-flup \
python-pip \
python-ldap \
expect \
git \
memcached \
sqlite3 \
libffi-dev \
libcairo2 \
libcairo2-dev \
python-cairo \
python-rrdtool \
pkg-config \
nodejs \
&& rm -rf /var/lib/apt/lists/*
# choose a timezone at build-time
# use `--build-arg CONTAINER_TIMEZONE=Europe/Brussels` in `docker build`
ARG CONTAINER_TIMEZONE
ENV DEBIAN_FRONTEND noninteractive
RUN if [ ! -z "${CONTAINER_TIMEZONE}" ]; \
then ln -sf /usr/share/zoneinfo/$CONTAINER_TIMEZONE /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata; \
fi
# fix python dependencies (LTS Django and newer memcached/txAMQP)
RUN pip install --upgrade pip && \
pip install django==1.8.18 \
python-memcached==1.53 \
txAMQP==0.6.2
ARG version=1.0.2
ARG whisper_version=${version}
ARG carbon_version=${version}
ARG graphite_version=${version}
ARG statsd_version=v0.7.2
# install whisper
RUN git clone -b ${whisper_version} --depth 1 https://github.com/graphite-project/whisper.git /usr/local/src/whisper
WORKDIR /usr/local/src/whisper
RUN python ./setup.py install
# install carbon
RUN git clone -b ${carbon_version} --depth 1 https://github.com/graphite-project/carbon.git /usr/local/src/carbon
WORKDIR /usr/local/src/carbon
RUN pip install -r requirements.txt \
&& python ./setup.py install
# install graphite
RUN git clone -b ${graphite_version} --depth 1 https://github.com/graphite-project/graphite-web.git /usr/local/src/graphite-web
WORKDIR /usr/local/src/graphite-web
RUN pip install -r requirements.txt \
&& python ./setup.py install
ADD conf/opt/graphite/conf/*.conf /opt/graphite/conf/
ADD conf/opt/graphite/webapp/graphite/local_settings.py /opt/graphite/webapp/graphite/local_settings.py
# ADD conf/opt/graphite/webapp/graphite/app_settings.py /opt/graphite/webapp/graphite/app_settings.py
WORKDIR /opt/graphite/webapp
RUN mkdir -p /var/log/graphite/ \
&& PYTHONPATH=/opt/graphite/webapp django-admin.py collectstatic --noinput --settings=graphite.settings
# install statsd
RUN git clone -b ${statsd_version} https://github.com/etsy/statsd.git /opt/statsd
ADD conf/opt/statsd/config.js /opt/statsd/config.js
# config nginx
RUN rm /etc/nginx/sites-enabled/default
ADD conf/etc/nginx/nginx.conf /etc/nginx/nginx.conf
ADD conf/etc/nginx/sites-enabled/graphite-statsd.conf /etc/nginx/sites-enabled/graphite-statsd.conf
# init django admin
ADD conf/usr/local/bin/django_admin_init.exp /usr/local/bin/django_admin_init.exp
ADD conf/usr/local/bin/manage.sh /usr/local/bin/manage.sh
RUN chmod +x /usr/local/bin/manage.sh && /usr/local/bin/django_admin_init.exp
# logging support
RUN mkdir -p /var/log/carbon /var/log/graphite /var/log/nginx
ADD conf/etc/logrotate.d/graphite-statsd /etc/logrotate.d/graphite-statsd
# daemons
ADD conf/etc/service/carbon/run /etc/service/carbon/run
ADD conf/etc/service/carbon-aggregator/run /etc/service/carbon-aggregator/run
ADD conf/etc/service/graphite/run /etc/service/graphite/run
ADD conf/etc/service/statsd/run /etc/service/statsd/run
ADD conf/etc/service/nginx/run /etc/service/nginx/run
# default conf setup
ADD conf /etc/graphite-statsd/conf
ADD conf/etc/my_init.d/01_conf_init.sh /etc/my_init.d/01_conf_init.sh
# cleanup
RUN apt-get clean\
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# defaults
EXPOSE 80 2003-2004 2023-2024 8125/udp 8126
VOLUME ["/opt/graphite/conf", "/opt/graphite/storage", "/etc/nginx", "/opt/statsd", "/etc/logrotate.d", "/var/log"]
WORKDIR /
ENV HOME /root
CMD ["/sbin/my_init"]
Copyright (c) 2013-2016 Nathan Hopkins
MIT License
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# Roadmap (2017-08-29)
# Roadmap (2017-10-31)
This roadmap is a tentative plan for the core development team. Things change constantly as PRs come in and priorities change.
But it will give you an idea of our current vision and plan.
### Short term (1-4 months)
- Release Grafana v4.5 with fixes and minor enhancements
- Release Grafana v5
- User groups
- Dashboard folders
- Dashboard permissions (on folders as well), permissions on groups or users
- Dashboard & folder permissions (assigned to users or groups)
- New Dashboard layout engine
- New sidemenu & nav UX
- Elasticsearch alerting
- React migration foundation (core components)
- Graphite 1.1 Tags Support
### Long term
### Long term (4 - 8 months)
- Backend plugins to support more Auth options, Alerting data sources & notifications
- Universal time series transformations for any data source (meta queries)
- Reporting
- Web socket & live data streams
- Migrate to Angular2 or react
- Alerting improvements (silence, per series tracking, etc)
- Dashboard as configuration and other automation / provisioning improvements
- Progress on React migration
- Change visualization (panel type) on the fly.
- Multi stat panel (vertical version of singlestat with bars/graph mode with big number etc)
- Repeat panel by query results
### In a distant future far far away
- Meta queries
- Integrated lightweight TSDB
- Web socket & live data sources
### Outside contributions
We know this is being worked on right now by contributors (and we hope to merge it when it's ready).
- Clustering for alert engine (load distribution)
......@@ -7,7 +7,7 @@ clone_folder: c:\gopath\src\github.com\grafana\grafana
environment:
nodejs_version: "6"
GOPATH: c:\gopath
GOVERSION: 1.9.1
GOVERSION: 1.9.2
install:
- rmdir c:\go /s /q
......
......@@ -9,7 +9,7 @@ machine:
GOPATH: "/home/ubuntu/.go_workspace"
ORG_PATH: "github.com/grafana"
REPO_PATH: "${ORG_PATH}/grafana"
GODIST: "go1.9.1.linux-amd64.tar.gz"
GODIST: "go1.9.2.linux-amd64.tar.gz"
post:
- mkdir -p ~/download
- mkdir -p ~/docker
......
......@@ -7,5 +7,7 @@ coverage:
project: yes
patch: yes
changes: no
comment: false
comment:
layout: "diff"
behavior: "once"
# list of datasources that should be deleted from the database
delete_datasources:
# - name: Graphite
# org_id: 1
# list of datasources to insert/update depending
# what's available in the database
datasources:
# # <string, required> name of the datasource. Required
# - name: Graphite
# # <string, required> datasource type. Required
# type: graphite
# # <string, required> access mode. direct or proxy. Required
# access: proxy
# # <int> org id. will default to org_id 1 if not specified
# org_id: 1
# # <string> url
# url: http://localhost:8080
# # <string> database password, if used
# password:
# # <string> database user, if used
# user:
# # <string> database name, if used
# database:
# # <bool> enable/disable basic auth
# basic_auth:
# # <string> basic auth username
# basic_auth_user:
# # <string> basic auth password
# basic_auth_password:
# # <bool> enable/disable with credentials headers
# with_credentials:
# # <bool> mark as default datasource. Max one per org
# is_default:
# # <map> fields that will be converted to json and stored in json_data
# json_data:
# graphiteVersion: "1.1"
# tlsAuth: true
# tlsAuthWithCACert: true
# # <string> json object of data that will be encrypted.
# secure_json_data:
# tlsCACert: "..."
# tlsClientCert: "..."
# tlsClientKey: "..."
# version: 1
# # <bool> allow users to edit datasources from the UI.
# editable: false
......@@ -12,17 +12,17 @@ instance_name = ${HOSTNAME}
#################################### Paths ###############################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
#
data = data
#
# Directory where grafana can store logs
#
logs = data/log
#
# Directory where grafana will automatically scan and look for plugins
#
plugins = data/plugins
# Config files containing datasources that will be configured at startup
datasources = conf/datasources
#################################### Server ##############################
[server]
# Protocol (http, https, socket)
......@@ -82,6 +82,9 @@ max_idle_conn = 2
# Max conn setting default is 0 (means not set)
max_open_conn =
# Set to true to log the sql calls and execution times.
log_queries =
# For "postgres", use either "disable", "require" or "verify-full"
# For "mysql", use either "true", "false", or "skip-verify".
ssl_mode = disable
......@@ -171,6 +174,7 @@ disable_gravatar = false
# data source proxy whitelist (ip_or_domain:port separated by spaces)
data_source_proxy_whitelist =
#################################### Snapshots ###########################
[snapshots]
# snapshot sharing options
external_enabled = true
......@@ -183,7 +187,13 @@ snapshot_remove_expired = true
# remove snapshots after 90 days
snapshot_TTL_days = 90
#################################### Users ####################################
#################################### Dashboards ##################
[dashboards]
# Number of dashboard versions to keep (per dashboard). Default: 20, Minimum: 1
versions_to_keep = 20
#################################### Users ###############################
[users]
# disable user signup / registration
allow_sign_up = false
......@@ -429,7 +439,7 @@ enabled = true
execute_alerts = true
#################################### Internal Grafana Metrics ############
# Metrics available at HTTP API Url /api/metrics
# Metrics available at HTTP API Url /metrics
[metrics]
enabled = true
interval_seconds = 10
......
......@@ -12,18 +12,17 @@
#################################### Paths ####################################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
#
;data = /var/lib/grafana
#
# Directory where grafana can store logs
#
;logs = /var/log/grafana
#
# Directory where grafana will automatically scan and look for plugins
#
;plugins = /var/lib/grafana/plugins
#
# Config files containing datasources that will be configured at startup
;datasources = conf/datasources
#################################### Server ####################################
[server]
# Protocol (http, https, socket)
......@@ -91,6 +90,8 @@
# Max conn setting default is 0 (means not set)
;max_open_conn =
# Set to true to log the sql calls and execution times.
log_queries =
#################################### Session ####################################
[session]
......@@ -161,6 +162,7 @@
# data source proxy whitelist (ip_or_domain:port separated by spaces)
;data_source_proxy_whitelist =
#################################### Snapshots ###########################
[snapshots]
# snapshot sharing options
;external_enabled = true
......@@ -173,7 +175,12 @@
# remove snapshots after 90 days
;snapshot_TTL_days = 90
#################################### Users ####################################
#################################### Dashboards History ##################
[dashboards]
# Number of dashboard versions to keep (per dashboard). Default: 20, Minimum: 1
;versions_to_keep = 20
#################################### Users ###############################
[users]
# disable user signup / registration
;allow_sign_up = true
......@@ -373,7 +380,7 @@
;execute_alerts = true
#################################### Internal Grafana Metrics ##########################
# Metrics available at HTTP API Url /api/metrics
# Metrics available at HTTP API Url /metrics
[metrics]
# Disable / Enable internal metrics
;enabled = true
......
collectd:
build: blocks/collectd
environment:
HOST_NAME: myserver
GRAPHITE_HOST: graphite
GRAPHITE_PORT: 2003
GRAPHITE_PREFIX: collectd.
REPORT_BY_CPU: 'false'
COLLECT_INTERVAL: 10
links:
- graphite
collectd:
build: blocks/collectd
environment:
HOST_NAME: myserver
GRAPHITE_HOST: graphite
GRAPHITE_PORT: 2003
GRAPHITE_PREFIX: collectd.
REPORT_BY_CPU: 'false'
COLLECT_INTERVAL: 10
links:
- graphite
elasticsearch:
image: elasticsearch:2.4.1
command: elasticsearch -Des.network.host=0.0.0.0
ports:
- "9200:9200"
- "9300:9300"
volumes:
- ./blocks/elastic/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
elasticsearch:
image: elasticsearch:2.4.1
command: elasticsearch -Des.network.host=0.0.0.0
ports:
- "9200:9200"
- "9300:9300"
volumes:
- ./blocks/elastic/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
elasticsearch1:
image: elasticsearch:1.7.6
command: elasticsearch -Des.network.host=0.0.0.0
ports:
- "11200:9200"
- "11300:9300"
volumes:
- ./blocks/elastic/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
elasticsearch1:
image: elasticsearch:1.7.6
command: elasticsearch -Des.network.host=0.0.0.0
ports:
- "11200:9200"
- "11300:9300"
volumes:
- ./blocks/elastic/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
# You need to run 'sysctl -w vm.max_map_count=262144' on the host machine
elasticsearch5:
image: elasticsearch:5
command: elasticsearch
ports:
- "10200:9200"
- "10300:9300"
elasticsearch5:
image: elasticsearch:5
command: elasticsearch
ports:
- "10200:9200"
- "10300:9300"
graphite:
build: blocks/graphite
ports:
- "8080:80"
- "2003:2003"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
graphite:
build: blocks/graphite
ports:
- "8080:80"
- "2003:2003"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
fake-graphite-data:
image: grafana/fake-data-gen
net: bridge
environment:
FD_DATASOURCE: graphite
FD_PORT: 2003
fake-graphite-data:
image: grafana/fake-data-gen
network_mode: bridge
environment:
FD_DATASOURCE: graphite
FD_PORT: 2003
FROM phusion/baseimage:0.9.22
MAINTAINER Denys Zhdanov <denis.zhdanov@gmail.com>
RUN apt-get -y update \
&& apt-get -y upgrade \
&& apt-get -y --force-yes install vim \
&& apt-get -y install vim \
nginx \
python-dev \
python-flup \
......@@ -22,38 +23,67 @@ RUN apt-get -y update \
nodejs \
&& rm -rf /var/lib/apt/lists/*
# choose a timezone at build-time
# use `--build-arg CONTAINER_TIMEZONE=Europe/Brussels` in `docker build`
ARG CONTAINER_TIMEZONE
ENV DEBIAN_FRONTEND noninteractive
RUN if [ ! -z "${CONTAINER_TIMEZONE}" ]; \
then ln -sf /usr/share/zoneinfo/$CONTAINER_TIMEZONE /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata; \
fi
# fix python dependencies (LTS Django and newer memcached/txAMQP)
RUN pip install django==1.8.18 \
RUN pip install --upgrade pip && \
pip install django==1.8.18 \
python-memcached==1.53 \
txAMQP==0.6.2 \
&& pip install --upgrade pip
txAMQP==0.6.2
ARG version=1.0.2
ARG whisper_version=${version}
ARG carbon_version=${version}
ARG graphite_version=${version}
RUN echo "Building Version: $version"
ARG whisper_repo=https://github.com/graphite-project/whisper.git
ARG carbon_repo=https://github.com/graphite-project/carbon.git
ARG graphite_repo=https://github.com/graphite-project/graphite-web.git
ARG statsd_version=v0.8.0
ARG statsd_repo=https://github.com/etsy/statsd.git
# install whisper
RUN git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/whisper.git /usr/local/src/whisper
RUN git clone -b ${whisper_version} --depth 1 ${whisper_repo} /usr/local/src/whisper
WORKDIR /usr/local/src/whisper
RUN python ./setup.py install
# install carbon
RUN git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/carbon.git /usr/local/src/carbon
RUN git clone -b ${carbon_version} --depth 1 ${carbon_repo} /usr/local/src/carbon
WORKDIR /usr/local/src/carbon
RUN pip install -r requirements.txt \
&& python ./setup.py install
# install graphite
RUN git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/graphite-web.git /usr/local/src/graphite-web
RUN git clone -b ${graphite_version} --depth 1 ${graphite_repo} /usr/local/src/graphite-web
WORKDIR /usr/local/src/graphite-web
RUN pip install -r requirements.txt \
&& python ./setup.py install
# install statsd
RUN git clone -b ${statsd_version} ${statsd_repo} /opt/statsd
# config graphite
ADD conf/opt/graphite/conf/*.conf /opt/graphite/conf/
ADD conf/opt/graphite/webapp/graphite/local_settings.py /opt/graphite/webapp/graphite/local_settings.py
ADD conf/opt/graphite/webapp/graphite/app_settings.py /opt/graphite/webapp/graphite/app_settings.py
# ADD conf/opt/graphite/webapp/graphite/app_settings.py /opt/graphite/webapp/graphite/app_settings.py
WORKDIR /opt/graphite/webapp
RUN mkdir -p /var/log/graphite/ \
&& PYTHONPATH=/opt/graphite/webapp django-admin.py collectstatic --noinput --settings=graphite.settings
# install statsd
RUN git clone -b v0.7.2 https://github.com/etsy/statsd.git /opt/statsd
ADD conf/opt/statsd/config.js /opt/statsd/config.js
# config statsd
ADD conf/opt/statsd/config.js /opt/statsd/
# config nginx
RUN rm /etc/nginx/sites-enabled/default
......@@ -63,8 +93,7 @@ ADD conf/etc/nginx/sites-enabled/graphite-statsd.conf /etc/nginx/sites-enabled/g
# init django admin
ADD conf/usr/local/bin/django_admin_init.exp /usr/local/bin/django_admin_init.exp
ADD conf/usr/local/bin/manage.sh /usr/local/bin/manage.sh
RUN chmod +x /usr/local/bin/manage.sh \
&& /usr/local/bin/django_admin_init.exp
RUN chmod +x /usr/local/bin/manage.sh && /usr/local/bin/django_admin_init.exp
# logging support
RUN mkdir -p /var/log/carbon /var/log/graphite /var/log/nginx
......@@ -86,8 +115,10 @@ RUN apt-get clean\
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# defaults
EXPOSE 80 2003-2004 2023-2024 8125/udp 8126
EXPOSE 80 2003-2004 2023-2024 8125 8125/udp 8126
VOLUME ["/opt/graphite/conf", "/opt/graphite/storage", "/etc/nginx", "/opt/statsd", "/etc/logrotate.d", "/var/log"]
WORKDIR /
ENV HOME /root
ENV STATSD_INTERFACE udp
CMD ["/sbin/my_init"]
......@@ -12,7 +12,7 @@ graphite_conf_dir_contents=$(find /opt/graphite/conf -mindepth 1 -print -quit)
graphite_webapp_dir_contents=$(find /opt/graphite/webapp/graphite -mindepth 1 -print -quit)
graphite_storage_dir_contents=$(find /opt/graphite/storage -mindepth 1 -print -quit)
if [[ -z $graphite_dir_contents ]]; then
git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/graphite-web.git /usr/local/src/graphite-web
# git clone -b 1.0.2 --depth 1 https://github.com/graphite-project/graphite-web.git /usr/local/src/graphite-web
cd /usr/local/src/graphite-web && python ./setup.py install
fi
if [[ -z $graphite_storage_dir_contents ]]; then
......
......@@ -40,4 +40,3 @@ aggregationMethod = sum
pattern = .*
xFilesFactor = 0.3
aggregationMethod = average
# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
#
# Definition Syntax:
#
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...
#
# Remember: To support accurate aggregation from higher to lower resolution
# archives, the precision of a longer retention archive must be
# cleanly divisible by precision of next lower retention archive.
#
# Valid: 60s:7d,300s:30d (300/60 = 5)
# Invalid: 180s:7d,300s:30d (300/180 = 3.333)
#
# Carbon's internal metrics. This entry should match what is specified in
# CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings
[carbon]
pattern = ^carbon\..*
retentions = 1m:31d,10m:1y,1h:5y
......
#!/bin/bash
PYTHONPATH=/opt/graphite/webapp django-admin.py syncdb --settings=graphite.settings
PYTHONPATH=/opt/graphite/webapp django-admin.py update_users --settings=graphite.settings
\ No newline at end of file
# PYTHONPATH=/opt/graphite/webapp django-admin.py update_users --settings=graphite.settings
\ No newline at end of file
graphite:
build: blocks/graphite1
ports:
- "8080:80"
- "2003:2003"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
graphite:
build:
context: blocks/graphite1
args:
version: master
ports:
- "8080:80"
- "2003:2003"
- "8125:8125/udp"
- "8126:8126"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
fake-graphite-data:
image: grafana/fake-data-gen
net: bridge
environment:
FD_DATASOURCE: graphite
FD_PORT: 2003
fake-graphite-data:
image: grafana/fake-data-gen
network_mode: bridge
environment:
FD_DATASOURCE: graphite
FD_PORT: 2003
[cache]
LOCAL_DATA_DIR = /opt/graphite/storage/whisper/
# Specify the user to drop privileges to
# If this is blank carbon runs as the user that invokes it
# This user must have write access to the local data directory
USER =
# Limit the size of the cache to avoid swapping or becoming CPU bound.
# Sorting and serving cache queries get more expensive as the cache grows.
# Use the value "inf" (infinity) for an unlimited cache size.
MAX_CACHE_SIZE = inf
# Limits the number of whisper update_many() calls per second, which effectively
# means the number of write requests sent to the disk. This is intended to
# prevent over-utilizing the disk and thus starving the rest of the system.
# When the rate of required updates exceeds this, then carbon's caching will
# take effect and increase the overall throughput accordingly.
MAX_UPDATES_PER_SECOND = 1000
# Softly limits the number of whisper files that get created each minute.
# Setting this value low (like at 50) is a good way to ensure your graphite
# system will not be adversely impacted when a bunch of new metrics are
# sent to it. The trade off is that it will take much longer for those metrics'
# database files to all get created and thus longer until the data becomes usable.
# Setting this value high (like "inf" for infinity) will cause graphite to create
# the files quickly but at the risk of slowing I/O down considerably for a while.
MAX_CREATES_PER_MINUTE = inf
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_INTERFACE = 0.0.0.0
CACHE_QUERY_PORT = 7002
LOG_UPDATES = False
# Enable AMQP if you want to receive metrics using an amqp broker
# ENABLE_AMQP = False
# Verbose means a line will be logged for every metric received
# useful for testing
# AMQP_VERBOSE = False
# AMQP_HOST = localhost
# AMQP_PORT = 5672
# AMQP_VHOST = /
# AMQP_USER = guest
# AMQP_PASSWORD = guest
# AMQP_EXCHANGE = graphite
# Patterns for all of the metrics this machine will store. Read more at
# http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol#Bindings
#
# Example: store all sales, linux servers, and utilization metrics
# BIND_PATTERNS = sales.#, servers.linux.#, #.utilization
#
# Example: store everything
# BIND_PATTERNS = #
# NOTE: you cannot run both a cache and a relay on the same server
# with the default configuration; you have to specify distinct
# interfaces and ports for the listeners.
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2004
CACHE_SERVERS = server1, server2, server3
MAX_QUEUE_SIZE = 10000
import datetime
import time
from django.utils.timezone import get_current_timezone
from django.core.urlresolvers import get_script_prefix
from django.http import HttpResponse
from django.shortcuts import render_to_response, get_object_or_404
from pytz import timezone
from graphite.util import json
from graphite.events import models
from graphite.render.attime import parseATTime
def to_timestamp(dt):
return time.mktime(dt.timetuple())
class EventEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime.datetime):
return to_timestamp(obj)
return json.JSONEncoder.default(self, obj)
def view_events(request):
if request.method == "GET":
context = { 'events' : fetch(request),
'slash' : get_script_prefix()
}
return render_to_response("events.html", context)
else:
return post_event(request)
def detail(request, event_id):
e = get_object_or_404(models.Event, pk=event_id)
context = { 'event' : e,
'slash' : get_script_prefix()
}
return render_to_response("event.html", context)
def post_event(request):
if request.method == 'POST':
event = json.loads(request.body)
assert isinstance(event, dict)
values = {}
values["what"] = event["what"]
values["tags"] = event.get("tags", None)
values["when"] = datetime.datetime.fromtimestamp(
event.get("when", time.time()))
if "data" in event:
values["data"] = event["data"]
e = models.Event(**values)
e.save()
return HttpResponse(status=200)
else:
return HttpResponse(status=405)
def get_data(request):
if 'jsonp' in request.REQUEST:
response = HttpResponse(
"%s(%s)" % (request.REQUEST.get('jsonp'),
json.dumps(fetch(request), cls=EventEncoder)),
mimetype='text/javascript')
else:
response = HttpResponse(
json.dumps(fetch(request), cls=EventEncoder),
mimetype="application/json")
return response
def fetch(request):
#XXX we need to move to USE_TZ=True to get rid of naive-time conversions
def make_naive(dt):
if 'tz' in request.GET:
tz = timezone(request.GET['tz'])
else:
tz = get_current_timezone()
local_dt = dt.astimezone(tz)
if hasattr(local_dt, 'normalize'):
local_dt = local_dt.normalize()
return local_dt.replace(tzinfo=None)
if request.GET.get("from", None) is not None:
time_from = make_naive(parseATTime(request.GET["from"]))
else:
time_from = datetime.datetime.fromtimestamp(0)
if request.GET.get("until", None) is not None:
time_until = make_naive(parseATTime(request.GET["until"]))
else:
time_until = datetime.datetime.now()
tags = request.GET.get("tags", None)
if tags is not None:
tags = request.GET.get("tags").split(" ")
return [x.as_dict() for x in
models.Event.find_events(time_from, time_until, tags=tags)]
[
{
"pk": 1,
"model": "auth.user",
"fields": {
"username": "admin",
"first_name": "",
"last_name": "",
"is_active": true,
"is_superuser": true,
"is_staff": true,
"last_login": "2011-09-20 17:02:14",
"groups": [],
"user_permissions": [],
"password": "sha1$1b11b$edeb0a67a9622f1f2cfeabf9188a711f5ac7d236",
"email": "root@example.com",
"date_joined": "2011-09-20 17:02:14"
}
}
]
# Edit this file to override the default graphite settings, do not edit settings.py
# Turn on debugging and restart apache if you ever see an "Internal Server Error" page
#DEBUG = True
# Set your local timezone (django will try to figure this out automatically)
TIME_ZONE = 'UTC'
# Setting MEMCACHE_HOSTS to be empty will turn off use of memcached entirely
#MEMCACHE_HOSTS = ['127.0.0.1:11211']
# Sometimes you need to do a lot of rendering work but cannot share your storage mount
#REMOTE_RENDERING = True
#RENDERING_HOSTS = ['fastserver01','fastserver02']
#LOG_RENDERING_PERFORMANCE = True
#LOG_CACHE_PERFORMANCE = True
# If you've got more than one backend server they should all be listed here
#CLUSTER_SERVERS = []
# Override this if you need to provide documentation specific to your graphite deployment
#DOCUMENTATION_URL = "http://wiki.mycompany.com/graphite"
# Enable email-related features
#SMTP_SERVER = "mail.mycompany.com"
# LDAP / ActiveDirectory authentication setup
#USE_LDAP_AUTH = True
#LDAP_SERVER = "ldap.mycompany.com"
#LDAP_PORT = 389
#LDAP_SEARCH_BASE = "OU=users,DC=mycompany,DC=com"
#LDAP_BASE_USER = "CN=some_readonly_account,DC=mycompany,DC=com"
#LDAP_BASE_PASS = "readonly_account_password"
#LDAP_USER_QUERY = "(username=%s)" #For Active Directory use "(sAMAccountName=%s)"
# If sqlite won't cut it, configure your real database here (don't forget to run manage.py syncdb!)
#DATABASE_ENGINE = 'mysql' # or 'postgres'
#DATABASE_NAME = 'graphite'
#DATABASE_USER = 'graphite'
#DATABASE_PASSWORD = 'graphite-is-awesome'
#DATABASE_HOST = 'mysql.mycompany.com'
#DATABASE_PORT = '3306'
grafana:$apr1$4R/20xhC$8t37jPP5dbcLr48btdkU//
daemon off;
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
server_names_hash_bucket_size 32;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
server {
listen 80 default_server;
server_name _;
open_log_file_cache max=1000 inactive=20s min_uses=2 valid=1m;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header Host $host;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, OPTIONS";
add_header Access-Control-Allow-Headers "origin, authorization, accept";
location /content {
alias /opt/graphite/webapp/content;
}
location /media {
alias /usr/share/pyshared/django/contrib/admin/media;
}
}
}
{
graphitePort: 2003,
graphiteHost: "127.0.0.1",
port: 8125,
mgmt_port: 8126,
backends: ['./backends/graphite'],
debug: true
}
[min]
pattern = \.min$
xFilesFactor = 0.1
aggregationMethod = min
[max]
pattern = \.max$
xFilesFactor = 0.1
aggregationMethod = max
[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average
[carbon]
pattern = ^carbon\..*
retentions = 1m:31d,10m:1y,1h:5y
[highres]
pattern = ^highres.*
retentions = 1s:1d,1m:7d
[statsd]
pattern = ^statsd.*
retentions = 1m:7d,10m:1y
[default]
pattern = .*
retentions = 10s:1d,1m:7d,10m:1y
[supervisord]
nodaemon = true
environment = GRAPHITE_STORAGE_DIR='/opt/graphite/storage',GRAPHITE_CONF_DIR='/opt/graphite/conf'
[program:nginx]
command = /usr/sbin/nginx
stdout_logfile = /var/log/supervisor/%(program_name)s.log
stderr_logfile = /var/log/supervisor/%(program_name)s.log
autorestart = true
[program:carbon-cache]
;user = www-data
command = /opt/graphite/bin/carbon-cache.py --debug start
stdout_logfile = /var/log/supervisor/%(program_name)s.log
stderr_logfile = /var/log/supervisor/%(program_name)s.log
autorestart = true
[program:graphite-webapp]
;user = www-data
directory = /opt/graphite/webapp
environment = PYTHONPATH='/opt/graphite/webapp'
command = /usr/bin/gunicorn_django -b127.0.0.1:8000 -w2 graphite/settings.py
stdout_logfile = /var/log/supervisor/%(program_name)s.log
stderr_logfile = /var/log/supervisor/%(program_name)s.log
autorestart = true
influxdb:
image: influxdb:latest
container_name: influxdb
ports:
- "2004:2004"
- "8083:8083"
- "8086:8086"
volumes:
- ./blocks/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf
influxdb:
image: influxdb:latest
container_name: influxdb
ports:
- "2004:2004"
- "8083:8083"
- "8086:8086"
volumes:
- ./blocks/influxdb/influxdb.conf:/etc/influxdb/influxdb.conf
fake-influxdb-data:
image: grafana/fake-data-gen
net: bridge
environment:
FD_DATASOURCE: influxdb
FD_PORT: 8086
fake-influxdb-data:
image: grafana/fake-data-gen
network_mode: bridge
environment:
FD_DATASOURCE: influxdb
FD_PORT: 8086
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "localhost:6831:6831/udp"
- "16686:16686"
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "127.0.0.1:6831:6831/udp"
- "16686:16686"
memcached:
image: memcached:latest
ports:
- "11211:11211"
memcached:
image: memcached:latest
ports:
- "11211:11211"
mysql:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: grafana
MYSQL_USER: grafana
MYSQL_PASSWORD: password
ports:
- "3306:3306"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci, --innodb_monitor_enable=all]
mysql:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: grafana
MYSQL_USER: grafana
MYSQL_PASSWORD: password
ports:
- "3306:3306"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
command: [mysqld, --character-set-server=utf8mb4, --collation-server=utf8mb4_unicode_ci, --innodb_monitor_enable=all]
mysql_opendata:
build: blocks/mysql_opendata
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: testdata
MYSQL_USER: grafana
MYSQL_PASSWORD: password
ports:
- "3307:3306"
mysql_opendata:
build: blocks/mysql_opendata
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: testdata
MYSQL_USER: grafana
MYSQL_PASSWORD: password
ports:
- "3307:3306"
mysqltests:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: grafana_tests
MYSQL_USER: grafana
MYSQL_PASSWORD: password
ports:
- "3306:3306"
mysqltests:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: rootpass
MYSQL_DATABASE: grafana_tests
MYSQL_USER: grafana
MYSQL_PASSWORD: password
ports:
- "3306:3306"
FROM debian:jessie
MAINTAINER Christian Luginbühl <dinke@pimprecords.com>
LABEL maintainer="Christian Luginbühl <dinke@pimprecords.com>"
ENV OPENLDAP_VERSION 2.4.40
......
openldap:
build: blocks/openldap
environment:
SLAPD_PASSWORD: grafana
SLAPD_DOMAIN: grafana.org
SLAPD_ADDITIONAL_MODULES: memberof
ports:
- "389:389"
openldap:
build: blocks/openldap
environment:
SLAPD_PASSWORD: grafana
SLAPD_DOMAIN: grafana.org
SLAPD_ADDITIONAL_MODULES: memberof
ports:
- "389:389"
opentsdb:
image: opower/opentsdb:latest
ports:
- "4242:4242"
opentsdb:
image: opower/opentsdb:latest
ports:
- "4242:4242"
fake-opentsdb-data:
image: grafana/fake-data-gen
net: bridge
environment:
FD_DATASOURCE: opentsdb
fake-opentsdb-data:
image: grafana/fake-data-gen
network_mode: bridge
environment:
FD_DATASOURCE: opentsdb
postgrestest:
image: postgres:9.4.14
environment:
POSTGRES_USER: grafana
POSTGRES_PASSWORD: password
POSTGRES_DATABASE: grafana
ports:
- "5432:5432"
command: postgres -c log_connections=on -c logging_collector=on -c log_destination=stderr -c log_directory=/var/log/postgresql
postgrestest:
image: postgres:latest
environment:
POSTGRES_USER: grafana
POSTGRES_PASSWORD: password
POSTGRES_DATABASE: grafana
ports:
- "5432:5432"
command: postgres -c log_connections=on -c logging_collector=on -c log_destination=stderr -c log_directory=/var/log/postgresql
postgrestest:
image: postgres:latest
environment:
POSTGRES_USER: grafanatest
POSTGRES_PASSWORD: grafanatest
ports:
- "5432:5432"
postgrestest:
image: postgres:latest
environment:
POSTGRES_USER: grafanatest
POSTGRES_PASSWORD: grafanatest
ports:
- "5432:5432"
prometheus:
build: blocks/prometheus
net: host
ports:
- "9090:9090"
prometheus:
build: blocks/prometheus
network_mode: host
ports:
- "9090:9090"
node_exporter:
image: prom/node-exporter
net: host
ports:
- "9100:9100"
node_exporter:
image: prom/node-exporter
network_mode: host
ports:
- "9100:9100"
fake-prometheus-data:
image: grafana/fake-data-gen
net: host
ports:
- "9091:9091"
environment:
FD_DATASOURCE: prom
fake-prometheus-data:
image: grafana/fake-data-gen
network_mode: host
ports:
- "9091:9091"
environment:
FD_DATASOURCE: prom
alertmanager:
image: quay.io/prometheus/alertmanager
net: host
ports:
- "9093:9093"
alertmanager:
image: quay.io/prometheus/alertmanager
network_mode: host
ports:
- "9093:9093"
FROM prom/prometheus:v2.0.0
ADD prometheus.yml /etc/prometheus/
ADD alert.rules /etc/prometheus/
# Alert Rules
ALERT AppCrash
IF process_open_fds > 0
FOR 15s
LABELS { severity="critical" }
ANNOTATIONS {
summary = "Number of open fds > 0",
description = "Just testing"
}
# my global config
global:
scrape_interval: 10s # Scrape targets every 10 seconds.
evaluation_interval: 10s # Evaluate rules every 10 seconds.
# scrape_timeout is set to the global default (10s).
# Load and evaluate rules in this file every 'evaluation_interval' seconds.
#rule_files:
# - "alert.rules"
# - "first.rules"
# - "second.rules"
# alerting:
# alertmanagers:
# - scheme: http
# static_configs:
# - targets:
# - "127.0.0.1:9093"
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node_exporter'
static_configs:
- targets: ['127.0.0.1:9100']
- job_name: 'fake-data-gen'
static_configs:
- targets: ['127.0.0.1:9091']
- job_name: 'grafana'
static_configs:
- targets: ['127.0.0.1:3000']
FROM centos:centos7
MAINTAINER Przemyslaw Ozgo <linux@ozgo.info>
LABEL maintainer="Przemyslaw Ozgo <linux@ozgo.info>"
RUN \
yum update -y && \
......
snmpd:
image: namshi/smtp
ports:
- "25:25"
snmpd:
image: namshi/smtp
ports:
- "25:25"
version: "2"
services:
......@@ -7,8 +7,9 @@ template_dir=templates
grafana_config_file=conf.tmp
grafana_config=config
fig_file=docker-compose.yml
fig_config=fig
compose_header_file=compose_header.yml
fig_file=docker-compose.yaml
fig_config=docker-compose.yaml
if [ "$#" == 0 ]; then
blocks=`ls $blocks_dir`
......@@ -23,13 +24,16 @@ if [ "$#" == 0 ]; then
exit 0
fi
for file in $gogs_config_file $fig_file; do
for file in $grafana_config_file $fig_file; do
if [ -e $file ]; then
echo "Deleting $file"
rm $file
fi
done
echo "Adding Compose header to $fig_file"
cat $compose_header_file >> $fig_file
for dir in $@; do
current_dir=$blocks_dir/$dir
if [ ! -d "$current_dir" ]; then
......@@ -45,7 +49,7 @@ for dir in $@; do
if [ -e $current_dir/$fig_config ]; then
echo "Adding $current_dir/$fig_config to $fig_file"
cat $current_dir/fig >> $fig_file
cat $current_dir/$fig_config >> $fig_file
echo "" >> $fig_file
fi
done
......
+++
title = "Provisioning"
description = ""
keywords = ["grafana", "provisioning"]
type = "docs"
[menu.docs]
parent = "admin"
weight = 8
+++
# Provisioning Grafana
## Config file
Check out the [configuration](/installation/configuration) page for more information about what you can configure in `grafana.ini`.
### Config file locations
- Default configuration from `$WORKING_DIR/conf/defaults.ini`
- Custom configuration from `$WORKING_DIR/conf/custom.ini`
- The custom configuration file path can be overridden using the `--config` parameter (see the sketch after the note below)
> **Note.** If you have installed Grafana using the `deb` or `rpm`
> packages, then your configuration file is located at
> `/etc/grafana/grafana.ini`. This path is specified in the Grafana
> init.d script using `--config` file parameter.
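As a minimal sketch, assuming a tarball-style install where the binary is run from the install directory (paths are illustrative, not prescriptive):

```bash
# Start Grafana with an explicit configuration file (illustrative path)
./bin/grafana-server --config /etc/grafana/custom.ini
```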
### Using environment variables
All options in the configuration file (listed below) can be overridden using environment variables with the following syntax:
```bash
GF_<SectionName>_<KeyName>
```
Where the section name is the text within the brackets. Everything should be uppercase, and `.` should be replaced by `_`. For example, given these configuration settings:
```bash
# default section
instance_name = ${HOSTNAME}
[security]
admin_user = admin
[auth.google]
client_secret = 0ldS3cretKey
```
Then you can override them using:
```bash
export GF_DEFAULT_INSTANCE_NAME=my-instance
export GF_SECURITY_ADMIN_USER=true
export GF_AUTH_GOOGLE_CLIENT_SECRET=newS3cretKey
```
<hr />
## Configuration management tools
Currently we do not provide any scripts/manifests for configuring Grafana. Rather than spending time learning and creating scripts/manifests for each tool, we think our time is better spent making Grafana easier to provision. Therefore, we rely heavily on the expertise of the community.
Tool | Project
-----|------------
Puppet | [https://forge.puppet.com/puppet/grafana](https://forge.puppet.com/puppet/grafana)
Ansible | [https://github.com/picotrading/ansible-grafana](https://github.com/picotrading/ansible-grafana)
Chef | [https://github.com/JonathanTron/chef-grafana](https://github.com/JonathanTron/chef-grafana)
Saltstack | [https://github.com/salt-formulas/salt-formula-grafana](https://github.com/salt-formulas/salt-formula-grafana)
## Datasources
> This feature is available from v4.7
It's possible to manage datasources in Grafana by adding one or more YAML config files in the [`conf/datasources`](/installation/configuration/#datasources) directory. Each config file can contain a list of `datasources` that will be added or updated during startup. If the datasource already exists, Grafana will update it to match the configuration file. The config file can also contain a list of datasources that should be deleted. That list is called `delete_datasources`. Grafana will delete datasources listed in `delete_datasources` before inserting/updating those in the `datasources` list.
### Running multiple Grafana instances
If you are running multiple instances of Grafana, you might run into problems if they have different versions of the datasource configuration file. The best way to solve this problem is to add a version number to each datasource in the configuration and increase it when you update the config. Grafana will only update datasources that have the same or a lower version number than the one specified in the config. That way old configs cannot overwrite newer configs if the instances restart at the same time.
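As a minimal sketch of the versioning mechanism (datasource name and url are illustrative):

```yaml
datasources:
  - name: Graphite
    type: graphite
    access: proxy
    url: http://localhost:8080
    # Bumped from 1 to 2; instances still shipping the old file (version: 1)
    # will no longer overwrite this definition.
    version: 2
```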
### Example datasource config file
```yaml
# list of datasources that should be deleted from the database
delete_datasources:
- name: Graphite
org_id: 1
# list of datasources to insert/update depending
# what's available in the database
datasources:
# <string, required> name of the datasource. Required
- name: Graphite
# <string, required> datasource type. Required
type: graphite
# <string, required> access mode. direct or proxy. Required
access: proxy
# <int> org id. will default to org_id 1 if not specified
org_id: 1
# <string> url
url: http://localhost:8080
# <string> database password, if used
password:
# <string> database user, if used
user:
# <string> database name, if used
database:
# <bool> enable/disable basic auth
basic_auth:
# <string> basic auth username
basic_auth_user:
# <string> basic auth password
basic_auth_password:
# <bool> enable/disable with credentials headers
with_credentials:
# <bool> mark as default datasource. Max one per org
is_default:
# <map> fields that will be converted to json and stored in json_data
json_data:
graphiteVersion: "1.1"
tlsAuth: true
tlsAuthWithCACert: true
# <string> json object of data that will be encrypted.
secure_json_data:
tlsCACert: "..."
tlsClientCert: "..."
tlsClientKey: "..."
version: 1
# <bool> allow users to edit datasources from the UI.
editable: false
```
#### Json data
Since not all datasources have the same configuration settings, we only have the most common ones as fields. The rest should be stored as a json blob in the `json_data` field. Here are the most common settings that the core datasources use.
| Name | Type | Datasource |Description |
| ----| ---- | ---- | --- |
| tlsAuth | boolean | *All* | Enable TLS authentication using client cert configured in secure json data |
| tlsAuthWithCACert | boolean | *All* | Enable TLS authentication using CA cert |
| graphiteVersion | string | Graphite | Graphite version |
| timeInterval | string | Elastic, Influxdb & Prometheus | Lowest interval/step value that should be used for this data source |
| esVersion | string | Elastic | Elasticsearch version |
| timeField | string | Elastic | Which field that should be used as timestamp |
| interval | string | Elastic | Index date time format |
| authType | string | Cloudwatch | Auth provider. keys/credentials/arn |
| assumeRoleArn | string | Cloudwatch | ARN of Assume Role |
| defaultRegion | string | Cloudwatch | AWS region |
| customMetricsNamespaces | string | Cloudwatch | Namespaces of Custom Metrics |
| tsdbVersion | string | OpenTsdb | Version |
| tsdbResolution | string | OpenTsdb | Resolution |
| sslmode | string | Postgres | SSL mode: 'disable', 'require', 'verify-ca' or 'verify-full' |
#### Secure Json data
{"authType":"keys","defaultRegion":"us-west-2","timeField":"@timestamp"}
Secure json data is a map of settings that will be encrypted with the [secret key](/installation/configuration/#secret-key) from the Grafana config. The purpose of this is only to hide content from the users of the application. This should be used for storing the TLS certificates and passwords that Grafana will append to requests on the server side. All of these settings are optional.
| Name | Type | Datasource | Description |
| ----| ---- | ---- | --- |
| tlsCACert | string | *All* | CA cert for outgoing requests |
| tlsClientCert | string | *All* | TLS client cert for outgoing requests |
| tlsClientKey | string | *All* | TLS client key for outgoing requests |
| password | string | Postgres | password |
| user | string | Postgres | user |
......@@ -45,10 +45,10 @@ Macro example | Description
------------ | -------------
*$__time(dateColumn)* | Will be replaced by an expression to rename the column to `time`. For example, *dateColumn as time*
*$__timeSec(dateColumn)* | Will be replaced by an expression to rename the column to `time` and convert the value to a unix timestamp. For example, *extract(epoch from dateColumn) as time*
*$__timeFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name. For example, *dateColumn > to_timestamp(1494410783) AND dateColumn < to_timestamp(1494497183)*
*$__timeFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name. For example, *extract(epoch from dateColumn) BETWEEN 1494410783 AND 1494497183*
*$__timeFrom()* | Will be replaced by the start of the currently active time selection. For example, *to_timestamp(1494410783)*
*$__timeTo()* | Will be replaced by the end of the currently active time selection. For example, *to_timestamp(1494497183)*
*$__timeGroup(dateColumn,'5m')* | Will be replaced by an expression usable in GROUP BY clause. For example, *(extract(epoch from "dateColumn")/extract(epoch from '5m'::interval))::int*extract(epoch from '5m'::interval)*
*$__timeGroup(dateColumn,'5m')* | Will be replaced by an expression usable in GROUP BY clause. For example, *(extract(epoch from "dateColumn")/300)::bigint*300*
*$__unixEpochFilter(dateColumn)* | Will be replaced by a time range filter using the specified column name with times represented as unix timestamp. For example, *dateColumn > 1494410783 AND dateColumn < 1494497183*
*$__unixEpochFrom()* | Will be replaced by the start of the currently active time selection as unix timestamp. For example, *1494410783*
*$__unixEpochTo()* | Will be replaced by the end of the currently active time selection as unix timestamp. For example, *1494497183*
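As an illustration, a sketch of a time series panel query using these macros (the table and column names are made up for the example):

```sql
SELECT
  $__timeGroup(created_at, '5m') AS time,
  count(*) AS value
FROM orders
WHERE $__timeFilter(created_at)
GROUP BY 1
ORDER BY 1
```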
......@@ -186,7 +186,7 @@ ORDER BY atimestamp ASC
## Annotations
[Annotations]({{< relref "reference/annotations.md" >}}) allows you to overlay rich event information on top of graphs. You add annotation queries via the Dashboard menu / Annotations view.
[Annotations]({{< relref "reference/annotations.md" >}}) allow you to overlay rich event information on top of graphs. You add annotation queries via the Dashboard menu / Annotations view.
An example query:
......
......@@ -34,6 +34,7 @@ Name | Description
*Basic Auth* | Enable basic authentication to the Prometheus data source.
*User* | Name of your Prometheus user
*Password* | Database user's password
*Scrape interval* | This will be used as a lower limit for the Prometheus step query parameter. Default value is 15s.
## Query editor
......@@ -95,3 +96,7 @@ Prometheus supports two ways to query annotations.
- A Prometheus query for pending and firing alerts (for details see [Inspecting alerts during runtime](https://prometheus.io/docs/alerting/rules/#inspecting-alerts-during-runtime))
The step option is useful to limit the number of events returned from your query.
## Getting Grafana metrics into Prometheus
Since 4.6.0 Grafana exposes metrics for Prometheus on the `/metrics` endpoint. We also bundle a dashboard within Grafana so you can get started viewing your metrics faster. You can import the bundled dashboard by going to the data source edit page and clicking the dashboard tab. There you can find a dashboard for Grafana and one for Prometheus. Import and start viewing all the metrics!
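A minimal scrape configuration sketch for this (the target assumes a local Grafana on its default port 3000; Prometheus' default `metrics_path` of `/metrics` is used):

```yaml
scrape_configs:
  - job_name: 'grafana'
    static_configs:
      - targets: ['127.0.0.1:3000']  # Grafana's default HTTP port; adjust to your deployment
```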
......@@ -17,7 +17,7 @@ This make is much easier to verify functionally since the data can be shared ver
## Enable
`Grafana TestData` is not enabled by default. To enable it you have to go to `/plugins/testdata/edit` and click the enable button to enable.
`Grafana TestData` is not enabled by default. To enable it, first navigate to the Plugins section, found in your Grafana main menu. Click the Apps tab in the Plugins section and select the Grafana TestData App. (Or navigate to http://your_grafana_instance/plugins/testdata/edit to go directly there.) Finally, click the enable button.
## Create mock data.
......
......@@ -89,7 +89,7 @@ Content-Type: application/json
## Create Annotation
Creates an annotation in the Grafana database. The `dashboardId` and `panelId` fields are optional. If they are not specified then a global annotation is created and can be queried in any dashboard that adds the Grafana annotations data source.
Creates an annotation in the Grafana database. The `dashboardId` and `panelId` fields are optional. If they are not specified then a global annotation is created and can be queried in any dashboard that adds the Grafana annotations data source. When creating a region annotation, the response will include both `id` and `endId`; otherwise only `id`.
`POST /api/annotations`
......@@ -117,7 +117,11 @@ Content-Type: application/json
HTTP/1.1 200
Content-Type: application/json
{"message":"Annotation added"}
{
"message":"Annotation added",
"id": 1,
"endId": 2
}
```
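For reference, a hedged sketch of a request that creates a region annotation (field values are illustrative and authentication headers are omitted):

```http
POST /api/annotations HTTP/1.1
Accept: application/json
Content-Type: application/json

{
  "dashboardId": 468,
  "panelId": 1,
  "time": 1507037197339,
  "isRegion": true,
  "timeEnd": 1507180805056,
  "tags": ["deploy", "region"],
  "text": "Deploy window for service X"
}
```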
## Create Annotation in Graphite format
......@@ -148,7 +152,10 @@ Content-Type: application/json
HTTP/1.1 200
Content-Type: application/json
{"message":"Graphite annotation added"}
{
"message":"Graphite annotation added",
"id": 1
}
```
## Update Annotation
......
......@@ -258,7 +258,7 @@ Query parameters:
**Example Request**:
```http
GET /api/search?query=MyDashboard&starred=true&tag=prod HTTP/1.1
GET /api/search?query=Production%20Overview&starred=true&tag=prod HTTP/1.1
Accept: application/json
Content-Type: application/json
Authorization: Bearer eyJrIjoiT0tTcG1pUlY2RnVKZTFVaDFsNFZXdE9ZWmNrMkZYbk
......@@ -276,8 +276,8 @@ Content-Type: application/json
"title":"Production Overview",
"uri":"db/production-overview",
"type":"dash-db",
"tags":[],
"isStarred":false
"tags":[prod],
"isStarred":true
}
]
```
\ No newline at end of file
```
......@@ -46,8 +46,8 @@ those options.
- [Graphite]({{< relref "features/datasources/graphite.md" >}})
- [Elasticsearch]({{< relref "features/datasources/elasticsearch.md" >}})
- [InfluxDB]({{< relref "features/datasources/influxdb.md" >}})
- [Prometheus]({{< relref "features/datasources/influxdb.md" >}})
- [OpenTSDB]({{< relref "features/datasources/prometheus.md" >}})
- [Prometheus]({{< relref "features/datasources/prometheus.md" >}})
- [OpenTSDB]({{< relref "features/datasources/opentsdb.md" >}})
- [MySQL]({{< relref "features/datasources/mysql.md" >}})
- [Postgres]({{< relref "features/datasources/postgres.md" >}})
- [Cloudwatch]({{< relref "features/datasources/cloudwatch.md" >}})
......@@ -87,6 +87,14 @@ command line in the init.d script or the systemd service file. It can
be overridden in the configuration file or in the default environment variable
file.
### plugins
Directory where grafana will automatically scan and look for plugins
### datasources
Config files containing datasources that will be configured at startup
## [server]
### http_addr
......@@ -224,6 +232,9 @@ The maximum number of connections in the idle connection pool.
### max_open_conn
The maximum number of open connections to the database.
### log_queries
Set to `true` to log the sql calls and execution times.
<hr />
## [security]
......@@ -551,7 +562,7 @@ session provider you have configured.
- **file:** session file path, e.g. `data/sessions`
- **mysql:** go-sql-driver/mysql dsn config string, e.g. `user:password@tcp(127.0.0.1:3306)/database_name`
- **postgres:** ex: user=a password=b host=localhost port=5432 dbname=c sslmode=require
- **postgres:** ex: user=a password=b host=localhost port=5432 dbname=c sslmode=verify-full
- **memcache:** ex: 127.0.0.1:11211
- **redis:** ex: `addr=127.0.0.1:6379,pool_size=100,prefix=grafana`
......@@ -580,7 +591,7 @@ CREATE TABLE session (
);
```
Postgres valid `sslmode` are `disable`, `require` (default), `verify-ca`, and `verify-full`.
Postgres valid `sslmode` are `disable`, `require`, `verify-ca`, and `verify-full` (default).
### cookie_name
......@@ -613,6 +624,12 @@ Analytics ID here. By default this feature is disabled.
<hr />
## [dashboards]
### versions_to_keep (introduced in v5.0)
Number of dashboard versions to keep (per dashboard). Default: 20, Minimum: 1.
## [dashboards.json]
If you have a system that automatically builds dashboards as json files you can enable this feature to have the
......@@ -673,7 +690,7 @@ Ex `filters = sqlstore:debug`
## [metrics]
### enabled
Enable metrics reporting. defaults true. Available via HTTP API `/api/metrics`.
Enable metrics reporting. Defaults to true. Available via HTTP API `/metrics`.
### interval_seconds
......
......@@ -15,7 +15,7 @@ weight = 1
Description | Download
------------ | -------------
Stable for Debian-based Linux | [grafana_4.6.0_amd64.deb](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_4.6.0_amd64.deb)
Stable for Debian-based Linux | [grafana_4.6.2_amd64.deb](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_4.6.2_amd64.deb)
<!-- Beta for Debian-based Linux | [grafana_4.5.0-beta1_amd64.deb](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_4.5.0-beta1_amd64.deb) -->
......@@ -26,9 +26,9 @@ installation.
```bash
wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_4.6.0_amd64.deb
wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_4.6.2_amd64.deb
sudo apt-get install -y adduser libfontconfig
sudo dpkg -i grafana_4.6.0_amd64.deb
sudo dpkg -i grafana_4.6.2_amd64.deb
```
<!--
......
......@@ -15,7 +15,7 @@ weight = 2
Description | Download
------------ | -------------
Stable for CentOS / Fedora / OpenSuse / Redhat Linux | [4.6.0 (x86-64 rpm)](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.0-1.x86_64.rpm)
Stable for CentOS / Fedora / OpenSuse / Redhat Linux | [4.6.2 (x86-64 rpm)](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.2-1.x86_64.rpm)
<!-- Latest Beta for CentOS / Fedora / OpenSuse / Redhat Linux | [4.5.0-beta1 (x86-64 rpm)](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.5.0-beta1.x86_64.rpm) -->
......@@ -27,7 +27,7 @@ installation.
You can install Grafana using Yum directly.
```bash
$ sudo yum install https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.0-1.x86_64.rpm
$ sudo yum install https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.2-1.x86_64.rpm
```
Or install manually using `rpm`.
......@@ -35,15 +35,15 @@ Or install manually using `rpm`.
#### On CentOS / Fedora / Redhat:
```bash
$ wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.0-1.x86_64.rpm
$ wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.2-1.x86_64.rpm
$ sudo yum install initscripts fontconfig
$ sudo rpm -Uvh grafana-4.6.0-1.x86_64.rpm
$ sudo rpm -Uvh grafana-4.6.2-1.x86_64.rpm
```
#### On OpenSuse:
```bash
$ sudo rpm -i --nodeps grafana-4.6.0-1.x86_64.rpm
$ sudo rpm -i --nodeps grafana-4.6.2-1.x86_64.rpm
```
## Install via YUM Repository
......
......@@ -13,7 +13,7 @@ weight = 3
Description | Download
------------ | -------------
Latest stable package for Windows | [grafana.4.6.0.windows-x64.zip](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.0.windows-x64.zip)
Latest stable package for Windows | [grafana.4.6.2.windows-x64.zip](https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.6.2.windows-x64.zip)
Read [Upgrading Grafana]({{< relref "installation/upgrading.md" >}}) for tips and guidance on updating an existing
installation.
......
......@@ -13,9 +13,10 @@ dev environment. Grafana ships with its own required backend server; also comple
## Dependencies
- [Go 1.9.1](https://golang.org/dl/)
- [NodeJS LTS](https://nodejs.org/download/)
- [Go 1.9.2](https://golang.org/dl/)
- [Git](https://git-scm.com/downloads)
- [NodeJS LTS](https://nodejs.org/download/)
- node-gyp is the Node.js native addon build tool and it requires extra dependencies: Python 2.7, make and GCC. These are already installed on most Linux distros and on macOS. See the Building On Windows section or the [node-gyp installation instructions](https://github.com/nodejs/node-gyp#installation) for more details.
## Get Code
Create a directory for the project and set your path accordingly (or use the [default Go workspace directory](https://golang.org/doc/code.html#GOPATH)). Then download and install Grafana into your $GOPATH directory:
......@@ -40,8 +41,8 @@ go run build.go build # (or 'go build ./pkg/cmd/grafana-server')
```
#### Building on Windows
The Grafana backend includes Sqlite3 which requires GCC to compile. So in order to compile Grafana on windows you need
to install GCC. We recommend [TDM-GCC](http://tdm-gcc.tdragon.net/download).
The Grafana backend includes SQLite3 which requires GCC to compile. So in order to compile Grafana on Windows you need to install GCC. We recommend [TDM-GCC](http://tdm-gcc.tdragon.net/download).
[node-gyp](https://github.com/nodejs/node-gyp#installation) is the Node.js native addon build tool and it requires extra dependencies to be installed on Windows. In a command prompt which is run as administrator, run:
......
......@@ -25,12 +25,16 @@ enabled = true
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
ldap_sync_ttl = 60
whitelist =
```
* **enabled**: this is to toggle the feature on or off
* **header_name**: this is the HTTP header name that passes the username or email address of the authenticated user to Grafana. Grafana will trust whatever username is contained in this header and automatically log the user in.
* **header_property**: this tells Grafana whether the value in the header_name is a username or an email address. (In Grafana you can log in using your account username or account email)
* **auto_sign_up**: If set to true, Grafana will automatically create user accounts in the Grafana DB if one does not exist. If set to false, users who do not exist in the GrafanaDB won’t be able to log in, even though their username and password are valid.
* **ldap_sync_ttl**: When both auth.proxy and auth.ldap are enabled, the user's organisation and role are synchronised from LDAP after the HTTP proxy authentication. You can force LDAP re-synchronisation after `ldap_sync_ttl` minutes.
* **whitelist**: Comma-separated list of trusted authentication proxy IP addresses.
With a fresh install of Grafana, using the above configuration for the authProxy feature, we can send a simple API call to list all users. The only user that will be present is the default “Admin” user that is added the first time Grafana starts up. As you can see, all we need to do to authenticate the request is to provide the “X-WEBAUTH-USER” header, as sketched below.
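A minimal sketch of such a request in Go, assuming Grafana is listening on localhost:3000 with the auth.proxy settings shown above; the user name and endpoint are illustrative.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// The trusted proxy asserts the identity via the configured header_name;
	// Grafana logs that user in (and auto-creates the account when auto_sign_up = true).
	req, _ := http.NewRequest("GET", "http://localhost:3000/api/users", nil)
	req.Header.Set("X-WEBAUTH-USER", "admin") // value of header_name from the config above

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body)) // list of users; only the default Admin on a fresh install
}
```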
......
module.exports = {
verbose: false,
"globals": {
"ts-jest": {
"tsConfigFile": "tsconfig.json"
}
},
"transform": {
"^.+\\.tsx?$": "<rootDir>/node_modules/ts-jest/preprocessor.js"
},
"moduleDirectories": ["<rootDir>/node_modules", "<rootDir>/public"],
"moduleDirectories": ["node_modules", "public"],
"roots": [
"<rootDir>/public"
],
......
......@@ -95,12 +95,12 @@
"zone.js": "^0.7.2"
},
"scripts": {
"dev": "node ./node_modules/.bin/webpack --progress --colors --config scripts/webpack/webpack.dev.js",
"watch": "node ./node_modules/.bin/webpack --progress --colors --watch --config scripts/webpack/webpack.dev.js",
"build": "node ./node_modules/.bin/grunt build",
"test": "node ./node_modules/.bin/grunt test",
"test:coverage": "node ./node_modules/.bin/grunt test --coverage=true",
"lint": "node ./node_modules/.bin/tslint -c tslint.json --project tsconfig.json --type-check",
"dev": "webpack --progress --colors --config scripts/webpack/webpack.dev.js",
"watch": "webpack --progress --colors --watch --config scripts/webpack/webpack.dev.js",
"build": "grunt build",
"test": "grunt test",
"test:coverage": "grunt test --coverage=true",
"lint": "tslint -c tslint.json --project tsconfig.json --type-check",
"karma": "node ./node_modules/grunt-cli/bin/grunt karma:dev",
"jest": "node ./node_modules/jest-cli/bin/jest.js --notify --watch",
"precommit": "node ./node_modules/grunt-cli/bin/grunt precommit"
......
#! /usr/bin/env bash
version=4.5.2
version=4.6.2
wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_${version}_amd64.deb
......
......@@ -8,6 +8,7 @@ import (
"github.com/grafana/grafana/pkg/components/simplejson"
"github.com/grafana/grafana/pkg/middleware"
"github.com/grafana/grafana/pkg/services/annotations"
"github.com/grafana/grafana/pkg/util"
)
func GetAnnotations(c *middleware.Context) Response {
......@@ -21,6 +22,7 @@ func GetAnnotations(c *middleware.Context) Response {
PanelId: c.QueryInt64("panelId"),
Limit: c.QueryInt64("limit"),
Tags: c.QueryStrings("tags"),
Type: c.Query("type"),
}
repo := annotations.GetRepository()
......@@ -75,9 +77,11 @@ func PostAnnotation(c *middleware.Context, cmd dtos.PostAnnotationsCmd) Response
return ApiError(500, "Failed to save annotation", err)
}
startID := item.Id
// handle regions
if cmd.IsRegion {
item.RegionId = item.Id
item.RegionId = startID
if item.Data == nil {
item.Data = simplejson.New()
......@@ -93,9 +97,18 @@ func PostAnnotation(c *middleware.Context, cmd dtos.PostAnnotationsCmd) Response
if err := repo.Save(&item); err != nil {
return ApiError(500, "Failed save annotation for region end time", err)
}
return Json(200, util.DynMap{
"message": "Annotation added",
"id": startID,
"endId": item.Id,
})
}
return ApiSuccess("Annotation added")
return Json(200, util.DynMap{
"message": "Annotation added",
"id": startID,
})
}
func formatGraphiteAnnotation(what string, data string) string {
......@@ -154,7 +167,10 @@ func PostGraphiteAnnotation(c *middleware.Context, cmd dtos.PostGraphiteAnnotati
return ApiError(500, "Failed to save Graphite annotation", err)
}
return ApiSuccess("Graphite annotation added")
return Json(200, util.DynMap{
"message": "Graphite annotation added",
"id": item.Id,
})
}
func UpdateAnnotation(c *middleware.Context, cmd dtos.UpdateAnnotationsCmd) Response {
......
......@@ -212,10 +212,10 @@ func (hs *HttpServer) registerRoutes() {
// Data sources
apiRoute.Group("/datasources", func(datasourceRoute RouteRegister) {
datasourceRoute.Get("/", wrap(GetDataSources))
datasourceRoute.Post("/", quota("data_source"), bind(m.AddDataSourceCommand{}), AddDataSource)
datasourceRoute.Post("/", quota("data_source"), bind(m.AddDataSourceCommand{}), wrap(AddDataSource))
datasourceRoute.Put("/:id", bind(m.UpdateDataSourceCommand{}), wrap(UpdateDataSource))
datasourceRoute.Delete("/:id", DeleteDataSourceById)
datasourceRoute.Delete("/name/:name", DeleteDataSourceByName)
datasourceRoute.Delete("/:id", wrap(DeleteDataSourceById))
datasourceRoute.Delete("/name/:name", wrap(DeleteDataSourceByName))
datasourceRoute.Get("/:id", wrap(GetDataSourceById))
datasourceRoute.Get("/name/:name", wrap(GetDataSourceByName))
}, reqOrgAdmin)
......@@ -340,8 +340,8 @@ func (hs *HttpServer) registerRoutes() {
r.Any("/api/gnet/*", reqSignedIn, ProxyGnetRequest)
// Gravatar service.
avt := avatar.CacheServer()
r.Get("/avatar/:hash", avt.ServeHTTP)
avatarCacheServer := avatar.NewCacheServer()
r.Get("/avatar/:hash", avatarCacheServer.Handler)
// Websocket
r.Any("/ws", hs.streamManager.Serve)
......
......@@ -24,6 +24,7 @@ import (
"github.com/grafana/grafana/pkg/log"
"github.com/grafana/grafana/pkg/setting"
"gopkg.in/macaron.v1"
)
var gravatarSource string
......@@ -89,12 +90,12 @@ func (this *Avatar) Update() (err error) {
return err
}
type service struct {
type CacheServer struct {
notFound *Avatar
cache map[string]*Avatar
}
func (this *service) mustInt(r *http.Request, defaultValue int, keys ...string) (v int) {
func (this *CacheServer) mustInt(r *http.Request, defaultValue int, keys ...string) (v int) {
for _, k := range keys {
if _, err := fmt.Sscanf(r.FormValue(k), "%d", &v); err == nil {
defaultValue = v
......@@ -103,8 +104,8 @@ func (this *service) mustInt(r *http.Request, defaultValue int, keys ...string)
return defaultValue
}
func (this *service) ServeHTTP(w http.ResponseWriter, r *http.Request) {
urlPath := r.URL.Path
func (this *CacheServer) Handler(ctx *macaron.Context) {
urlPath := ctx.Req.URL.Path
hash := urlPath[strings.LastIndex(urlPath, "/")+1:]
var avatar *Avatar
......@@ -126,20 +127,24 @@ func (this *service) ServeHTTP(w http.ResponseWriter, r *http.Request) {
this.cache[hash] = avatar
}
w.Header().Set("Content-Type", "image/jpeg")
w.Header().Set("Content-Length", strconv.Itoa(len(avatar.data.Bytes())))
w.Header().Set("Cache-Control", "private, max-age=3600")
ctx.Resp.Header().Add("Content-Type", "image/jpeg")
if err := avatar.Encode(w); err != nil {
if !setting.EnableGzip {
ctx.Resp.Header().Add("Content-Length", strconv.Itoa(len(avatar.data.Bytes())))
}
ctx.Resp.Header().Add("Cache-Control", "private, max-age=3600")
if err := avatar.Encode(ctx.Resp); err != nil {
log.Warn("avatar encode error: %v", err)
w.WriteHeader(500)
ctx.WriteHeader(500)
}
}
func CacheServer() http.Handler {
func NewCacheServer() *CacheServer {
UpdateGravatarSource()
return &service{
return &CacheServer{
notFound: newNotFound(),
cache: make(map[string]*Avatar),
}
......
......@@ -33,6 +33,7 @@ func GetDataSources(c *middleware.Context) Response {
BasicAuth: ds.BasicAuth,
IsDefault: ds.IsDefault,
JsonData: ds.JsonData,
ReadOnly: ds.ReadOnly,
}
if plugin, exists := plugins.DataSources[ds.Type]; exists {
......@@ -68,59 +69,70 @@ func GetDataSourceById(c *middleware.Context) Response {
return Json(200, &dtos)
}
func DeleteDataSourceById(c *middleware.Context) {
func DeleteDataSourceById(c *middleware.Context) Response {
id := c.ParamsInt64(":id")
if id <= 0 {
c.JsonApiErr(400, "Missing valid datasource id", nil)
return
return ApiError(400, "Missing valid datasource id", nil)
}
ds, err := getRawDataSourceById(id, c.OrgId)
if err != nil {
return ApiError(400, "Failed to delete datasource", nil)
}
if ds.ReadOnly {
return ApiError(403, "Cannot delete read-only data source", nil)
}
cmd := &m.DeleteDataSourceByIdCommand{Id: id, OrgId: c.OrgId}
err := bus.Dispatch(cmd)
err = bus.Dispatch(cmd)
if err != nil {
c.JsonApiErr(500, "Failed to delete datasource", err)
return
return ApiError(500, "Failed to delete datasource", err)
}
c.JsonOK("Data source deleted")
return ApiSuccess("Data source deleted")
}
func DeleteDataSourceByName(c *middleware.Context) {
func DeleteDataSourceByName(c *middleware.Context) Response {
name := c.Params(":name")
if name == "" {
c.JsonApiErr(400, "Missing valid datasource name", nil)
return
return ApiError(400, "Missing valid datasource name", nil)
}
cmd := &m.DeleteDataSourceByNameCommand{Name: name, OrgId: c.OrgId}
getCmd := &m.GetDataSourceByNameQuery{Name: name, OrgId: c.OrgId}
if err := bus.Dispatch(getCmd); err != nil {
return ApiError(500, "Failed to delete datasource", err)
}
if getCmd.Result.ReadOnly {
return ApiError(403, "Cannot delete read-only data source", nil)
}
cmd := &m.DeleteDataSourceByNameCommand{Name: name, OrgId: c.OrgId}
err := bus.Dispatch(cmd)
if err != nil {
c.JsonApiErr(500, "Failed to delete datasource", err)
return
return ApiError(500, "Failed to delete datasource", err)
}
c.JsonOK("Data source deleted")
return ApiSuccess("Data source deleted")
}
func AddDataSource(c *middleware.Context, cmd m.AddDataSourceCommand) {
func AddDataSource(c *middleware.Context, cmd m.AddDataSourceCommand) Response {
cmd.OrgId = c.OrgId
if err := bus.Dispatch(&cmd); err != nil {
if err == m.ErrDataSourceNameExists {
c.JsonApiErr(409, err.Error(), err)
return
return ApiError(409, err.Error(), err)
}
c.JsonApiErr(500, "Failed to add datasource", err)
return
return ApiError(500, "Failed to add datasource", err)
}
ds := convertModelToDtos(cmd.Result)
c.JSON(200, util.DynMap{
return Json(200, util.DynMap{
"message": "Datasource added",
"id": cmd.Result.Id,
"name": cmd.Result.Name,
......@@ -160,11 +172,14 @@ func fillWithSecureJsonData(cmd *m.UpdateDataSourceCommand) error {
}
ds, err := getRawDataSourceById(cmd.Id, cmd.OrgId)
if err != nil {
return err
}
if ds.ReadOnly {
return m.ErrDatasourceIsReadOnly
}
secureJsonData := ds.SecureJsonData.Decrypt()
for k, v := range secureJsonData {
......@@ -201,6 +216,7 @@ func GetDataSourceByName(c *middleware.Context) Response {
}
dtos := convertModelToDtos(query.Result)
dtos.ReadOnly = true
return Json(200, &dtos)
}
......@@ -242,6 +258,7 @@ func convertModelToDtos(ds *m.DataSource) dtos.DataSource {
JsonData: ds.JsonData,
SecureJsonFields: map[string]bool{},
Version: ds.Version,
ReadOnly: ds.ReadOnly,
}
for k, v := range ds.SecureJsonData {
......
......@@ -26,6 +26,7 @@ type DataSource struct {
JsonData *simplejson.Json `json:"jsonData,omitempty"`
SecureJsonFields map[string]bool `json:"secureJsonFields"`
Version int `json:"version"`
ReadOnly bool `json:"readOnly"`
}
type DataSourceListItemDTO struct {
......@@ -42,6 +43,7 @@ type DataSourceListItemDTO struct {
BasicAuth bool `json:"basicAuth"`
IsDefault bool `json:"isDefault"`
JsonData *simplejson.Json `json:"jsonData,omitempty"`
ReadOnly bool `json:"readOnly"`
}
type DataSourceList []DataSourceListItemDTO
......
......@@ -146,12 +146,13 @@ func (hs *HttpServer) newMacaron() *macaron.Macaron {
m := macaron.New()
m.Use(middleware.Logger())
m.Use(middleware.Recovery())
if setting.EnableGzip {
m.Use(middleware.Gziper())
}
m.Use(middleware.Recovery())
for _, route := range plugins.StaticRoutes {
pluginRoute := path.Join("/public/plugins/", route.PluginId)
hs.log.Debug("Plugins: Adding route", "route", pluginRoute, "dir", route.Directory)
......@@ -193,7 +194,8 @@ func (hs *HttpServer) metricsEndpoint(ctx *macaron.Context) {
}
func (hs *HttpServer) healthHandler(ctx *macaron.Context) {
if ctx.Req.Method != "GET" || ctx.Req.URL.Path != "/api/health" {
notHeadOrGet := ctx.Req.Method != http.MethodGet && ctx.Req.Method != http.MethodHead
if notHeadOrGet || ctx.Req.URL.Path != "/api/health" {
return
}
......
......@@ -81,8 +81,6 @@ func (rr *routeRegister) Register(router Router) *macaron.Router {
}
func (rr *routeRegister) route(pattern, method string, handlers ...macaron.Handler) {
//inject tracing
h := make([]macaron.Handler, 0)
for _, fn := range rr.namedMiddleware {
h = append(h, fn(pattern))
......
......@@ -15,7 +15,6 @@ func Search(c *middleware.Context) {
starred := c.Query("starred")
limit := c.QueryInt("limit")
dashboardType := c.Query("type")
folderId := c.QueryInt64("folderId")
if limit == 0 {
limit = 1000
......@@ -29,6 +28,14 @@ func Search(c *middleware.Context) {
}
}
folderIds := make([]int64, 0)
for _, id := range c.QueryStrings("folderIds") {
folderId, err := strconv.ParseInt(id, 10, 64)
if err == nil {
folderIds = append(folderIds, folderId)
}
}
searchQuery := search.Query{
Title: query,
Tags: tags,
......@@ -38,7 +45,7 @@ func Search(c *middleware.Context) {
OrgId: c.OrgId,
DashboardIds: dbids,
Type: dashboardType,
FolderId: folderId,
FolderIds: folderIds,
}
err := bus.Dispatch(&searchQuery)
......
......@@ -16,7 +16,6 @@ import (
"github.com/grafana/grafana/pkg/metrics"
"github.com/grafana/grafana/pkg/models"
"github.com/grafana/grafana/pkg/services/sqlstore"
"github.com/grafana/grafana/pkg/setting"
_ "github.com/grafana/grafana/pkg/services/alerting/conditions"
......@@ -88,11 +87,6 @@ func main() {
server.Start()
}
func initSql() {
sqlstore.NewEngine()
sqlstore.EnsureAdminUser()
}
func listenToSystemSignals(server models.GrafanaServer) {
signalChan := make(chan os.Signal, 1)
ignoreChan := make(chan os.Signal, 1)
......
......@@ -9,6 +9,9 @@ import (
"strconv"
"time"
"github.com/grafana/grafana/pkg/cmd/grafana-cli/logger"
"github.com/grafana/grafana/pkg/services/provisioning"
"golang.org/x/sync/errgroup"
"github.com/grafana/grafana/pkg/api"
......@@ -21,7 +24,9 @@ import (
"github.com/grafana/grafana/pkg/services/cleanup"
"github.com/grafana/grafana/pkg/services/notifications"
"github.com/grafana/grafana/pkg/services/search"
"github.com/grafana/grafana/pkg/services/sqlstore"
"github.com/grafana/grafana/pkg/setting"
"github.com/grafana/grafana/pkg/social"
"github.com/grafana/grafana/pkg/tracing"
)
......@@ -54,12 +59,19 @@ func (g *GrafanaServerImpl) Start() {
g.writePIDFile()
initSql()
metrics.Init(setting.Cfg)
search.Init()
login.Init()
social.NewOAuthService()
plugins.Init()
if err := provisioning.StartUp(setting.DatasourcesPath); err != nil {
logger.Error("Failed to provision Grafana from config", "error", err)
g.Shutdown(1, "Startup failed")
return
}
closer, err := tracing.Init(setting.Cfg)
if err != nil {
g.log.Error("Tracing settings is not valid", "error", err)
......@@ -87,6 +99,11 @@ func (g *GrafanaServerImpl) Start() {
g.startHttpServer()
}
func initSql() {
sqlstore.NewEngine()
sqlstore.EnsureAdminUser()
}
func (g *GrafanaServerImpl) initLogging() {
err := setting.NewConfigContext(&setting.CommandLineArgs{
Config: *configFile,
......
......@@ -225,7 +225,7 @@ func init() {
M_DataSource_ProxyReq_Timer = prometheus.NewSummary(prometheus.SummaryOpts{
Name: "api_dataproxy_request_all_milliseconds",
Help: "summary for dashboard search duration",
Help: "summary for dataproxy request duration",
Namespace: exporterName,
})
......
......@@ -363,6 +363,7 @@ type scenarioContext struct {
respJson map[string]interface{}
handlerFunc handlerFunc
defaultHandler macaron.Handler
url string
req *http.Request
}
......
......@@ -123,23 +123,22 @@ func Recovery() macaron.Handler {
c.Data["ErrorMsg"] = string(stack)
}
c.HTML(500, "500")
// // Lookup the current responsewriter
// val := c.GetVal(inject.InterfaceOf((*http.ResponseWriter)(nil)))
// res := val.Interface().(http.ResponseWriter)
//
// // respond with panic message while in development mode
// var body []byte
// if setting.Env == setting.DEV {
// res.Header().Set("Content-Type", "text/html")
// body = []byte(fmt.Sprintf(panicHtml, err, err, stack))
// }
//
// res.WriteHeader(http.StatusInternalServerError)
// if nil != body {
// res.Write(body)
// }
ctx, ok := c.Data["ctx"].(*Context)
if ok && ctx.IsApiRequest() {
resp := make(map[string]interface{})
resp["message"] = "Internal Server Error - Check the Grafana server logs for the detailed error message."
if c.Data["ErrorMsg"] != nil {
resp["error"] = fmt.Sprintf("%v - %v", c.Data["Title"], c.Data["ErrorMsg"])
} else {
resp["error"] = c.Data["Title"]
}
c.JSON(500, resp)
} else {
c.HTML(500, "500")
}
}
}()
......
package middleware
import (
"path/filepath"
"testing"
"github.com/go-macaron/session"
"github.com/grafana/grafana/pkg/bus"
. "github.com/smartystreets/goconvey/convey"
"gopkg.in/macaron.v1"
)
func TestRecoveryMiddleware(t *testing.T) {
Convey("Given an api route that panics", t, func() {
apiUrl := "/api/whatever"
recoveryScenario("recovery middleware should return json", apiUrl, func(sc *scenarioContext) {
sc.handlerFunc = PanicHandler
sc.fakeReq("GET", apiUrl).exec()
sc.req.Header.Add("content-type", "application/json")
So(sc.resp.Code, ShouldEqual, 500)
So(sc.respJson["message"], ShouldStartWith, "Internal Server Error - Check the Grafana server logs for the detailed error message.")
So(sc.respJson["error"], ShouldStartWith, "Server Error")
})
})
Convey("Given a non-api route that panics", t, func() {
apiUrl := "/whatever"
recoveryScenario("recovery middleware should return html", apiUrl, func(sc *scenarioContext) {
sc.handlerFunc = PanicHandler
sc.fakeReq("GET", apiUrl).exec()
So(sc.resp.Code, ShouldEqual, 500)
So(sc.resp.Header().Get("content-type"), ShouldEqual, "text/html; charset=UTF-8")
So(sc.resp.Body.String(), ShouldContainSubstring, "<title>Grafana - Error</title>")
})
})
}
func PanicHandler(c *Context) {
panic("Handler has panicked")
}
func recoveryScenario(desc string, url string, fn scenarioFunc) {
Convey(desc, func() {
defer bus.ClearBusHandlers()
sc := &scenarioContext{
url: url,
}
viewsPath, _ := filepath.Abs("../../public/views")
sc.m = macaron.New()
sc.m.Use(Recovery())
sc.m.Use(macaron.Renderer(macaron.RenderOptions{
Directory: viewsPath,
Delims: macaron.Delims{Left: "[[", Right: "]]"},
}))
sc.m.Use(GetContextHandler())
// mock out gc goroutine
startSessionGC = func() {}
sc.m.Use(Sessioner(&session.Options{}))
sc.m.Use(OrgRedirect())
sc.m.Use(AddDefaultResponseHeaders())
sc.defaultHandler = func(c *Context) {
sc.context = c
if sc.handlerFunc != nil {
sc.handlerFunc(sc.context)
}
}
sc.m.Get(url, sc.defaultHandler)
fn(sc)
})
}
......@@ -69,3 +69,10 @@ type GetDashboardVersionsQuery struct {
Result []*DashboardVersionDTO
}
//
// Commands
//
type DeleteExpiredVersionsCommand struct {
}
......@@ -27,6 +27,7 @@ var (
ErrDataSourceNotFound = errors.New("Data source not found")
ErrDataSourceNameExists = errors.New("Data source with same name already exists")
ErrDataSourceUpdatingOldVersion = errors.New("Trying to update old version of datasource")
ErrDatasourceIsReadOnly = errors.New("Data source is readonly. Can only be updated from configuration.")
)
type DsAccess string
......@@ -50,6 +51,7 @@ type DataSource struct {
IsDefault bool
JsonData *simplejson.Json
SecureJsonData securejsondata.SecureJsonData
ReadOnly bool
Created time.Time
Updated time.Time
......@@ -109,6 +111,7 @@ type AddDataSourceCommand struct {
IsDefault bool `json:"isDefault"`
JsonData *simplejson.Json `json:"jsonData"`
SecureJsonData map[string]string `json:"secureJsonData"`
ReadOnly bool `json:"readOnly"`
OrgId int64 `json:"-"`
......@@ -132,6 +135,7 @@ type UpdateDataSourceCommand struct {
JsonData *simplejson.Json `json:"jsonData"`
SecureJsonData map[string]string `json:"secureJsonData"`
Version int `json:"version"`
ReadOnly bool `json:"readOnly"`
OrgId int64 `json:"-"`
Id int64 `json:"-"`
......@@ -142,11 +146,15 @@ type UpdateDataSourceCommand struct {
type DeleteDataSourceByIdCommand struct {
Id int64
OrgId int64
DeletedDatasourcesCount int64
}
type DeleteDataSourceByNameCommand struct {
Name string
OrgId int64
DeletedDatasourcesCount int64
}
// ---------------------
......@@ -157,6 +165,10 @@ type GetDataSourcesQuery struct {
Result []*DataSource
}
type GetAllDataSourcesQuery struct {
Result []*DataSource
}
type GetDataSourceByIdQuery struct {
Id int64
OrgId int64
......
package notifiers
import (
"encoding/json"
"github.com/grafana/grafana/pkg/bus"
"github.com/grafana/grafana/pkg/log"
m "github.com/grafana/grafana/pkg/models"
"github.com/grafana/grafana/pkg/services/alerting"
)
func init() {
alerting.RegisterNotifier(&alerting.NotifierPlugin{
Type: "teams",
Name: "Microsoft Teams",
Description: "Sends notifications using Incomming Webhook connector to Microsoft Teams",
Factory: NewTeamsNotifier,
OptionsTemplate: `
<h3 class="page-heading">Teams settings</h3>
<div class="gf-form max-width-30">
<span class="gf-form-label width-6">Url</span>
<input type="text" required class="gf-form-input max-width-30" ng-model="ctrl.model.settings.url" placeholder="Teams incoming webhook url"></input>
</div>
`,
})
}
func NewTeamsNotifier(model *m.AlertNotification) (alerting.Notifier, error) {
url := model.Settings.Get("url").MustString()
if url == "" {
return nil, alerting.ValidationError{Reason: "Could not find url property in settings"}
}
return &TeamsNotifier{
NotifierBase: NewNotifierBase(model.Id, model.IsDefault, model.Name, model.Type, model.Settings),
Url: url,
log: log.New("alerting.notifier.teams"),
}, nil
}
type TeamsNotifier struct {
NotifierBase
Url string
Recipient string
Mention string
log log.Logger
}
func (this *TeamsNotifier) Notify(evalContext *alerting.EvalContext) error {
this.log.Info("Executing teams notification", "ruleId", evalContext.Rule.Id, "notification", this.Name)
ruleUrl, err := evalContext.GetRuleUrl()
if err != nil {
this.log.Error("Failed get rule link", "error", err)
return err
}
fields := make([]map[string]interface{}, 0)
fieldLimitCount := 4
for index, evt := range evalContext.EvalMatches {
fields = append(fields, map[string]interface{}{
"name": evt.Metric,
"value": evt.Value,
})
if index > fieldLimitCount {
break
}
}
if evalContext.Error != nil {
fields = append(fields, map[string]interface{}{
"name": "Error message",
"value": evalContext.Error.Error(),
})
}
message := this.Mention
if evalContext.Rule.State != m.AlertStateOK { // don't add message when going back to alert state OK
message += " " + evalContext.Rule.Message
}
body := map[string]interface{}{
"@type": "MessageCard",
"@context": "http://schema.org/extensions",
"summary": message,
"title": evalContext.GetNotificationTitle(),
"themeColor": evalContext.GetStateModel().Color,
"sections": []map[string]interface{}{
{
"title": "Details",
"facts": fields,
"images": []map[string]interface{}{
{
"image": evalContext.ImagePublicUrl,
},
},
"text": message,
"potentialAction": []map[string]interface{}{
{
"@context": "http://schema.org",
"@type": "ViewAction",
"name": "View Rule",
"target": []string{
ruleUrl,
},
},
},
},
},
}
data, _ := json.Marshal(&body)
cmd := &m.SendWebhookSync{Url: this.Url, Body: string(data)}
if err := bus.DispatchCtx(evalContext.Ctx, cmd); err != nil {
this.log.Error("Failed to send teams notification", "error", err, "webhook", this.Name)
return err
}
return nil
}
package notifiers
import (
"testing"
"github.com/grafana/grafana/pkg/components/simplejson"
m "github.com/grafana/grafana/pkg/models"
. "github.com/smartystreets/goconvey/convey"
)
func TestTeamsNotifier(t *testing.T) {
Convey("Teams notifier tests", t, func() {
Convey("Parsing alert notification from settings", func() {
Convey("empty settings should return error", func() {
json := `{ }`
settingsJSON, _ := simplejson.NewJson([]byte(json))
model := &m.AlertNotification{
Name: "ops",
Type: "teams",
Settings: settingsJSON,
}
_, err := NewTeamsNotifier(model)
So(err, ShouldNotBeNil)
})
Convey("from settings", func() {
json := `
{
"url": "http://google.com"
}`
settingsJSON, _ := simplejson.NewJson([]byte(json))
model := &m.AlertNotification{
Name: "ops",
Type: "teams",
Settings: settingsJSON,
}
not, err := NewTeamsNotifier(model)
teamsNotifier := not.(*TeamsNotifier)
So(err, ShouldBeNil)
So(teamsNotifier.Name, ShouldEqual, "ops")
So(teamsNotifier.Type, ShouldEqual, "teams")
So(teamsNotifier.Url, ShouldEqual, "http://google.com")
})
Convey("from settings with Recipient and Mention", func() {
json := `
{
"url": "http://google.com"
}`
settingsJSON, _ := simplejson.NewJson([]byte(json))
model := &m.AlertNotification{
Name: "ops",
Type: "teams",
Settings: settingsJSON,
}
not, err := NewTeamsNotifier(model)
teamsNotifier := not.(*TeamsNotifier)
So(err, ShouldBeNil)
So(teamsNotifier.Name, ShouldEqual, "ops")
So(teamsNotifier.Type, ShouldEqual, "teams")
So(teamsNotifier.Url, ShouldEqual, "http://google.com")
})
})
})
}
......@@ -17,6 +17,7 @@ type ItemQuery struct {
DashboardId int64 `json:"dashboardId"`
PanelId int64 `json:"panelId"`
Tags []string `json:"tags"`
Type string `json:"type"`
Limit int64 `json:"limit"`
}
......
......@@ -39,12 +39,13 @@ func (service *CleanUpService) Run(ctx context.Context) error {
func (service *CleanUpService) start(ctx context.Context) error {
service.cleanUpTmpFiles()
ticker := time.NewTicker(time.Hour * 1)
ticker := time.NewTicker(time.Minute * 10)
for {
select {
case <-ticker.C:
service.cleanUpTmpFiles()
service.deleteExpiredSnapshots()
service.deleteExpiredDashboardVersions()
case <-ctx.Done():
return ctx.Err()
}
......@@ -83,3 +84,7 @@ func (service *CleanUpService) cleanUpTmpFiles() {
func (service *CleanUpService) deleteExpiredSnapshots() {
bus.Dispatch(&m.DeleteExpiredSnapshotsCommand{})
}
func (service *CleanUpService) deleteExpiredDashboardVersions() {
bus.Dispatch(&m.DeleteExpiredVersionsCommand{})
}
package datasources
import (
"errors"
"io/ioutil"
"path/filepath"
"strings"
"github.com/grafana/grafana/pkg/bus"
"github.com/grafana/grafana/pkg/log"
"github.com/grafana/grafana/pkg/models"
yaml "gopkg.in/yaml.v2"
)
var (
ErrInvalidConfigToManyDefault = errors.New("datasource.yaml config is invalid. Only one datasource can be marked as default")
)
func Provision(configDirectory string) error {
dc := newDatasourceProvisioner(log.New("provisioning.datasources"))
return dc.applyChanges(configDirectory)
}
type DatasourceProvisioner struct {
log log.Logger
cfgProvider configReader
}
func newDatasourceProvisioner(log log.Logger) DatasourceProvisioner {
return DatasourceProvisioner{
log: log,
cfgProvider: configReader{},
}
}
func (dc *DatasourceProvisioner) apply(cfg *DatasourcesAsConfig) error {
if err := dc.deleteDatasources(cfg.DeleteDatasources); err != nil {
return err
}
for _, ds := range cfg.Datasources {
cmd := &models.GetDataSourceByNameQuery{OrgId: ds.OrgId, Name: ds.Name}
err := bus.Dispatch(cmd)
if err != nil && err != models.ErrDataSourceNotFound {
return err
}
if err == models.ErrDataSourceNotFound {
dc.log.Info("inserting datasource from configuration ", "name", ds.Name)
insertCmd := createInsertCommand(ds)
if err := bus.Dispatch(insertCmd); err != nil {
return err
}
} else {
dc.log.Debug("updating datasource from configuration", "name", ds.Name)
updateCmd := createUpdateCommand(ds, cmd.Result.Id)
if err := bus.Dispatch(updateCmd); err != nil {
return err
}
}
}
return nil
}
func (dc *DatasourceProvisioner) applyChanges(configPath string) error {
configs, err := dc.cfgProvider.readConfig(configPath)
if err != nil {
return err
}
for _, cfg := range configs {
if err := dc.apply(cfg); err != nil {
return err
}
}
return nil
}
func (dc *DatasourceProvisioner) deleteDatasources(dsToDelete []*DeleteDatasourceConfig) error {
for _, ds := range dsToDelete {
cmd := &models.DeleteDataSourceByNameCommand{OrgId: ds.OrgId, Name: ds.Name}
if err := bus.Dispatch(cmd); err != nil {
return err
}
if cmd.DeletedDatasourcesCount > 0 {
dc.log.Info("deleted datasource based on configuration", "name", ds.Name)
}
}
return nil
}
type configReader struct{}
func (configReader) readConfig(path string) ([]*DatasourcesAsConfig, error) {
files, err := ioutil.ReadDir(path)
if err != nil {
return nil, err
}
var datasources []*DatasourcesAsConfig
for _, file := range files {
if strings.HasSuffix(file.Name(), ".yaml") || strings.HasSuffix(file.Name(), ".yml") {
filename, _ := filepath.Abs(filepath.Join(path, file.Name()))
yamlFile, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
var datasource *DatasourcesAsConfig
err = yaml.Unmarshal(yamlFile, &datasource)
if err != nil {
return nil, err
}
datasources = append(datasources, datasource)
}
}
defaultCount := 0
for _, cfg := range datasources {
for _, ds := range cfg.Datasources {
if ds.OrgId == 0 {
ds.OrgId = 1
}
if ds.IsDefault {
defaultCount++
if defaultCount > 1 {
return nil, ErrInvalidConfigToManyDefault
}
}
}
for _, ds := range cfg.DeleteDatasources {
if ds.OrgId == 0 {
ds.OrgId = 1
}
}
}
return datasources, nil
}
package datasources
import (
"testing"
"github.com/grafana/grafana/pkg/bus"
"github.com/grafana/grafana/pkg/log"
"github.com/grafana/grafana/pkg/models"
. "github.com/smartystreets/goconvey/convey"
)
var (
logger log.Logger = log.New("fake.logger")
oneDatasourcesConfig string = ""
twoDatasourcesConfig string = "./test-configs/two-datasources"
twoDatasourcesConfigPurgeOthers string = "./test-configs/insert-two-delete-two"
doubleDatasourcesConfig string = "./test-configs/double-default"
allProperties string = "./test-configs/all-properties"
brokenYaml string = "./test-configs/broken-yaml"
fakeRepo *fakeRepository
)
func TestDatasourceAsConfig(t *testing.T) {
Convey("Testing datasource as configuration", t, func() {
fakeRepo = &fakeRepository{}
bus.ClearBusHandlers()
bus.AddHandler("test", mockDelete)
bus.AddHandler("test", mockInsert)
bus.AddHandler("test", mockUpdate)
bus.AddHandler("test", mockGet)
bus.AddHandler("test", mockGetAll)
Convey("One configured datasource", func() {
Convey("no datasource in database", func() {
dc := newDatasourceProvisioner(logger)
err := dc.applyChanges(twoDatasourcesConfig)
if err != nil {
t.Fatalf("applyChanges return an error %v", err)
}
So(len(fakeRepo.deleted), ShouldEqual, 0)
So(len(fakeRepo.inserted), ShouldEqual, 2)
So(len(fakeRepo.updated), ShouldEqual, 0)
})
Convey("One datasource in database with same name", func() {
fakeRepo.loadAll = []*models.DataSource{
{Name: "Graphite", OrgId: 1, Id: 1},
}
Convey("should update one datasource", func() {
dc := newDatasourceProvisioner(logger)
err := dc.applyChanges(twoDatasourcesConfig)
if err != nil {
t.Fatalf("applyChanges return an error %v", err)
}
So(len(fakeRepo.deleted), ShouldEqual, 0)
So(len(fakeRepo.inserted), ShouldEqual, 1)
So(len(fakeRepo.updated), ShouldEqual, 1)
})
})
Convey("Two datasources with is_default", func() {
dc := newDatasourceProvisioner(logger)
err := dc.applyChanges(doubleDatasourcesConfig)
Convey("should raise error", func() {
So(err, ShouldEqual, ErrInvalidConfigToManyDefault)
})
})
})
Convey("Two configured datasource and purge others ", func() {
Convey("two other datasources in database", func() {
fakeRepo.loadAll = []*models.DataSource{
{Name: "old-graphite", OrgId: 1, Id: 1},
{Name: "old-graphite2", OrgId: 1, Id: 2},
}
Convey("should have two new datasources", func() {
dc := newDatasourceProvisioner(logger)
err := dc.applyChanges(twoDatasourcesConfigPurgeOthers)
if err != nil {
t.Fatalf("applyChanges return an error %v", err)
}
So(len(fakeRepo.deleted), ShouldEqual, 2)
So(len(fakeRepo.inserted), ShouldEqual, 2)
So(len(fakeRepo.updated), ShouldEqual, 0)
})
})
})
Convey("Two configured datasource and purge others = false", func() {
Convey("two other datasources in database", func() {
fakeRepo.loadAll = []*models.DataSource{
{Name: "Graphite", OrgId: 1, Id: 1},
{Name: "old-graphite2", OrgId: 1, Id: 2},
}
Convey("should have two new datasources", func() {
dc := newDatasourceProvisioner(logger)
err := dc.applyChanges(twoDatasourcesConfig)
if err != nil {
t.Fatalf("applyChanges return an error %v", err)
}
So(len(fakeRepo.deleted), ShouldEqual, 0)
So(len(fakeRepo.inserted), ShouldEqual, 1)
So(len(fakeRepo.updated), ShouldEqual, 1)
})
})
})
Convey("broken yaml should return error", func() {
_, err := configReader{}.readConfig(brokenYaml)
So(err, ShouldNotBeNil)
})
Convey("can read all properties", func() {
cfgProvider := configReader{}
cfg, err := cfgProvider.readConfig(allProperties)
if err != nil {
t.Fatalf("readConfig return an error %v", err)
}
So(len(cfg), ShouldEqual, 2)
dsCfg := cfg[0]
ds := dsCfg.Datasources[0]
So(ds.Name, ShouldEqual, "name")
So(ds.Type, ShouldEqual, "type")
So(ds.Access, ShouldEqual, models.DS_ACCESS_PROXY)
So(ds.OrgId, ShouldEqual, 2)
So(ds.Url, ShouldEqual, "url")
So(ds.User, ShouldEqual, "user")
So(ds.Password, ShouldEqual, "password")
So(ds.Database, ShouldEqual, "database")
So(ds.BasicAuth, ShouldBeTrue)
So(ds.BasicAuthUser, ShouldEqual, "basic_auth_user")
So(ds.BasicAuthPassword, ShouldEqual, "basic_auth_password")
So(ds.WithCredentials, ShouldBeTrue)
So(ds.IsDefault, ShouldBeTrue)
So(ds.Editable, ShouldBeTrue)
So(len(ds.JsonData), ShouldBeGreaterThan, 2)
So(ds.JsonData["graphiteVersion"], ShouldEqual, "1.1")
So(ds.JsonData["tlsAuth"], ShouldEqual, true)
So(ds.JsonData["tlsAuthWithCACert"], ShouldEqual, true)
So(len(ds.SecureJsonData), ShouldBeGreaterThan, 2)
So(ds.SecureJsonData["tlsCACert"], ShouldEqual, "MjNOcW9RdkbUDHZmpco2HCYzVq9dE+i6Yi+gmUJotq5CDA==")
So(ds.SecureJsonData["tlsClientCert"], ShouldEqual, "ckN0dGlyMXN503YNfjTcf9CV+GGQneN+xmAclQ==")
So(ds.SecureJsonData["tlsClientKey"], ShouldEqual, "ZkN4aG1aNkja/gKAB1wlnKFIsy2SRDq4slrM0A==")
dstwo := cfg[1].Datasources[0]
So(dstwo.Name, ShouldEqual, "name2")
})
})
}
type fakeRepository struct {
inserted []*models.AddDataSourceCommand
deleted []*models.DeleteDataSourceByNameCommand
updated []*models.UpdateDataSourceCommand
loadAll []*models.DataSource
}
func mockDelete(cmd *models.DeleteDataSourceByNameCommand) error {
fakeRepo.deleted = append(fakeRepo.deleted, cmd)
return nil
}
func mockUpdate(cmd *models.UpdateDataSourceCommand) error {
fakeRepo.updated = append(fakeRepo.updated, cmd)
return nil
}
func mockInsert(cmd *models.AddDataSourceCommand) error {
fakeRepo.inserted = append(fakeRepo.inserted, cmd)
return nil
}
func mockGetAll(cmd *models.GetAllDataSourcesQuery) error {
cmd.Result = fakeRepo.loadAll
return nil
}
func mockGet(cmd *models.GetDataSourceByNameQuery) error {
for _, v := range fakeRepo.loadAll {
if cmd.Name == v.Name && cmd.OrgId == v.OrgId {
cmd.Result = v
return nil
}
}
return models.ErrDataSourceNotFound
}
datasources:
- name: name
type: type
access: proxy
org_id: 2
url: url
password: password
user: user
database: database
basic_auth: true
basic_auth_user: basic_auth_user
basic_auth_password: basic_auth_password
with_credentials: true
is_default: true
json_data:
graphiteVersion: "1.1"
tlsAuth: true
tlsAuthWithCACert: true
secure_json_data:
tlsCACert: "MjNOcW9RdkbUDHZmpco2HCYzVq9dE+i6Yi+gmUJotq5CDA=="
tlsClientCert: "ckN0dGlyMXN503YNfjTcf9CV+GGQneN+xmAclQ=="
tlsClientKey: "ZkN4aG1aNkja/gKAB1wlnKFIsy2SRDq4slrM0A=="
editable: true
purge_other_datasources: true
datasources:
- name: name2
type: type2
access: proxy
org_id: 2
url: url2
#sfxzgnsxzcvnbzcvn
cvbn
cvbn
c
vbn
cvbncvbn
\ No newline at end of file
datasources:
- name: Graphite
type: graphite
access: proxy
url: http://localhost:8080
is_default: true
datasources:
- name: Graphite
type: graphite
access: proxy
url: http://localhost:8080
is_default: true
datasources:
- name: Prometheus
type: prometheus
access: proxy
url: http://localhost:9090
delete_datasources:
- name: old-graphite
datasources:
- name: Graphite
type: graphite
access: proxy
url: http://localhost:8080
delete_datasources:
- name: old-graphite3
datasources:
- name: Graphite
type: graphite
access: proxy
url: http://localhost:8080
- name: Prometheus
type: prometheus
access: proxy
url: http://localhost:9090