Commit 29f58475 by Carl Bergquist, committed by GitHub

Merge pull request #8956 from saada/azure-external-image-store

Azure Blob Storage external image uploader
parents 8a5fe5b0 af15e3c0
@@ -473,7 +473,7 @@ sampler_param = 1
#################################### External Image Storage ##############
[external_image_storage]
# You can choose between (s3, webdav, gcs, azure_blob)
provider =
[external_image_storage.s3]
@@ -493,4 +493,9 @@ public_url =
[external_image_storage.gcs]
key_file =
bucket =
path =
\ No newline at end of file
[external_image_storage.azure_blob]
account_name =
account_key =
container_name =
@@ -417,7 +417,7 @@ log_queries =
#################################### External image storage ##########################
[external_image_storage]
# Used for uploading images to public servers so they can be included in slack/email messages.
# you can choose between (s3, webdav, gcs, azure_blob)
;provider =
[external_image_storage.s3]
@@ -436,4 +436,9 @@ log_queries =
[external_image_storage.gcs]
;key_file =
;bucket =
;path =
\ No newline at end of file
[external_image_storage.azure_blob]
;account_name =
;account_key =
;container_name =
@@ -150,7 +150,7 @@ Prometheus Alertmanager | `prometheus-alertmanager` | no
# Enable images in notifications {#external-image-store}
Grafana can render the panel associated with an alert rule and include that image in the notification. Most notification channels require that this image be publicly accessible (Slack and PagerDuty, for example). To include images in alert notifications, Grafana can upload the image to an image store. It currently supports Amazon S3, WebDAV, and Azure Blob Storage for this, so to set it up you need to configure the [external image uploader](/installation/configuration/#external-image-storage) in your grafana-server ini config file.
Currently, only the Email channel attaches images when no external image store is specified. To include images in alert notifications for other channels, you need to set up an external image store.
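For example, to store images in Azure Blob Storage, the relevant ini sections would look roughly like this (the account and container values below are placeholders):

```ini
[external_image_storage]
provider = azure_blob

[external_image_storage.azure_blob]
account_name = mystorageaccount
account_key = <base64-encoded account key>
container_name = grafana-images
```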
...
@@ -55,7 +55,7 @@ of another alert in your conditions, and `Time Of Day`.
Alerting would not be very useful if there were no way to send notifications when rules trigger and change state. You
can set up notifications of different types. We currently have `Slack`, `PagerDuty`, `Email`, and `Webhook`, with more in the
pipeline to be added during the beta period. Notifications can then be added to your alert rules.
If you have configured an external image store in the grafana.ini config file (s3, webdav, and azure_blob options are available),
you can get very rich notifications, with an image of the graph and the metric
values all included in the notification.
...
@@ -731,7 +731,7 @@ Time to live for snapshots.
These options control how images should be made public so they can be shared on services like Slack.
### provider
You can choose between (s3, webdav, gcs, azure_blob). If left empty, Grafana will ignore the upload action.
## [external_image_storage.s3]
@@ -786,6 +786,17 @@ Bucket Name on Google Cloud Storage.
### path
Optional extra path inside the bucket.
## [external_image_storage.azure_blob]
### account_name
Storage account name.
### account_key
Storage account key.
### container_name
Name of the container in which to store the uploaded images as blobs with random names. The container must be created beforehand, and only public containers are supported.
## [alerting]
### enabled
...
package imguploader
import (
"bytes"
"context"
"crypto/hmac"
"crypto/sha256"
"encoding/base64"
"encoding/xml"
"fmt"
"io"
"io/ioutil"
"mime"
"net/http"
"net/url"
"os"
"path"
"sort"
"strconv"
"strings"
"time"
"github.com/grafana/grafana/pkg/log"
"github.com/grafana/grafana/pkg/util"
)
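// AzureBlobUploader uploads rendered panel images as block blobs to an Azure
// Blob Storage container, authenticating requests with the storage account's
// shared key.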
type AzureBlobUploader struct {
account_name string
account_key string
container_name string
log log.Logger
}
func NewAzureBlobUploader(account_name string, account_key string, container_name string) *AzureBlobUploader {
return &AzureBlobUploader{
account_name: account_name,
account_key: account_key,
container_name: container_name,
log: log.New("azureBlobUploader"),
}
}
// Upload receives the path of an image on disk and returns the public Azure blob URL.
func (az *AzureBlobUploader) Upload(ctx context.Context, imageDiskPath string) (string, error) {
// setup client
blob := NewStorageClient(az.account_name, az.account_key)
	file, err := os.Open(imageDiskPath)
	if err != nil {
		return "", err
	}
	defer file.Close()
randomFileName := util.GetRandomString(30) + ".png"
// upload image
	az.log.Debug("Uploading image to azure_blob", "container_name", az.container_name, "blob_name", randomFileName)
	resp, err := blob.FileUpload(az.container_name, randomFileName, file)
	if err != nil {
		return "", err
	}
	// release the response body on every path, not only the error branch
	defer resp.Body.Close()
	if resp.StatusCode >= 400 && resp.StatusCode < 600 {
		body, _ := ioutil.ReadAll(io.LimitReader(resp.Body, 1<<20))
		aerr := &Error{
			Code:   resp.StatusCode,
			Status: resp.Status,
			Body:   body,
			Header: resp.Header,
		}
		aerr.parseXML()
		return "", aerr
	}
url := fmt.Sprintf("https://%s.blob.core.windows.net/%s/%s", az.account_name, az.container_name, randomFileName)
return url, nil
}
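// Hypothetical usage sketch (not part of the changeset; the account, key, and
// container names are placeholders, and in Grafana this wiring is done by
// NewImageUploader from the [external_image_storage.azure_blob] section):
//
//	uploader := NewAzureBlobUploader("myaccount", "bAsE64AccountKey==", "grafana-images")
//	url, err := uploader.Upload(context.Background(), "/tmp/panel.png")
//	// url => https://myaccount.blob.core.windows.net/grafana-images/<random>.png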
// --- AZURE LIBRARY
type Blobs struct {
XMLName xml.Name `xml:"EnumerationResults"`
Items []Blob `xml:"Blobs>Blob"`
}
type Blob struct {
Name string `xml:"Name"`
Property Property `xml:"Properties"`
}
type Property struct {
LastModified string `xml:"Last-Modified"`
Etag string `xml:"Etag"`
ContentLength int `xml:"Content-Length"`
ContentType string `xml:"Content-Type"`
BlobType string `xml:"BlobType"`
LeaseStatus string `xml:"LeaseStatus"`
}
type Error struct {
Code int
Status string
Body []byte
Header http.Header
AzureCode string
}
func (e *Error) Error() string {
return fmt.Sprintf("status %d: %s", e.Code, e.Body)
}
func (e *Error) parseXML() {
var xe xmlError
_ = xml.NewDecoder(bytes.NewReader(e.Body)).Decode(&xe)
e.AzureCode = xe.Code
}
type xmlError struct {
XMLName xml.Name `xml:"Error"`
Code string
Message string
}
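// ms_date_layout is the RFC 1123 GMT timestamp format that Azure requires in
// the x-ms-date header; version pins the Storage REST API version in use.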
const ms_date_layout = "Mon, 02 Jan 2006 15:04:05 GMT"
const version = "2017-04-17"
var client = &http.Client{}
type StorageClient struct {
Auth *Auth
Transport http.RoundTripper
}
func (c *StorageClient) transport() http.RoundTripper {
if c.Transport != nil {
return c.Transport
}
return http.DefaultTransport
}
func NewStorageClient(account, accessKey string) *StorageClient {
return &StorageClient{
Auth: &Auth{
account,
accessKey,
},
Transport: nil,
}
}
func (c *StorageClient) absUrl(format string, a ...interface{}) string {
part := fmt.Sprintf(format, a...)
return fmt.Sprintf("https://%s.blob.core.windows.net/%s", c.Auth.Account, part)
}
func copyHeadersToRequest(req *http.Request, headers map[string]string) {
for k, v := range headers {
req.Header[k] = []string{v}
}
}
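// FileUpload issues a signed Put Blob request that creates (or overwrites)
// the named block blob in the given container with the body's contents.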
func (c *StorageClient) FileUpload(container, blobName string, body io.Reader) (*http.Response, error) {
blobName = escape(blobName)
extension := strings.ToLower(path.Ext(blobName))
contentType := mime.TypeByExtension(extension)
	buf := new(bytes.Buffer)
	if _, err := buf.ReadFrom(body); err != nil {
		return nil, err
	}
req, err := http.NewRequest(
"PUT",
c.absUrl("%s/%s", container, blobName),
buf,
)
if err != nil {
return nil, err
}
copyHeadersToRequest(req, map[string]string{
"x-ms-blob-type": "BlockBlob",
"x-ms-date": time.Now().UTC().Format(ms_date_layout),
"x-ms-version": version,
"Accept-Charset": "UTF-8",
"Content-Type": contentType,
"Content-Length": strconv.Itoa(buf.Len()),
})
c.Auth.SignRequest(req)
return c.transport().RoundTrip(req)
}
func escape(content string) string {
content = url.QueryEscape(content)
	// Azure uses %20 to represent whitespace instead of + (plus)
	content = strings.Replace(content, "+", "%20", -1)
	// Azure keeps forward slashes in blob paths, so undo the %2F escaping
	content = strings.Replace(content, "%2F", "/", -1)
return content
}
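// For illustration, the expected behavior given the rules above:
//
//	escape("my image.png") // => "my%20image.png"
//	escape("a/b c.png")    // => "a/b%20c.png" (slashes kept, spaces become %20)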
type Auth struct {
Account string
Key string
}
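// SignRequest authorizes the request with Azure's Shared Key scheme: it
// builds the canonical string-to-sign from the standard headers plus the
// canonicalized x-ms-* headers and resource, then signs it with HMAC-SHA256
// using the base64-decoded account key.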
func (a *Auth) SignRequest(req *http.Request) {
strToSign := fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s",
strings.ToUpper(req.Method),
tryget(req.Header, "Content-Encoding"),
tryget(req.Header, "Content-Language"),
tryget(req.Header, "Content-Length"),
tryget(req.Header, "Content-MD5"),
tryget(req.Header, "Content-Type"),
tryget(req.Header, "Date"),
tryget(req.Header, "If-Modified-Since"),
tryget(req.Header, "If-Match"),
tryget(req.Header, "If-None-Match"),
tryget(req.Header, "If-Unmodified-Since"),
tryget(req.Header, "Range"),
a.canonicalizedHeaders(req),
a.canonicalizedResource(req),
)
	decodedKey, _ := base64.StdEncoding.DecodeString(a.Key)
	mac := hmac.New(sha256.New, decodedKey)
	mac.Write([]byte(strToSign))
	signature := base64.StdEncoding.EncodeToString(mac.Sum(nil))
copyHeadersToRequest(req, map[string]string{
"Authorization": fmt.Sprintf("SharedKey %s:%s", a.Account, signature),
})
}
func tryget(headers map[string][]string, key string) string {
	// Missing headers contribute an empty string, matching how the server
	// computes the signature.
	if len(headers[key]) > 0 {
return headers[key][0]
}
return ""
}
//
// The following is copied ~95% verbatim from:
// http://github.com/loldesign/azure/ -> core/core.go
//
/*
Based on Azure docs:
Link: http://msdn.microsoft.com/en-us/library/windowsazure/dd179428.aspx#Constructing_Element
1) Retrieve all headers for the resource that begin with x-ms-, including the x-ms-date header.
2) Convert each HTTP header name to lowercase.
3) Sort the headers lexicographically by header name, in ascending order. Note that each header may appear only once in the string.
4) Unfold the string by replacing any breaking white space with a single space.
5) Trim any white space around the colon in the header.
6) Finally, append a new line character to each canonicalized header in the resulting list. Construct the CanonicalizedHeaders string by concatenating all headers in this list into a single string.
*/
func (a *Auth) canonicalizedHeaders(req *http.Request) string {
var buffer bytes.Buffer
for key, value := range req.Header {
lowerKey := strings.ToLower(key)
if strings.HasPrefix(lowerKey, "x-ms-") {
if buffer.Len() == 0 {
buffer.WriteString(fmt.Sprintf("%s:%s", lowerKey, value[0]))
} else {
buffer.WriteString(fmt.Sprintf("\n%s:%s", lowerKey, value[0]))
}
}
}
splitted := strings.Split(buffer.String(), "\n")
sort.Strings(splitted)
return strings.Join(splitted, "\n")
}
/*
Based on Azure docs
Link: http://msdn.microsoft.com/en-us/library/windowsazure/dd179428.aspx#Constructing_Element
1) Beginning with an empty string (""), append a forward slash (/), followed by the name of the account that owns the resource being accessed.
2) Append the resource's encoded URI path, without any query parameters.
3) Retrieve all query parameters on the resource URI, including the comp parameter if it exists.
4) Convert all parameter names to lowercase.
5) Sort the query parameters lexicographically by parameter name, in ascending order.
6) URL-decode each query parameter name and value.
7) Append each query parameter name and value to the string in the following format, making sure to include the colon (:) between the name and the value:
parameter-name:parameter-value
8) If a query parameter has more than one value, sort all values lexicographically, then include them in a comma-separated list:
parameter-name:parameter-value-1,parameter-value-2,parameter-value-n
9) Append a new line character (\n) after each name-value pair.
Rules:
1) Avoid using the new line character (\n) in values for query parameters. If it must be used, ensure that it does not affect the format of the canonicalized resource string.
2) Avoid using commas in query parameter values.
*/
func (a *Auth) canonicalizedResource(req *http.Request) string {
var buffer bytes.Buffer
buffer.WriteString(fmt.Sprintf("/%s%s", a.Account, req.URL.Path))
queries := req.URL.Query()
for key, values := range queries {
sort.Strings(values)
buffer.WriteString(fmt.Sprintf("\n%s:%s", key, strings.Join(values, ",")))
}
splitted := strings.Split(buffer.String(), "\n")
sort.Strings(splitted)
return strings.Join(splitted, "\n")
}
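// Sketch of driving the embedded client directly (names are placeholders;
// the target container must already exist and allow public read access):
//
//	f, _ := os.Open("/tmp/panel.png")
//	defer f.Close()
//	sc := NewStorageClient("myaccount", "bAsE64AccountKey==")
//	resp, err := sc.FileUpload("grafana-images", "panel.png", f)
//	// on success, Azure responds with 201 Created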
package imguploader
import (
"context"
"testing"
"github.com/grafana/grafana/pkg/setting"
. "github.com/smartystreets/goconvey/convey"
)
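// TestUploadToAzureBlob is an integration test. SkipConvey keeps it skipped by
// default, since it needs real Azure credentials in the ini config under HomePath.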
func TestUploadToAzureBlob(t *testing.T) {
SkipConvey("[Integration test] for external_image_store.azure_blob", t, func() {
		err := setting.NewConfigContext(&setting.CommandLineArgs{
			HomePath: "../../../",
		})
		So(err, ShouldBeNil)
		uploader, _ := NewImageUploader()
path, err := uploader.Upload(context.Background(), "../../../public/img/logo_transparent_400x.png")
So(err, ShouldBeNil)
So(path, ShouldNotEqual, "")
})
}
@@ -76,6 +76,17 @@ func NewImageUploader() (ImageUploader, error) {
		path := gcssec.Key("path").MustString("")
		return NewGCSUploader(keyFile, bucketName, path), nil
case "azure_blob":
azureBlobSec, err := setting.Cfg.GetSection("external_image_storage.azure_blob")
if err != nil {
return nil, err
}
account_name := azureBlobSec.Key("account_name").MustString("")
account_key := azureBlobSec.Key("account_key").MustString("")
container_name := azureBlobSec.Key("container_name").MustString("")
return NewAzureBlobUploader(account_name, account_key, container_name), nil
	}
	return NopImageUploader{}, nil
...
@@ -119,5 +119,29 @@ func TestImageUploaderFactory(t *testing.T) {
			So(original.keyFile, ShouldEqual, "/etc/secrets/project-79a52befa3f6.json")
			So(original.bucket, ShouldEqual, "project-grafana-east")
		})
Convey("AzureBlobUploader config", func() {
setting.NewConfigContext(&setting.CommandLineArgs{
HomePath: "../../../",
})
setting.ImageUploadProvider = "azure_blob"
Convey("with container name", func() {
			azureBlobSec, err := setting.Cfg.GetSection("external_image_storage.azure_blob")
			So(err, ShouldBeNil)
azureBlobSec.NewKey("account_name", "account_name")
azureBlobSec.NewKey("account_key", "account_key")
azureBlobSec.NewKey("container_name", "container_name")
uploader, err := NewImageUploader()
So(err, ShouldBeNil)
original, ok := uploader.(*AzureBlobUploader)
So(ok, ShouldBeTrue)
So(original.account_name, ShouldEqual, "account_name")
So(original.account_key, ShouldEqual, "account_key")
So(original.container_name, ShouldEqual, "container_name")
})
})
	})
}