I found that I couldn't actually import data directly into Grafana because Grafana needs a data source. The data source already set up was Graphite, so all I did was send the data to Graphite and, sure enough, it showed up in Grafana. There were some kinks I had to work through, though, and I'll explain.
First of all, if your data is older than Graphite's retention period, nothing will show up in Grafana. So, shift the dates/times to something that falls within your retention period. Do keep in mind, however, that these dates/times are what Grafana will use to plot your data. Alternatively, you could just tell Graphite to extend the retention period. That leads us to the next gotcha.
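For reference, retention in Graphite is controlled by storage-schemas.conf. Here's a minimal sketch assuming the catch-all pattern; the path and the precision:retention pairs are just examples, so tune them to your data:

```ini
# /opt/graphite/conf/storage-schemas.conf (path assumed; varies by install)
[default]
pattern = .*
retentions = 1m:7d,1h:2y
```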
So, you've extended your retention period but it still doesn't work. What now? Well, it turns out that extending the retention period in your Graphite configs isn't enough. You have to update the existing whisper files as well. Read the docs here "http://graphite.readthedocs.org/en/latest/config-carbon.html" and look for whisper-resize.py for an example.
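For an existing metric, that means running whisper-resize.py against its .wsp file with the new retention spec. A sketch only; the whisper storage path below is an assumption and depends on your install:

```shell
# Resize one metric's whisper file to match the new retention (path assumed)
whisper-resize.py /opt/graphite/storage/whisper/comps/test1/cpu.wsp 1m:7d 1h:2y
```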
Cool. Now that that's taken care of, you can start sending data into Graphite (see http://graphite.readthedocs.org/en/latest/feeding-carbon.html ).
Alright, according to the above documentation, I should be able to do the following...
echo "local.random.diceroll 4 `date +%s`" | nc -q0 myserver 2003

Sure enough, typing this and pressing Enter makes some data show up in Graphite (and, ultimately, Grafana). The next thing I needed to do was send hundreds of lines with 31 fields into Graphite. There was no way I was going to type them up one by one, so I cooked up a quick and dirty script to do it for me.
I took the headers out of the CSV file and placed them into a separate file called sampleheaders.csv, then deleted the header row from the data file (fsample.csv). Below is the script I came up with.
#!/bin/bash
#Set an original starting time in EPOCH
#Then, set a starting time in EPOCH
#Calculate your own EPOCH as this date/time is for demonstration purposes only
ostarting=1450944001
starting=1450944001
#Grab all the headers from the sample headers file
headers=$(head -1 sampleheaders.csv)
#There are 31 fields so we have to loop through each one (hard coded but it's quick and dirty)
for i in {1..31}; do
    #Reset the starting point to the original starting point
    starting=$ostarting
    #Walk down the data file, only grabbing the field we're working on
    for x in $(awk -F , -v var="$i" '{print $var}' fsample.csv); do
        #Get the header that we're on
        #i.e. if we're on loop 3, then get field 3 of the headers
        newvar=$(echo "$headers" | awk -F , -v myvar="$i" '{print $myvar}' | sed 's/"//g')
        #Echo out the header field in $newvar + the value in $x + the time in $starting
        #This first echo is only for verbosity; it shows the command being run
        #It is the second echo that actually pipes to nc
        echo "echo \"comps.test1.$newvar $x $starting\" | nc -q0 localhost 2003"
        echo "comps.test1.$newvar $x $starting" | nc -q0 localhost 2003
        #Increment the starting time by 86,400 seconds (exactly 24 hours)
        #so the data is plotted one day after another
        starting=$((starting + 86400))
    done
done
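To sanity-check the loop logic without a Carbon listener, here's a minimal sketch of the same idea using a tiny made-up two-field CSV (headers.csv, data.csv, and all the values are hypothetical sample data); it prints the plaintext lines instead of piping them to nc:

```shell
# Build a two-column sample: one header file, one data file (made-up data)
printf '"cpu","mem"\n' > headers.csv
printf '10,70\n20,80\n' > data.csv

headers=$(head -1 headers.csv)
ostarting=1450944001

out=$(
for i in 1 2; do
    starting=$ostarting
    # strip the quotes from the i-th header field
    name=$(echo "$headers" | awk -F , -v f="$i" '{print $f}' | sed 's/"//g')
    # walk down the i-th column, advancing one day per row
    for x in $(awk -F , -v f="$i" '{print $f}' data.csv); do
        echo "comps.test1.$name $x $starting"
        starting=$((starting + 86400))
    done
done
)
echo "$out"
```

Each line comes out in Graphite's "metric.path value timestamp" plaintext form, e.g. "comps.test1.cpu 10 1450944001", with the second row of each column stamped one day later.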