
wip to git

master
Sebastian Denz 4 years ago
parent
commit
7f57611065
24 changed files:
  .gitignore (2 lines changed)
  Dockerfile.pskreporter_exporter (8 lines changed)
  Dockerfile.wsjtx_exporter (8 lines changed)
  Readme.md (22 lines changed)
  cmd/alltxt2csv/main.go (90 lines changed)
  cmd/pskreporter-exporter/main.go (2 lines changed)
  cmd/pskreporter-exporter/mysql.go (0 lines changed)
  cmd/pskreporter-exporter/prometheus.go (0 lines changed)
  cmd/pskreporter-exporter/pskreporter.go (0 lines changed)
  cmd/pskreporter-exporter/pskreporter_exporter.exe (binary)
  cmd/wsjtx-exporter/main.go (4 lines changed)
  cmd/wsjtx-exporter/mysql.go (10 lines changed)
  cmd/wsjtx-exporter/prometheus.go (56 lines changed)
  cmd/wsjtx-exporter/wsjtx_exporter.exe (binary)
  cmd/wsjtx_exporter/prometheus.go (56 lines changed)
  doc/alltxt2csv.md (27 lines changed)
  doc/pskreporter-exporter.md (22 lines changed)
  doc/wsjtx-exporter.md (56 lines changed)
  go.sum (1 line changed)
  misc/import_csv.sql (2 lines changed)
  misc/pskreporter_stats.sql (14 lines changed)
  misc/wsjtx_all_txt.sql (16 lines changed)
  shared/wsjtx/wsjtx.go (2 lines changed)
  todo.md (44 lines changed)

2
.gitignore

@@ -0,0 +1,2 @@
import/
.swp

8
Dockerfile.pskreporter_exporter

@@ -1,8 +1,8 @@
 from golang:1.15.0
-RUN mkdir /pskreporter_exporter
+RUN mkdir /pskreporter-exporter
-ADD . /pskreporter_exporter
+ADD . /pskreporter-exporter
-WORKDIR /pskreporter_exporter/cmd/pskreporter_exporter
+WORKDIR /pskreporter-exporter/cmd/pskreporter-exporter
 RUN go build
-CMD ["/pskreporter_exporter/cmd/pskreporter_exporter/pskreporter_exporter"]
+CMD ["/pskreporter-exporter/cmd/pskreporter-exporter/pskreporter-exporter"]

8
Dockerfile.wsjtx_exporter

@@ -1,8 +1,8 @@
 from golang:1.15.0
-RUN mkdir /wsjtx_exporter
+RUN mkdir /wsjtx-exporter
-ADD . /wsjtx_exporter
+ADD . /wsjtx-exporter
-WORKDIR /wsjtx_exporter/cmd/wsjtx_exporter
+WORKDIR /wsjtx-exporter/cmd/wsjtx-exporter
 RUN go build
-CMD ["/wsjtx_exporter/cmd/wsjtx_exporter/wsjtx_exporter"]
+CMD ["/wsjtx-exporter/cmd/wsjtx-exporter/wsjtx-exporter"]

22
Readme.md

@@ -1,6 +1,6 @@
 # what is it?
-FIXME SCREENSHOT!!!!!!!!!!!!!!!!!!!!!!!!!
+![alt text](screenshot.png "Logo Title Text 1")
 * a set of tools to export your personal WSJT-X
 * live reception data into prometheus or mysql
@@ -34,10 +34,10 @@ have fun!
 ## tooling overview
-* **pskreporter_exporter**
+* **pskreporter-exporter**
 * polls pskreporter.info for your callsign
 * supports prometheus and mysql
-* **wsjtx_exporter**
+* **wsjtx-exporter**
 * follows live traffice in ALL.txt
 * supports prometheus and mysql
 * **alltxt2csv**
@@ -72,7 +72,7 @@ show pro/con overview:
 both allow distributed setups with multiple wsjtx instances submitting their data to a central prometheus or mysql service.
 you can as well run both in parallel and use prometheus for a live overview and mysql for historical evaluations.
-### pskreporter_exporter vs other access/polling of pskreporter like GridTracker
+### pskreporter-exporter vs other access/polling of pskreporter like GridTracker
 ### can it read my whole ALL.txt since from the beginning?
@@ -109,11 +109,11 @@ choose a bigger interval
 ### how long does it take to import my data into mysql?
-* my ALL.TXT (new format start july 2019) contains ~ 13.7 mio lines and has ~ 850M.
+* my ALL.TXT (new format start july 2019) contains ~ 13.7 mio lines and has ~ 850M
-* converting to csv takes ~ 40min on i7-4750HQ (2015) and the result has ~ 1.2G.
+* converting to csv takes ~ 14min on i7-4750HQ (2015) and the result has ~ 1.2G
-* currently this uses only one core, so there is a lot of room for optimization.
+* currently this is done using another module which uses regular expressions which is not optimial for this use case
-* importing the csv to mysql takes ~ 3.5min.
+* importing the csv to mysql takes ~ 3.5min
-* querying the whole time (~ 1.5 years) in grafana takes some seconds.
+* querying the whole time (~ 1.5 years) in grafana takes some seconds
 ### does this need a lot of ressource on my machine?
@@ -133,8 +133,8 @@ go get github.com/denzs/wsjtx_dashboards
 build docker containers:
 ```
-docker build Dockerfile.wsjtx_exporter .
+docker build Dockerfile.wsjtx-exporter .
-docker build Dockerfile.pskreporter_exporter .
+docker build Dockerfile.pskreporter-exporter .
 ```
 ### to be done...
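
The FAQ hunk above attributes the ~ 14 min CSV conversion time to a regexp-based parser in the shared wsjtx module. As a rough illustration of the whitespace-splitting fast path that could replace a regexp for well-formed lines, here is a hypothetical sketch; the ALL.TXT column layout, the FastScan name and the reduced Record type are assumptions for the example, not code from this commit:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// Record is a reduced, illustrative result type (the real code uses wsjtx.Result).
type Record struct {
	Timestamp time.Time
	Mode      string
	Signal    int
	Message   string
}

// FastScan splits a new-format ALL.TXT line on whitespace instead of running a
// regular expression. Assumed layout (an assumption, not taken from the repo):
// "201231_235900  14.074 Rx FT8 -12  0.3 1750 CQ DL3SD JO62"
func FastScan(line string) (Record, bool) {
	f := strings.Fields(line)
	if len(f) < 8 || f[2] != "Rx" {
		return Record{}, false // malformed or Tx line: skip instead of failing
	}
	ts, err := time.Parse("060102_150405", f[0])
	if err != nil {
		return Record{}, false
	}
	snr, err := strconv.Atoi(f[4])
	if err != nil {
		return Record{}, false
	}
	return Record{Timestamp: ts, Mode: f[3], Signal: snr, Message: strings.Join(f[7:], " ")}, true
}

func main() {
	r, ok := FastScan("201231_235900  14.074 Rx FT8 -12  0.3 1750 CQ DL3SD JO62")
	fmt.Println(ok, r.Mode, r.Signal, r.Message)
}
```

strings.Fields plus strconv avoids running a pattern over every line, which is usually the cheaper path for fixed-column logs.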

90
cmd/alltxt2csv/main.go

@@ -1,17 +1,18 @@
 package main
 import (
 "fmt"
 "github.com/jnovack/flag"
 log "github.com/sirupsen/logrus"
 // "strings"
 // "strconv"
 // "time"
 "os"
 "bufio"
+"runtime"
 // "github.com/mmcloughlin/geohash"
 // "github.com/tzneal/ham-go/dxcc"
 "github.com/denzs/wsjtx_dashboards/shared/wsjtx"
 )
 var station string
@@ -49,6 +50,48 @@ func init() {
 }
 }
+func eatline(lines chan string, results chan wsjtx.Result) {
+for {
+select {
+case line := <- lines :
+result, parsed := wsjtx.ScanLine(line)
+if parsed {
+results <- result
+}
+}
+}
+return
+}
+func eatfile(results chan wsjtx.Result) {
+log.Info("starting eating file, please wait..")
+filein, err := os.Open(pathin)
+if err != nil {
+log.Fatal(err)
+}
+scanner := bufio.NewScanner(filein)
+lines := make(chan string,runtime.NumCPU())
+for w := 0; w <= runtime.NumCPU(); w++ {
+go eatline(lines, results)
+}
+i := 0
+for scanner.Scan() {
+i++
+if i % 1000000 == 0 {
+log.Infof("%d lines parsed..", i)
+}
+lines <- scanner.Text()
+}
+filein.Close()
+log.Info("done.. eatfile")
+return
+}
@@ -59,33 +102,24 @@ func main(){
 if err != nil {
 log.Fatal(err)
 }
-writer := bufio.NewWriter(fileout)
-filein, err := os.Open(pathin)
-if err != nil {
-log.Fatal(err)
-}
-lines := bufio.NewScanner(filein)
-counter := 0
-for lines.Scan() {
-result, parsed := wsjtx.ScanLine(lines.Text())
-if !parsed {
-continue
-}
-counter++
-if counter % 1000000 == 0 {
-log.Infof("%d lines parsed..", counter)
-}
-_, err := writer.WriteString(fmt.Sprintf("\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%d\",\"%d\",\n", result.Timestamp.Format("2006-01-02 15:04:05"), station, result.Call, result.Band, result.Ent.Continent, result.Mode, result.Ent.Entity, result.GeoHash, result.Signal, result.Rx))
-if err != nil {
-log.Warn(err)
-}
-}
+writer := bufio.NewWriter(fileout)
+results := make(chan wsjtx.Result,runtime.NumCPU())
+go eatfile(results)
+for {
+select {
+case result := <- results :
+_, err := writer.WriteString(fmt.Sprintf("\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%d\",\"%d\",\n", result.Timestamp.Format("2006-01-02 15:04:05"), station, result.Call, result.Band, result.Ent.Continent, result.Mode, result.Ent.Entity, result.GeoHash, result.Signal, result.Rx))
+if err != nil {
+log.Warn(err)
+}
+}
+}
 writer.Flush()
 fileout.Close()
-filein.Close()
-log.Info("done..")
+log.Info("done.. main")
 }
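
The new main.go hands lines from ALL.TXT to runtime.NumCPU() eatline() goroutines over a channel and collects wsjtx.Result values on a second channel that a single writer drains. Below is a compact, self-contained sketch of the same fan-out/fan-in pattern with an explicit sync.WaitGroup and channel close, so the drain loop terminates at EOF and the final Flush is reached; the names are illustrative and it reads stdin instead of ALL.TXT:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"strings"
	"sync"
)

// parse is a stand-in for wsjtx.ScanLine: it "parses" a line and reports success.
func parse(line string) (string, bool) {
	if strings.TrimSpace(line) == "" {
		return "", false
	}
	return strings.ToUpper(line), true
}

func main() {
	lines := make(chan string, runtime.NumCPU())
	results := make(chan string, runtime.NumCPU())

	// Fan out: one parsing worker per CPU, all reading from the lines channel.
	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for line := range lines { // exits once lines is closed
				if r, ok := parse(line); ok {
					results <- r
				}
			}
		}()
	}

	// Close results once every worker has finished, so the writer loop below ends.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Feed the workers from stdin (the real tool reads ALL.TXT instead).
	go func() {
		scanner := bufio.NewScanner(os.Stdin)
		for scanner.Scan() {
			lines <- scanner.Text()
		}
		close(lines)
	}()

	// Fan in: a single writer drains results and flushes when the channel closes.
	out := bufio.NewWriter(os.Stdout)
	defer out.Flush()
	for r := range results {
		fmt.Fprintln(out, r)
	}
}
```

Closing lines when the scanner is done, and closing results once all workers return, is what turns an endless for/select drain into a bounded range loop that can reach the flush.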

2
cmd/pskreporter_exporter/main.go → cmd/pskreporter-exporter/main.go

@@ -39,7 +39,7 @@ func init() {
 flag.StringVar(&mysql_pass, "dbpass", "secret", "mysql password")
 flag.StringVar(&mysql_table, "dbtable", "pskreporter_stats", "mysql table name")
 flag.StringVar(&metricpath, "metricpath", "/metrics", "path for prometheus metric endpoint")
-flag.IntVar(&port, "port", 2112, "port for prometheus metric endpoint")
+flag.IntVar(&port, "port", 2113, "port for prometheus metric endpoint")
 flag.BoolVar(&useProm, "prometheus", false, "activate prometheus exporter")
 flag.BoolVar(&useMysql, "mysql", false, "activate mysql exporter")
 // flag.BoolVar(&promcalls, "promcalls", false, "activate prometheus callsign metrics")

0
cmd/pskreporter_exporter/mysql.go → cmd/pskreporter-exporter/mysql.go

0
cmd/pskreporter_exporter/prometheus.go → cmd/pskreporter-exporter/prometheus.go

0
cmd/pskreporter_exporter/pskreporter.go → cmd/pskreporter-exporter/pskreporter.go

BIN
cmd/pskreporter-exporter/pskreporter_exporter.exe

Binary file not shown.

4
cmd/wsjtx_exporter/main.go → cmd/wsjtx-exporter/main.go

@@ -21,7 +21,7 @@ var mysql_user string
 var mysql_pass string
 var mysql_table string
 var port int
-var promcalls bool
+//var promcalls bool
 var trace bool
 var useProm bool
 var useMysql bool
@@ -43,7 +43,7 @@ func init() {
 flag.IntVar(&port, "port", 2112, "port for prometheus metric endpoint")
 flag.BoolVar(&useProm, "prometheus", false, "activate prometheus exporter")
 flag.BoolVar(&useMysql, "mysql", false, "activate mysql exporter")
-flag.BoolVar(&promcalls, "promcalls", false, "activate prometheus callsign metrics")
+// flag.BoolVar(&promcalls, "promcalls", false, "activate prometheus callsign metrics")
 flag.BoolVar(&trace, "trace", false, "log almost everything")
 flag.Parse()

10
cmd/wsjtx_exporter/mysql.go → cmd/wsjtx-exporter/mysql.go

@@ -55,8 +55,8 @@ func init_db() {
 "cqzone INT NOT NULL," +
 "ituzone INT NOT NULL," +
 "rx TINYINT NOT NULL," +
-"PRIMARY KEY UC_" + mysql_table + "(ts, station, callsign))," +
+"PRIMARY KEY UC_" + mysql_table + "(ts, station, callsign)," +
-"INDEX idx_dxcc (dxcc);"
+"INDEX idx_dxcc (dxcc));"
 log.WithFields(log.Fields{"query":qry}).Debug("creating database..")
 _, err := db.Exec(qry)
 if err != nil {
@@ -64,7 +64,7 @@ func init_db() {
 panic(err)
 }
 } else {
-log.Info("found existing table")
+log.Info("found existing table..")
 }
 }
@@ -72,13 +72,13 @@ func init_db() {
 func dbConn() (db *sql.DB, err bool){
 db, er := sql.Open("mysql", mysql_user+":"+mysql_pass+"@tcp("+mysql_host+")/"+mysql_db)
 if er != nil {
-log.Debugf("db not reachable: %s",err)
+log.Error("db not reachable: %s",err)
 return nil, true
 }
 pingerr := db.Ping()
 if pingerr != nil {
-log.Debug("db not pingable..")
+log.Error("db not pingable..")
 return nil, true
 }

56
cmd/wsjtx-exporter/prometheus.go

@@ -0,0 +1,56 @@
package main
import (
"fmt"
"github.com/denzs/wsjtx_dashboards/shared/wsjtx"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
)
var wsjtx_received_total *prometheus.CounterVec
//var wsjtx_received_call_total *prometheus.CounterVec
func handlePrometheus(result wsjtx.Result) {
incr_wsjtx_received_total(result)
// if promcalls {
// incr_wsjtx_received_callsigns_total(result)
// }
}
func incr_wsjtx_received_total(result wsjtx.Result) {
if(wsjtx_received_total == nil) {
log.Printf("init prometheus metric wsjtx_received_total..")
wsjtx_received_total = promauto.NewCounterVec(prometheus.CounterOpts{ Name: "wsjtx_received_total", Help: "DXCCs ordery by labels",
}, []string{"num","signal","name","continent","cqzone","ituzone","band","mode","geohash","station"},)
}
wsjtx_received_total.With(prometheus.Labels{"num":fmt.Sprintf("%d",result.Ent.DXCC),"signal":fmt.Sprintf("%d",result.Signal),
"band":result.Band,
"name":result.Ent.Entity,
"continent":result.Ent.Continent,
"cqzone":fmt.Sprintf("%d",result.Ent.CQZone),
"mode":result.Mode,
"geohash":result.GeoHash,
"station": station,
"ituzone":fmt.Sprintf("%d",result.Ent.ITUZone)}).Inc()
}
//func incr_wsjtx_received_callsigns_total(result wsjtx.Result) {
// if(wsjtx_received_call_total == nil) {
// log.Printf("inicreating wsjtx_received_call_total..")
// wsjtx_received_call_total = promauto.NewCounterVec(prometheus.CounterOpts{ Name: "wsjtx_received_call_total", Help: "DXCCs ordery by labels",
// }, []string{"num","signal","name","continent","cqzone","ituzone","band","call","mode","geohash","station"},)
// }
// wsjtx_received_call_total.With(prometheus.Labels{"num":fmt.Sprintf("%d",result.Ent.DXCC),"signal":fmt.Sprintf("%d",result.Signal),
// "band":result.Band,
// "name":result.Ent.Entity,
// "continent":result.Ent.Continent,
// "cqzone":fmt.Sprintf("%d",result.Ent.CQZone),
// "mode":result.Mode,
// "call":result.Call,
// "geohash":result.GeoHash,
// "station": station,
// "ituzone":fmt.Sprintf("%d",result.Ent.ITUZone)}).Inc()
//}
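
prometheus.go only registers and increments the wsjtx_received_total CounterVec via promauto; serving it over HTTP is left to main.go, which is not part of this diff. A minimal sketch of how such an endpoint is typically wired up with promhttp, reusing the -metricpath and -port flag values; this is an assumption about the wiring, not the repository's actual main.go:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	log "github.com/sirupsen/logrus"
)

// servePrometheus exposes every metric registered through promauto
// (including wsjtx_received_total) on the given path and port.
// Hypothetical helper for illustration only.
func servePrometheus(metricpath string, port int) {
	http.Handle(metricpath, promhttp.Handler())
	addr := fmt.Sprintf(":%d", port)
	log.Infof("serving prometheus metrics on %s%s", addr, metricpath)
	log.Fatal(http.ListenAndServe(addr, nil))
}

func main() {
	servePrometheus("/metrics", 2112)
}
```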

BIN
cmd/wsjtx-exporter/wsjtx_exporter.exe

Binary file not shown.

56
cmd/wsjtx_exporter/prometheus.go

@@ -1,56 +0,0 @@
package main
import (
"fmt"
"github.com/denzs/wsjtx_dashboards/shared/wsjtx"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
log "github.com/sirupsen/logrus"
)
var wsjtx_received_total *prometheus.CounterVec
var wsjtx_received_call_total *prometheus.CounterVec
func handlePrometheus(result wsjtx.Result) {
incr_wsjtx_received_total(result)
if promcalls {
incr_wsjtx_received_callsigns_total(result)
}
}
func incr_wsjtx_received_total(result wsjtx.Result) {
if(wsjtx_received_total == nil) {
log.Printf("creating wsjtx_received_total...")
wsjtx_received_total = promauto.NewCounterVec(prometheus.CounterOpts{ Name: "wsjtx_received_total", Help: "DXCCs ordery by labels",
}, []string{"num","signal","name","continent","cqzone","ituzone","band","mode","geohash","station"},)
}
wsjtx_received_total.With(prometheus.Labels{"num":fmt.Sprintf("%d",result.Ent.DXCC),"signal":fmt.Sprintf("%d",result.Signal),
"band":result.Band,
"name":result.Ent.Entity,
"continent":result.Ent.Continent,
"cqzone":fmt.Sprintf("%d",result.Ent.CQZone),
"mode":result.Mode,
"geohash":result.GeoHash,
"station": station,
"ituzone":fmt.Sprintf("%d",result.Ent.ITUZone)}).Inc()
}
func incr_wsjtx_received_callsigns_total(result wsjtx.Result) {
if(wsjtx_received_call_total == nil) {
log.Printf("creating wsjtx_received_call_total...")
wsjtx_received_call_total = promauto.NewCounterVec(prometheus.CounterOpts{ Name: "wsjtx_received_call_total", Help: "DXCCs ordery by labels",
}, []string{"num","signal","name","continent","cqzone","ituzone","band","call","mode","geohash","station"},)
}
wsjtx_received_call_total.With(prometheus.Labels{"num":fmt.Sprintf("%d",result.Ent.DXCC),"signal":fmt.Sprintf("%d",result.Signal),
"band":result.Band,
"name":result.Ent.Entity,
"continent":result.Ent.Continent,
"cqzone":fmt.Sprintf("%d",result.Ent.CQZone),
"mode":result.Mode,
"call":result.Call,
"geohash":result.GeoHash,
"station": station,
"ituzone":fmt.Sprintf("%d",result.Ent.ITUZone)}).Inc()
}

27
doc/alltxt2csv.md

@@ -0,0 +1,27 @@
# alltxt2csv
converts ALL.TXT to csv file which can then be nicely imported into mysql.
parameters:
```
Usage of ./alltxt2csv:
-in string
path to wsjt-x ALL.txt
-out string
path to csv outfile
-station string
your callsign or wsjtx instance identifier (default "localstation")
-trace
log every line... yes really ;)
```
## converting ALL.TXT to csv
## import of csv
```
alltxt2csv -in ~/.local/share/WSJT-X/ALL.TXT -out ~/dev/wsjtx_dashboards/import/DL3SD.csv -station DL3SD
```
* prepare IMPORT.SQL
docker exec -ti db /usr/bin/mysql --local-infile=1 -pverysecret digimode_stats -e "SET GLOBAL local_infile=1;"
docker exec -ti db /usr/bin/mysql --local-infile=1 -pverysecret digimode_stats -e "\. /wsjtx/import/DL3SD.SQL"

22
doc/pskreporter-exporter.md

@@ -0,0 +1,22 @@
# pskreporter-exporter
poll pskreporter.info every 5 minutes to stores the results into mysql and/or export for prometheus.
parameter:
```
Usage of go/bin/pskreporter_exporter:
-db="digimode_stats": db name
-dbhost="db": name/ip of mysql host
-dbpass="secret": mysql password
-dbtable="pskreporter_stats": mysql table name
-dbuser="wsjtx": mysql username
-debug=false: enable debug logging
-metricpath="/metrics": path for prometheus metric endpoint
-mysql=false: activate mysql exporter
-port=2113: port for prometheus metric endpoint
-prometheus=false: activate prometheus exporter
-station="": callsign to monitor on pskreporter
-trace=false: log almost everything
```
unsure about using in combination with gridtracker...

56
doc/wsjtx-exporter.md

@@ -0,0 +1,56 @@
# wsjtx-exporter
follows WSJTX-X ALL.TXT file to store entries in mysql and export metrics for prometheus.
parameters:
```
Usage of go/bin/wsjtx-exporter:
-db string
db name (default "digimode_stats")
-dbhost string
name/ip of mysql host (default "db")
-dbpass string
mysql password (default "secret")
-dbtable string
mysql table name (default "wsjtx_all_txt")
-dbuser string
mysql username (default "wsjtx")
-metricpath string
path for prometheus metric endpoint (default "/metrics")
-mysql
activate mysql exporter
-pathin string
path to WSJT-X ALL.TXT (default "/wsjtx/ALL.TXT")
-port int
port for prometheus metric endpoint (default 2112)
-prometheus
activate prometheus exporter
-station string
your callsign or wsjtx instance identifier (default "localstation")
-trace
log almost everything
```
## systemd user unit for linux
create ~/.config/systemd/user/wsjtx-exporter.service and adapt parameters to your needs!
~/.config/systemd/user/wsjtx-exporter.service:
```
[Unit]
Description=WSJT-X 'ALL.TXT' prometheues exporter
[Service]
Restart=always
ExecStart=%h/go/bin/wsjtx-exporter -mysql -prometheus -dbhost 10.0.73.1 -dbuser dl3sd -dbpass tester -station DL3SD -pathin %h/.local/share/WSJT-X/ALL.TXT -trace
[Install]
WantedBy=default.target
```
activate:
```
systemctl --user daemon-reload
systemctl --user enable wsjtx-exporter.service
systemctl --user start wsjtx-exporter.service
```

1
go.sum

@@ -411,6 +411,7 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8T
 google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
 google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
 google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
 google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
 google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
 google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=

2
misc/IMPORT.SQL → misc/import_csv.sql

@@ -1,4 +1,4 @@
-LOAD DATA LOCAL INFILE '/wsjtx/SEBO.CSV'
+LOAD DATA LOCAL INFILE '/wsjtx/DL3SD.CSV'
 INTO TABLE wsjtx_all_txt
 FIELDS TERMINATED BY ','
 ENCLOSED BY '"'

14
misc/pskreporter_stats.sql

@@ -0,0 +1,14 @@
CREATE TABLE IF NOT EXISTS pskreporter_stats (
ts timestamp NOT NULL,
station VARCHAR(16) NOT NULL,
callsign VARCHAR(16) NOT NULL,
band VARCHAR(10) NOT NULL,
continent VARCHAR(32) NOT NULL,
mode VARCHAR(16) NOT NULL,
dxcc VARCHAR(128) NOT NULL,
geohash VARCHAR(16) NOT NULL,
report TINYINT NOT NULL,
cqzone INT NOT NULL,
ituzone INT NOT NULL,
UNIQUE KEY UC_pskreporter_stats (ts, station, callsign)
);

16
misc/wsjtx_all_txt.sql

@@ -0,0 +1,16 @@
CREATE TABLE IF NOT EXISTS wsjtx_all_txt (
ts timestamp NOT NULL,
station VARCHAR(16) NOT NULL,
callsign VARCHAR(16) NOT NULL,
band VARCHAR(10) NOT NULL,
continent VARCHAR(32) NOT NULL,
mode VARCHAR(16) NOT NULL,
dxcc VARCHAR(128) NOT NULL,
geohash VARCHAR(16) NOT NULL,
report TINYINT NOT NULL,
cqzone INT NOT NULL,
ituzone INT NOT NULL,
rx TINYINT NOT NULL,
PRIMARY KEY PK_wsjtx_all_txt (ts, station, callsign),
INDEX idx_dxcc (dxcc)
);
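
Both tables key on (ts, station, callsign), so a writer has to cope with duplicate spots decoded in the same second. A hedged sketch of an insert matching the wsjtx_all_txt schema using database/sql with the go-sql-driver/mysql driver; INSERT IGNORE, the helper name and the sample values are illustrative assumptions, not the repository's mysql.go:

```go
package main

import (
	"database/sql"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

// insertSpot writes one decoded reception report into wsjtx_all_txt.
// INSERT IGNORE silently skips rows that collide with the (ts, station, callsign)
// primary key. Hypothetical helper, not code from this commit.
func insertSpot(db *sql.DB, ts time.Time, station, callsign, band, continent,
	mode, dxcc, geohash string, report, cqzone, ituzone, rx int) error {
	_, err := db.Exec(
		`INSERT IGNORE INTO wsjtx_all_txt
		 (ts, station, callsign, band, continent, mode, dxcc, geohash, report, cqzone, ituzone, rx)
		 VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
		ts, station, callsign, band, continent, mode, dxcc, geohash, report, cqzone, ituzone, rx)
	return err
}

func main() {
	// DSN mirrors the exporter's flag defaults (dbuser/dbpass/dbhost/db).
	db, err := sql.Open("mysql", "wsjtx:secret@tcp(db:3306)/digimode_stats?parseTime=true")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if err := insertSpot(db, time.Now().UTC(), "DL3SD", "K1ABC", "20m", "NA",
		"FT8", "United States", "dr4e", -12, 5, 8, 1); err != nil {
		panic(err)
	}
}
```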

2
shared/wsjtx/wsjtx.go

@@ -153,7 +153,7 @@ func ScanLine(line string) (Result, bool) {
 "dxcc":result.Ent.DXCC,
 "continent":result.Ent.Continent,
 "band":result.Band,
-"time":string(result.Timestamp.String()),
+"time":result.Timestamp.String(),
 "mode":result.Mode,
 "geohash":result.GeoHash,
 "rx":result.Rx,

44
todo.md

@@ -0,0 +1,44 @@
* wsjtx-exporter
* cqzone and ituzone are not in the db
* implement the -back parameter
* remove calls metric?
* https://prometheus.io/docs/practices/naming/#labels
* "cant reach database" happened on my machine
* address it ;)
* systemd user unit
* windows equivalent??
* trace: sucessfully parsed.. logs: fields.time
* on a broken line:
Nov 26 20:10:47 sebo-OptiPlex-980 wsjtx_exporter[869]: goroutine 12 [running]:
Nov 26 20:10:47 sebo-OptiPlex-980 wsjtx_exporter[869]: github.com/denzs/wsjtx_dashboards/shared/wsjtx.ScanLine(0xc0000a4004, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
* alltxt2csv
* bring back the direct import feature..?
* database
* probably more indices
* take the CREATE TABLE out of the binaries?
* control should stay with the db admin
* doc
** german docs..
** server and/or script/readme
* fix dashboards
* how to handle refreshing the variables??
* provide dashboards to grafana
* prometheus metric + value + TIMESTAMP!!!!!!
* vendoring
* add howto for ubuntu/win10
* push images to dockerhub
* Mail to PSKReporter
* Query to bundle multiple callsigns?
* How are the queries counted? rate per src ip or per query?