Compare commits

..

15 Commits

Author SHA1 Message Date
Jan-Philipp Litza cd1329963a Alfred input: Pass -z switch to alfred-json 2014-09-23 22:31:51 +02:00
Jan-Philipp Litza 322860be7e Add MACs from mesh_interfaces as alises 2014-09-20 12:42:53 +02:00
Jan-Philipp Litza 66112061d6 Fix adding of nodes with multiple matching alises 2014-09-20 12:42:40 +02:00
Jan-Philipp Litza f08aaaff4e Fix fuzzy MAC matching 2014-09-07 12:17:36 +02:00
Jan-Philipp Litza 6d452fc149 d3json: obscure client MACs 2014-09-07 12:17:22 +02:00
Jan-Philipp Litza 5fba69de7a d3json: make output more similar to pre-rewrite version 2014-07-31 16:31:14 +02:00
Jan-Philipp Litza 446bc98403 input alfred: unify nodeinfo and stats datatypes 2014-07-31 11:41:26 +02:00
Jan-Philipp Litza e54e7467fc Create package ffmap, add wiki input, remove old code 2014-07-08 14:20:47 +02:00
Jan-Philipp Litza f5e3705eec Began rewrite with more modular design 2014-07-07 23:27:21 +02:00
Jan-Philipp Litza 54402ce089 mkmap.sh: Add locking around script call 2014-07-06 20:09:42 +02:00
Jan-Philipp Litza ee51547664 mkmap.sh: Remove Lübeck-specific stuff 2014-07-06 20:04:24 +02:00
Jan-Philipp Litza 89e4c63700 alfred.py: Make geo attribute setting more robust 2014-07-06 20:03:10 +02:00
Jan-Philipp Litza 7075d8481c NodeRRD: add many more DS, rrd.py: generate neighbor counts 2014-07-06 20:03:10 +02:00
Jan-Philipp Litza 43e70191f1 RRD: Fix updating of DS 2014-07-06 20:03:10 +02:00
Jan-Philipp Litza 6fc1423124 Make handling of node attributes more flexible.
This commit makes Nodes special dicts that return None-like objects for
inexistent keys, making it a dynamic attribute store.

Also, it removes the D3MapBuilder and moves its logic to the Node and
Link classes' newly introduced export() method. Only they need to be
changed to populate the final nodes.json with more attributes.
2014-07-06 20:03:10 +02:00
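The "None-like objects for inexistent keys" idea from this commit message can be sketched as follows. `Missing`, `MISSING` and this minimal `Node` are hypothetical names for illustration, not necessarily the classes the commit introduces:

```python
class Missing:
    """Falsy placeholder returned for keys that are not set."""
    def __bool__(self):
        return False

    def __getitem__(self, key):
        # Allow chained lookups like node["a"]["b"] without KeyError
        return self

MISSING = Missing()

class Node(dict):
    """Dict that yields a None-like object instead of raising KeyError."""
    def __getitem__(self, key):
        return super().get(key, MISSING)

node = Node(hostname="krtek")
print(node["hostname"])                    # krtek
print(bool(node["location"]))              # False, no KeyError raised
print(bool(node["location"]["latitude"]))  # chained lookup is still safe
```

This makes the node a dynamic attribute store: exporters can probe arbitrary attributes without guarding every access with `try`/`except KeyError`.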
33 changed files with 789 additions and 1622 deletions

10
.gitignore vendored

@@ -1,11 +1 @@
-# backups
-*~
-
-# script-generated
-aliases*.json
-nodedb/
-
-# python bytecode / cache
 *.pyc
-pycache/
-__pycache__/

.travis.yml

@@ -1,6 +0,0 @@
-sudo: false
-language: python
-python:
-  - "3.4"
-install: "pip install pep8"
-script: "pep8 --ignore=E501 *.py lib/*.py"

148
README.md

@@ -1,117 +1,53 @@
 # Data for Freifunk Map, Graph and Node List
-
-[![Build Status](https://travis-ci.org/ffnord/ffmap-backend.svg?branch=master)](https://travis-ci.org/ffnord/ffmap-backend)
-
-ffmap-backend gathers information on the batman network by invoking:
-
-* batctl (might require root),
-* alfred-json and
-* batadv-vis
-
-The output will be written to a directory (`-d output`).
-
-Run `backend.py --help` for a quick overview of all available options.
-
-For the script's regular execution add the following to the crontab:
-
-<pre>
-* * * * * backend.py -d /path/to/output -a /path/to/aliases.json --vpn ae:7f:58:7d:6c:2a d2:d0:93:63:f7:da
-</pre>
-
-# Dependencies
-
-- Python 3
-- Python 3 Package [Networkx](https://networkx.github.io/)
-- [alfred-json](https://github.com/tcatm/alfred-json)
-- rrdtool (if run with `--with-rrd`)
-
-# Running as unprivileged user
-
-Some information collected by ffmap-backend requires access to specific system resources.
-
-Make sure the user you are running this under is part of the group that owns the alfred socket, so
-alfred-json can access the alfred daemon.
-
-    # ls -al /var/run/alfred.sock
-    srw-rw---- 1 root alfred 0 Mar 19 22:00 /var/run/alfred.sock=
-
-    # adduser map alfred
-    Adding user `map' to group `alfred' ...
-    Adding user map to group alfred
-    Done.
-
-    $ groups
-    map alfred
-
-Running batctl requires passwordless sudo access, because it needs to access the debugfs to retrieve
-the gateway list.
-
-    # echo 'map ALL = NOPASSWD: /usr/sbin/batctl' | tee /etc/sudoers.d/map
-    map ALL = NOPASSWD: /usr/sbin/batctl
-    # chmod 0440 /etc/sudoers.d/map
-
-That should be everything. The script automatically detects if it is run in unprivileged mode and
-will prefix `sudo` where necessary.
-
-# Data format
-
-## nodes.json
-
-    { 'nodes': {
-        node_id: { 'flags': { flags },
-                   'firstseen': isoformat,
-                   'lastseen': isoformat,
-                   'nodeinfo': {...},           # copied from alfred type 158
-                   'statistics': {
-                       'uptime': double,        # seconds
-                       'memory_usage': double,  # 0..1
-                       'clients': double,
-                       'rootfs_usage': double,  # 0..1
-                       'loadavg': double,
-                       'gateway': mac
-                   }
-                 },
-        ...
-      }
-      'timestamp': isoformat
-    }
-
-### flags (bool)
-
-- online
-- gateway
-
-## Old data format
-
-If you want to still use the old [ffmap-d3](https://github.com/ffnord/ffmap-d3)
-front end, you can use the file `ffmap-d3.jq` to convert the new output to the
-old one:
-
-```
-jq -n -f ffmap-d3.jq \
-  --argfile nodes nodedb/nodes.json \
-  --argfile graph nodedb/graph.json \
-  > nodedb/ffmap-d3.json
-```
-
-Then point your ffmap-d3 instance to the `ffmap-d3.json` file.
-
-# Removing owner information
-
-If you'd like to redact information about the node owner from `nodes.json`,
-you may use a filter like [jq]. In this case, specify an output directory
-different from your webserver directory, e.g.:
-
-    ./backend.py -d /ffmap-data
-
-Don't write to files generated in there. ffmap-backend uses them as its
-database.
-
-After running ffmap-backend, copy `graph.json` to your webserver. Then,
-filter `nodes.json` using `jq` like this:
-
-    jq '.nodes = (.nodes | with_entries(del(.value.nodeinfo.owner)))' \
-      < /ffmap-data/nodes.json > /var/www/data/nodes.json
-
-This will remove owner information from nodes.json before copying the data
-to your webserver.
-
-[jq]: https://stedolan.github.io/jq/
+
+ffmap-backend gathers information on the batman network by invoking
+   batctl
+and
+   batadv-vis
+as root (via sudo) and has this information placed into a target directory
+as the file "nodes.json" and also updates the directory "nodes" with graphical
+representations of uptimes and the number of clients connecting.
+
+The target directory is suggested to host all information for interpreting those
+node descriptions, e.g. as provided by https://github.com/ffnord/ffmap-d3.git .
+
+When executed without root privileges, we suggest granting sudo permissions
+within wrappers of those binaries, so no further changes are required in other
+scripts:
+
+<pre>
+$ cat <<EOCAT > $HOME/batctl
+#!/bin/sh
+exec sudo /usr/sbin/batctl $*
+EOCAT
+</pre>
+
+and analogously for batadv-vis. The entry for /etc/sudoers could be
+
+    whateveruser ALL=(ALL:ALL) NOPASSWD: /usr/sbin/batctl,/usr/sbin/batadv-vis,/usr/sbin/alfred-json
+
+The destination directory can be made directly available through apache:
+
+<pre>
+$ cat /etc/apache2/site-enabled/000-default
+...
+<Directory /home/whateverusername/www/>
+    Options Indexes FollowSymLinks MultiViews
+    AllowOverride None
+    Order allow,deny
+    allow from all
+</Directory>
+...
+$ cat /etc/apache2/conf.d/freifunk
+Alias /map /home/ffmap/www/
+Alias /firmware /home/freifunk/autoupdates/
+</pre>
+
+To execute, run
+
+    python3 -mffmap.run --input-alfred --input-batadv --output-d3json ../www/nodes.json
+
+The script expects the above described sudo wrappers in the $HOME directory of the
+user executing the script. If those are not available, an error will occur unless
+executed as root. Also, the tool realpath optionally allows executing the script
+from anywhere in the directory tree.
+
+For the script's regular execution add the following to the crontab:
+
+<pre>
+*/5 * * * * python3 -mffmap.run --input-alfred --input-batadv --output-d3json /home/ffmap/www/nodes.json
+</pre>


@@ -1,36 +0,0 @@
[
  {
    "node_id": "krtek",
    "hostname": "krtek",
    "location": {
      "longitude": 10.74,
      "latitude": 53.86
    },
    "network": {
      "mesh": {
        "bat0": {
          "interfaces": {
            "tunnel": [
              "00:25:86:e6:f1:bf"
            ]
          }
        }
      }
    }
  },
  {
    "node_id": "gw1",
    "hostname": "burgtor",
    "network": {
      "mesh": {
        "bat0": {
          "interfaces": {
            "tunnel": [
              "52:54:00:f3:62:d9"
            ]
          }
        }
      }
    }
  }
]

backend.py

@@ -1,269 +0,0 @@
#!/usr/bin/env python3
"""
backend.py - ffmap-backend runner
https://github.com/ffnord/ffmap-backend

Extended version from Freifunk Pinneberg
- generate graphs from RRD data only on request
- directory for the RRD node db as a command-line parameter
- statistics generation fixed: initialization and filling
  at the appropriate points in time
"""
import argparse
import configparser
import json
import os
import sys
import logging, logging.handlers
from datetime import datetime

import networkx as nx
from networkx.readwrite import json_graph

from lib import graph, nodes
from lib.alfred import Alfred
from lib.batman import Batman
from lib.rrddb import RRD
from lib.nodelist import export_nodelist
from lib.validate import validate_nodeinfos

NODES_VERSION = 1
GRAPH_VERSION = 1

cfg = {
    'cfgfile': '/etc/ffmap/ffmap-test.cfg',
    'logfile': '/var/log/ffmap.log',
    'loglevel': 5,
    'dest_dir': '/var/lib/ffmap/mapdata',
    'aliases': [],
    'prune': 0,
    'nodedb': '/var/lib/ffmap/nodedb',
    'rrd_data': False,
    'rrd_graphs': False,
    'redis': False
}


def main(params):
    os.makedirs(params['dest_dir'], exist_ok=True)
    nodes_fn = os.path.join(params['dest_dir'], 'nodes.json')
    graph_fn = os.path.join(params['dest_dir'], 'graph.json')
    nodelist_fn = os.path.join(params['dest_dir'], 'nodelist.json')

    now = datetime.utcnow().replace(microsecond=0)

    # parse mesh param and instantiate Alfred/Batman instances
    alfred_instances = []
    batman_instances = []
    for value in params['mesh']:
        # (1) only batman-adv if, no alfred sock
        if ':' not in value:
            if len(params['mesh']) > 1:
                raise ValueError(
                    'Multiple mesh interfaces require the use of '
                    'alfred socket paths.')
            alfred_instances.append(Alfred(unix_sockpath=None))
            batman_instances.append(Batman(mesh_interface=value))
        else:
            # (2) batman-adv if + alfred socket
            try:
                batif, alfredsock = value.split(':')
                alfred_instances.append(Alfred(unix_sockpath=alfredsock))
                batman_instances.append(Batman(mesh_interface=batif,
                                               alfred_sockpath=alfredsock))
            except ValueError:
                raise ValueError(
                    'Unparseable value "{0}" in --mesh parameter.'.
                    format(value))

    # read nodedb state from nodes.json
    try:
        with open(nodes_fn, 'r') as nodedb_handle:
            nodedb = json.load(nodedb_handle)
    except (IOError, ValueError):
        nodedb = {'nodes': dict()}

    # flush nodedb if it uses the old format
    if 'links' in nodedb:
        nodedb = {'nodes': dict()}

    # set version we're going to output
    nodedb['version'] = NODES_VERSION

    # update timestamp and assume all nodes are offline
    nodedb['timestamp'] = now.isoformat()
    for node_id, node in nodedb['nodes'].items():
        node['flags']['online'] = False

    # integrate alfred nodeinfo
    for alfred in alfred_instances:
        nodeinfo = validate_nodeinfos(alfred.nodeinfo())
        nodes.import_nodeinfo(nodedb['nodes'], nodeinfo,
                              now, assume_online=True)

    # integrate static aliases data
    for aliases in params['aliases']:
        with open(aliases, 'r') as f:
            nodeinfo = validate_nodeinfos(json.load(f))
        nodes.import_nodeinfo(nodedb['nodes'], nodeinfo,
                              now, assume_online=False)

    # prepare statistics collection
    nodes.reset_statistics(nodedb['nodes'])

    # acquire gwl and visdata for each batman instance
    mesh_info = []
    for batman in batman_instances:
        vd = batman.vis_data()
        gwl = batman.gateway_list()
        mesh_info.append((vd, gwl))

    # update nodedb from batman-adv data
    for vd, gwl in mesh_info:
        nodes.import_mesh_ifs_vis_data(nodedb['nodes'], vd)
        nodes.import_vis_clientcount(nodedb['nodes'], vd)
        nodes.mark_vis_data_online(nodedb['nodes'], vd, now)
        nodes.mark_gateways(nodedb['nodes'], gwl)

    # get alfred statistics
    for alfred in alfred_instances:
        nodes.import_statistics(nodedb['nodes'], alfred.statistics())

    # clear the nodedb from nodes that have not been online in $prune days
    if params['prune']:
        nodes.prune_nodes(nodedb['nodes'], now, params['prune'])

    # build networkx graph from nodedb and visdata
    batadv_graph = nx.DiGraph()
    for vd, gwl in mesh_info:
        graph.import_vis_data(batadv_graph, nodedb['nodes'], vd)

    # force mac addresses to be vpn-link only (like gateways for example)
    if params['vpn']:
        graph.mark_vpn(batadv_graph, frozenset(params['vpn']))

    def extract_tunnel(nodes):
        macs = set()
        for id, node in nodes.items():
            try:
                for mac in node["nodeinfo"]["network"]["mesh"]["bat0"]["interfaces"]["tunnel"]:
                    macs.add(mac)
            except KeyError:
                pass
        return macs

    graph.mark_vpn(batadv_graph, extract_tunnel(nodedb['nodes']))

    batadv_graph = graph.merge_nodes(batadv_graph)
    batadv_graph = graph.to_undirected(batadv_graph)

    # write processed data to dest dir
    with open(nodes_fn, 'w') as f:
        json.dump(nodedb, f)

    graph_out = {'batadv': json_graph.node_link_data(batadv_graph),
                 'version': GRAPH_VERSION}

    with open(graph_fn, 'w') as f:
        json.dump(graph_out, f)

    with open(nodelist_fn, 'w') as f:
        json.dump(export_nodelist(now, nodedb), f)

    # optional rrd graphs (trigger with --with-rrd)
    if params['rrd']:
        if params['nodedb']:
            rrd = RRD(params['nodedb'],
                      os.path.join(params['dest_dir'], 'nodes'))
        else:
            script_directory = os.path.dirname(os.path.realpath(__file__))
            rrd = RRD(os.path.join(script_directory, 'nodedb'),
                      os.path.join(params['dest_dir'], 'nodes'))
        rrd.update_database(nodedb['nodes'])
        if params['img']:
            rrd.update_images()


def set_loglevel(nr):
    """
    Map the number to a value suitable for "logging".
    The number can be between 0 (no logging) and 5 (debug).
    """
    level = (None, logging.CRITICAL, logging.ERROR, logging.WARNING,
             logging.INFO, logging.DEBUG)
    if nr > 5:
        nr = 5
    elif nr < 0:
        nr = 0
    return level[nr]


if __name__ == '__main__':
    # get options from command line
    parser = argparse.ArgumentParser(
        description="Collect data for ffmap: creates json files and "
                    "optional rrd data and graphs")
    parser.add_argument('-a', '--aliases',
                        help='Read aliases from FILE',
                        nargs='+', default=[], metavar='FILE')
    parser.add_argument('-m', '--mesh',
                        default=['bat0'], nargs='+',
                        help='Use given batman-adv mesh interface(s) (defaults '
                             'to bat0); specify alfred unix socket like '
                             'bat0:/run/alfred0.sock.')
    parser.add_argument('-d', '--dest-dir', action='store',
                        help='Write output to destination directory',
                        required=False)
    parser.add_argument('-c', '--config', action='store', metavar='FILE',
                        help='read configuration from FILE')
    parser.add_argument('-V', '--vpn', nargs='+', metavar='MAC',
                        help='Assume MAC addresses are part of vpn')
    parser.add_argument('-p', '--prune', metavar='DAYS', type=int,
                        help='Forget nodes offline for at least DAYS')
    parser.add_argument('-r', '--with-rrd', dest='rrd', action='store_true',
                        default=False,
                        help='Enable the collection of RRD data')
    parser.add_argument('-n', '--nodedb', metavar='RRD_DIR', action='store',
                        help='Directory for node RRD data files')
    parser.add_argument('-i', '--with-img', dest='img', action='store_true',
                        default=False,
                        help='Enable the rendering of RRD graphs (cpu '
                             'intensive)')

    options = vars(parser.parse_args())

    if options['config']:
        cfg['cfgfile'] = options['config']

    config = configparser.ConfigParser(cfg)
    if config.read(cfg['cfgfile']):
        if not options['nodedb']:
            options['nodedb'] = config.get('rrd', 'nodedb')
        if not options['dest_dir']:
            options['dest_dir'] = config.get('global', 'dest_dir')
        if not options['rrd']:
            options['rrd'] = config.getboolean('rrd', 'enabled')
        if not options['img']:
            options['img'] = config.getboolean('rrd', 'graphs')
        cfg['logfile'] = config.get('global', 'logfile')
        cfg['loglevel'] = config.getint('global', 'loglevel')

    # At this point global configuration is available. Time to enable logging.
    # Log rotation is handled by the operating system, so use WatchedFileHandler
    handler = logging.handlers.WatchedFileHandler(cfg['logfile'])
    handler.setFormatter(logging.Formatter(
        fmt='%(asctime)s %(levelname)s %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S'))
    log = logging.getLogger()
    log.addHandler(handler)
    loglevel = set_loglevel(cfg['loglevel'])
    if loglevel:
        log.setLevel(loglevel)
    else:
        log.disabled = True

    log.info("%s started" % sys.argv[0])
    if os.path.isfile(cfg['cfgfile']):
        log.info("using configuration from '%s'" % cfg['cfgfile'])

    main(options)
    log.info("%s finished" % sys.argv[0])
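The `--mesh` handling in `main()` accepts either a bare interface name or an `interface:socket` pair, and insists on explicit socket paths once more than one interface is given. That rule can be isolated into a small helper; `parse_mesh_param` is a hypothetical name mirroring the loop above, not part of backend.py:

```python
def parse_mesh_param(values):
    """Split each --mesh value into an (interface, alfred socket) pair.

    A bare interface name means "default alfred socket"; with more than
    one interface, explicit socket paths are required.
    """
    pairs = []
    for value in values:
        if ':' not in value:
            if len(values) > 1:
                raise ValueError('Multiple mesh interfaces require the use '
                                 'of alfred socket paths.')
            pairs.append((value, None))
        else:
            try:
                batif, alfredsock = value.split(':')
            except ValueError:
                # more than one ':' in the value
                raise ValueError('Unparseable value "{0}" in --mesh '
                                 'parameter.'.format(value))
            pairs.append((batif, alfredsock))
    return pairs

print(parse_mesh_param(['bat0']))                    # [('bat0', None)]
print(parse_mesh_param(['bat0:/run/alfred0.sock']))  # [('bat0', '/run/alfred0.sock')]
```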

ffmap-d3.jq

@@ -1,52 +0,0 @@
{
  "meta": {
    "timestamp": $nodes.timestamp
  },
  "nodes": (
    $graph.batadv.nodes
    | map(
      if has("node_id") and .node_id
      then (
        $nodes.nodes[.node_id] as $node
        | {
          "id": .id,
          "uptime": $node.statistics.uptime,
          "flags": ($node.flags + {"client": false}),
          "name": $node.nodeinfo.hostname,
          "clientcount": (if $node.statistics.clients >= 0 then $node.statistics.clients else 0 end),
          "hardware": $node.nodeinfo.hardware.model,
          "firmware": $node.nodeinfo.software.firmware.release,
          "geo": (if $node.nodeinfo.location then [$node.nodeinfo.location.latitude, $node.nodeinfo.location.longitude] else null end),
          #"lastseen": $node.lastseen,
          "network": $node.nodeinfo.network
        }
      )
      else
        {
          "flags": {},
          "id": .id,
          "geo": null,
          "clientcount": 0
        }
      end
    )
  ),
  "links": (
    $graph.batadv.links
    | map(
      $graph.batadv.nodes[.source].node_id as $source_id
      | $graph.batadv.nodes[.target].node_id as $target_id
      | select(
        $source_id and $target_id and
        ($nodes.nodes | (has($source_id) and has($target_id)))
      )
      | {
        "target": .target,
        "source": .source,
        "quality": "\(.tq), \(.tq)",
        "id": ($source_id + "-" + $target_id),
        "type": (if .vpn then "vpn" else null end)
      }
    )
  )
}


@@ -1,185 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Modify node data manually

This is useful e.g. for switched-off nodes that are only temporarily
unavailable; they can be hidden. This is better than deleting them,
because their statistics do not disappear.

Change log
==========
Version  Date        Change(s)                                          By
-------- ----------- -------------------------------------------------- ----
1.0      2017-08-03  Integrated the program into the ffmap-backend      tho
                     project
"""
import argparse
import configparser
import json
import os
import sys
import glob

# Settings are processed in the following order;
# values set later override earlier ones:
# 1. hard-coded in the program
# 2. read from the central configuration file
# 3. given as command-line options
cfg = {
    'cfgfile': '/etc/ffmap/ffmap.cfg',
    'logfile': '/var/log/ffmap.log',
    'loglevel': 2,
    'dest_dir': '/var/lib/ffmap/mapdata',
    'nodedb': '/var/lib/ffmap/nodedb',
    'imgpath': '/var/www/meshviewer/stats/img'
}

roles_defined = ('node', 'temp', 'mobile', 'offloader', 'service', 'test',
                 'gate', 'plan', 'hidden')


def main(cfg):
    # paths of the files involved
    nodes_fn = os.path.join(cfg['dest_dir'], 'nodes.json')
    nodelist_fn = os.path.join(cfg['dest_dir'], 'nodelist.json')

    # 1. node data (NodeDB)
    # 1.1 load data
    try:
        with open(nodes_fn, 'r') as nodedb_handle:
            nodedb = json.load(nodedb_handle)
    except IOError:
        print("Error reading nodedb file %s" % nodes_fn)
        nodedb = {'nodes': dict()}

    # 1.2 modify nodes
    changed = False
    for n in cfg['nodeid']:
        if n in nodedb['nodes']:
            print("Modify %s in nodedb" % n)
            if 'role' in cfg and cfg['role'] in roles_defined:
                try:
                    oldrole = nodedb['nodes'][n]['nodeinfo']['system']['role']
                except KeyError:
                    oldrole = '<unset>'
                print(" - change role from '%s' to '%s'" % (oldrole, cfg['role']))
                nodedb['nodes'][n]['nodeinfo']['system']['role'] = cfg['role']
                changed = True
            if 'location' in cfg:
                print(" - remove location")
                # del nodedb['nodes'][n]['nodeinfo']['location']
                changed = True
        else:
            print("Node %s not found in nodedb" % n)

    # 1.3 write back the changed data
    if changed:
        try:
            with open(nodes_fn, 'w') as nodedb_handle:
                json.dump(nodedb, nodedb_handle)
        except IOError:
            print("Error writing nodedb file %s" % nodes_fn)

    # 2. node list (NodeList)
    try:
        with open(nodelist_fn, 'r') as nodelist_handle:
            nodelist = json.load(nodelist_handle)
    except IOError:
        print("Error reading nodelist file %s" % nodelist_fn)
        nodelist = {'nodelist': dict()}

    # 2.1 modify nodes
    changed = False
    ixlist = []
    for nodeid in cfg['nodeid']:
        found = False
        for ix, node in enumerate(nodelist['nodes']):
            if node['id'] == nodeid:
                found = True
                break
        if found:
            print("Modify %s in nodelist" % nodeid)
            if 'role' in cfg and cfg['role'] in roles_defined:
                try:
                    oldrole = nodelist['nodes'][ix]['role']
                except KeyError:
                    oldrole = '<unset>'
                print(" - change role from '%s' to '%s'" % (oldrole, cfg['role']))
                nodelist['nodes'][ix]['role'] = cfg['role']
            if 'location' in cfg:
                print(" - remove location")
                try:
                    # del nodelist['nodes'][ix]['position']
                    pass
                except KeyError:
                    pass
            changed = True
        else:
            print("Node %s not found in nodelist" % nodeid)

    # 2.3 write back the changed data
    if changed:
        try:
            with open(nodelist_fn, 'w') as nodelist_handle:
                json.dump(nodelist, nodelist_handle)
        except IOError:
            print("Error writing nodelist file %s" % nodelist_fn)


if __name__ == "__main__":
    # read options from the command line
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config', action='store',
                        help='Configuration file')
    parser.add_argument('-d', '--dest-dir', action='store',
                        help='Directory with JSON data files',
                        required=False)
    parser.add_argument('-i', '--nodeid', metavar='ID', action='store',
                        nargs='+', required=True,
                        help='Node id to modify')
    parser.add_argument('-l', '--location', action='store_true',
                        help='Clear location information (hides node)',
                        required=False)
    parser.add_argument('-r', '--role', action='store',
                        help='Set new role',
                        required=False)
    # TODO
    # options for what exactly should be done
    # -p remove position, node is no longer displayed
    # -r <role> set role

    options = vars(parser.parse_args())

    # read the configuration file
    if options['config']:
        cfg['cfgfile'] = options['config']

    config = configparser.ConfigParser(cfg)
    # config.read returns a list of the files parsed;
    # if it is empty, the file was e.g. not present
    if config.read(cfg['cfgfile']):
        if 'global' in config:
            cfg['logfile'] = config['global']['logfile']
            cfg['loglevel'] = config['global']['loglevel']
            cfg['dest_dir'] = config['global']['dest_dir']
    else:
        print('Config file %s not parsed' % cfg['cfgfile'])

    # command-line options take the highest priority
    cfg['nodeid'] = options['nodeid']
    if options['dest_dir']:
        cfg['dest_dir'] = options['dest_dir']
    if options['location']:
        cfg['location'] = True
    if options['role']:
        cfg['role'] = options['role']

    # everything initialized, here we go
    main(cfg)


@@ -1,225 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Remove a node manually from the backend:
- JSON
  - NodeDB
  - NodeList
  - Graph
- RRD files
- images from the web server

Change log
==========
Version  Date        Change(s)                                          By
-------- ----------- -------------------------------------------------- ----
1.0      2017-01-06  Integrated the program into the ffmap-backend      tho
                     project
"""
import argparse
import configparser
import json
import os
import sys
import glob

# Settings are processed in the following order;
# values set later override earlier ones:
# 1. hard-coded in the program
# 2. read from the central configuration file
# 3. given as command-line options
cfg = {
    'cfgfile': '/etc/ffmap/ffmap.cfg',
    'logfile': '/var/log/ffmap.log',
    'loglevel': 2,
    'dest_dir': '/var/lib/ffmap/mapdata',
    'nodedb': '/var/lib/ffmap/nodedb',
    'imgpath': '/var/www/meshviewer/stats/img'
}


def main(cfg):
    # paths of the files involved
    nodes_fn = os.path.join(cfg['dest_dir'], 'nodes.json')
    graph_fn = os.path.join(cfg['dest_dir'], 'graph.json')
    nodelist_fn = os.path.join(cfg['dest_dir'], 'nodelist.json')

    # 1. clean up node data (NodeDB)
    # 1.1 load data
    try:
        with open(nodes_fn, 'r') as nodedb_handle:
            nodedb = json.load(nodedb_handle)
    except IOError:
        print("Error reading nodedb file %s" % nodes_fn)
        nodedb = {'nodes': dict()}

    # 1.2 remove nodes
    changed = False
    for n in cfg['nodeid']:
        if n in nodedb['nodes']:
            print("Remove %s from nodedb" % n)
            del nodedb['nodes'][n]
            changed = True
        else:
            print("Node %s not found in nodedb" % n)

    # 1.3 write back the changed data
    if changed:
        try:
            with open(nodes_fn, 'w') as nodedb_handle:
                json.dump(nodedb, nodedb_handle)
        except IOError:
            print("Error writing nodedb file %s" % nodes_fn)

    # 2. clean up the node list (NodeList)
    try:
        with open(nodelist_fn, 'r') as nodelist_handle:
            nodelist = json.load(nodelist_handle)
    except IOError:
        print("Error reading nodelist file %s" % nodelist_fn)
        nodelist = {'nodelist': dict()}

    # 2.1 remove nodes
    changed = False
    ixlist = []
    for nodeid in cfg['nodeid']:
        found = False
        for ix, node in enumerate(nodelist['nodes']):
            if node['id'] == nodeid:
                found = True
                break
        if found:
            print("Remove %s from nodelist" % nodeid)
            del nodelist['nodes'][ix]
            changed = True
        else:
            print("Node %s not found in nodelist" % nodeid)

    # 2.3 write back the changed data
    if changed:
        try:
            with open(nodelist_fn, 'w') as nodelist_handle:
                json.dump(nodelist, nodelist_handle)
        except IOError:
            print("Error writing nodelist file %s" % nodelist_fn)

    # 3. clean up the graph (NodeGraph)
    # 3.1 load the graph
    try:
        with open(graph_fn, 'r') as graph_handle:
            graph = json.load(graph_handle)
    except IOError:
        print("Error reading graph file %s" % graph_fn)
        graph = {'graph': dict()}

    # 3.2 find nodes and links
    # nodes and links belong together
    changed = False
    for nodeid in cfg['nodeid']:
        found = False
        for ixn, node in enumerate(graph["batadv"]["nodes"]):
            # there may be nodes without a "node_id"
            try:
                if node["node_id"] == nodeid:
                    found = True
                    break
            except KeyError:
                pass
        if found:
            print("Found %s in graph nodes at index %d" % (nodeid, ixn))
            del graph["batadv"]["nodes"][ixn]
            # find links whose source or target matches the found index
            ixlist = []
            for ixg, link in enumerate(graph["batadv"]["links"]):
                if link["source"] == ixn:
                    print("Found source link at index %d" % ixg)
                    print(" -> %s" % graph["batadv"]["nodes"][link["target"]])
                    ixlist.append(ixg)
                if link["target"] == ixn:
                    print("Found target link at index %d" % ixg)
                    print(" -> %s" % graph["batadv"]["nodes"][link["source"]])
                    ixlist.append(ixg)
            for ix in ixlist:
                del graph["batadv"]["nodes"][ix]
            changed = True
        else:
            print("Node %s not found in graph nodes" % nodeid)

    # 3.3 write back the changed data
    if changed:
        try:
            with open(graph_fn, 'w') as graph_handle:
                json.dump(graph, graph_handle)
        except IOError:
            print("Error writing graph file %s" % graph_fn)

    # 4. remove RRD files
    for nodeid in cfg['nodeid']:
        rrdfile = os.path.join(cfg['nodedb'], nodeid + '.rrd')
        if os.path.isfile(rrdfile):
            print("Removing RRD database file %s" % os.path.basename(rrdfile))
        else:
            print("RRD database file %s not found" % os.path.basename(rrdfile))
        try:
            os.remove(rrdfile)
        except OSError:
            pass

    # 5. remove images from the web server
    count_deleted = 0
    for nodeid in cfg['nodeid']:
        for imagefile in glob.glob(os.path.join(cfg['imgpath'],
                                                nodeid + '_*.png')):
            print("Removing stats image %s" % os.path.basename(imagefile))
            try:
                os.remove(imagefile)
                count_deleted += 1
            except OSError:
                pass
    if count_deleted == 0:
        print("No stats images found in %s" % cfg['imgpath'])


if __name__ == "__main__":
    # read options from the command line
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config', action='store',
                        help='Configuration file')
    parser.add_argument('-d', '--dest-dir', action='store',
                        help='Directory with JSON data files',
                        required=False)
    parser.add_argument('-i', '--nodeid', metavar='ID', action='store',
                        nargs='+', required=True,
                        help='Node id to remove')
    parser.add_argument('-n', '--nodedb', metavar='RRD_DIR', action='store',
                        help='Directory for node RRD data files')

    options = vars(parser.parse_args())

    # read the configuration file
    if options['config']:
        cfg['cfgfile'] = options['config']

    config = configparser.ConfigParser(cfg)
    # config.read returns a list of the files parsed;
    # if it is empty, the file was e.g. not present
    if config.read(cfg['cfgfile']):
        if 'global' in config:
            cfg['logfile'] = config['global']['logfile']
            cfg['loglevel'] = config['global']['loglevel']
            cfg['dest_dir'] = config['global']['dest_dir']
        if 'rrd' in config:
            cfg['nodedb'] = config['rrd']['nodedb']
    else:
        print('Config file %s not parsed' % cfg['cfgfile'])

    # command-line options take the highest priority
    cfg['nodeid'] = options['nodeid']
    if options['dest_dir']:
        cfg['dest_dir'] = options['dest_dir']
    if options['nodedb']:
        cfg['nodedb'] = options['nodedb']

    # everything initialized, here we go
    main(cfg)

42
ffmap/__init__.py Normal file

@@ -0,0 +1,42 @@
import importlib

from ffmap.nodedb import NodeDB

def run(inputs, outputs):
    """Fill the database with given inputs and give it to given outputs.

    Arguments:
    inputs -- list of Input instances (with a compatible get_data(nodedb) method)
    outputs -- list of Output instances (with a compatible output(nodedb) method)
    """
    db = NodeDB()
    for input_ in inputs:
        input_.get_data(db)
    for output in outputs:
        output.output(db)

def run_names(inputs, outputs):
    """Fill the database with inputs and give it to outputs, each given
    by names.

    In contrast to run(inputs, outputs), this method expects only the
    names of the modules to use, not instances thereof.

    Arguments:
    inputs -- list of dicts, each dict having the key "name" with the
              name of the input to use (directory name in inputs/), and
              the key "options" with a dict of input-dependent options.
    outputs -- list of dicts, see inputs.
    """
    input_instances = []
    output_instances = []
    for input_ in inputs:
        module = importlib.import_module(".inputs." + input_["name"], "ffmap")
        input_instances.append(module.Input(**input_["options"]))
    for output in outputs:
        module = importlib.import_module(".outputs." + output["name"], "ffmap")
        output_instances.append(module.Output(**output["options"]))
    run(input_instances, output_instances)
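`run_names()` is essentially dynamic-import dispatch via `importlib`. A self-contained sketch of the same pattern, using a stdlib package in place of `ffmap.inputs`/`ffmap.outputs` so it runs anywhere (`load_plugins` is a hypothetical name, not part of the package):

```python
import importlib

def load_plugins(specs, package, class_name):
    """Instantiate one class per spec dict, mirroring run_names()."""
    instances = []
    for spec in specs:
        # "name" selects the submodule, "options" the constructor kwargs
        module = importlib.import_module(package + "." + spec["name"])
        cls = getattr(module, class_name)
        instances.append(cls(**spec.get("options", {})))
    return instances

# Demonstration against a stdlib package instead of ffmap.inputs:
decoders = load_plugins([{"name": "decoder", "options": {}}], "json", "JSONDecoder")
print(type(decoders[0]).__name__)  # JSONDecoder
```

With `ffmap` installed, the equivalent call would be `run_names(inputs=[{"name": "alfred", "options": {}}], outputs=[...])`, where each name is a directory under `inputs/` or `outputs/`.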

29
ffmap/inputs/alfred.py Normal file

@@ -0,0 +1,29 @@
import subprocess
import json

class Input:
    def __init__(self, request_data_type=158):
        self.request_data_type = request_data_type

    @staticmethod
    def _call_alfred(request_data_type):
        return json.loads(subprocess.check_output([
            "alfred-json",
            "-z",
            "-r", str(request_data_type),
            "-f", "json",
        ]).decode("utf-8"))

    def get_data(self, nodedb):
        """Add data from alfred to the supplied nodedb"""
        nodeinfo = self._call_alfred(self.request_data_type)
        statistics = self._call_alfred(self.request_data_type + 1)

        # merge statistics into nodeinfo to be compatible with earlier versions
        for mac, node in statistics.items():
            if mac in nodeinfo:
                nodeinfo[mac]['statistics'] = statistics[mac]

        for mac, node in nodeinfo.items():
            aliases = [mac] + node.get('network', {}).get('mesh_interfaces', [])
            nodedb.add_or_update(aliases, node)
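The statistics merge in `get_data()` can be exercised on plain dicts; `merge_statistics` is a hypothetical standalone version of that loop, not part of the module:

```python
def merge_statistics(nodeinfo, statistics):
    """Attach each statistics record to the nodeinfo record with the
    same MAC key, as get_data() does above."""
    for mac in statistics:
        if mac in nodeinfo:
            nodeinfo[mac]['statistics'] = statistics[mac]
    return nodeinfo

nodeinfo = {"aa:bb:cc:dd:ee:ff": {"hostname": "krtek"}}
statistics = {"aa:bb:cc:dd:ee:ff": {"uptime": 123.0},
              "11:22:33:44:55:66": {"uptime": 5.0}}  # no matching nodeinfo, ignored
merged = merge_statistics(nodeinfo, statistics)
print(merged["aa:bb:cc:dd:ee:ff"]["statistics"]["uptime"])  # 123.0
```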

100
ffmap/inputs/batadv.py Normal file

@ -0,0 +1,100 @@
import subprocess
import json
class Input:
"""Fill the NodeDB with links from batadv-vis.
The links are added as lists containing the neighboring nodes, not
only their identifiers! Mind this when exporting the database, as
it probably leads to recursion.
"""
def __init__(self, mesh_interface="bat0"):
self.mesh_interface = mesh_interface
@staticmethod
def _is_similar_mac(a, b):
"""Determine if two MAC addresses are similar."""
if a == b:
return True
# Split the address into bytes
try:
mac_a = list(int(i, 16) for i in a.split(":"))
mac_b = list(int(i, 16) for i in b.split(":"))
except ValueError:
return False
# Second and third byte musn't differ
if mac_a[1] != mac_b[1] or mac_a[2] != mac_b[2]:
return False
# First byte must only differ in bits 2 and 3
if mac_a[0] | 6 != mac_b[0] | 6:
return False
# Count differing bytes after the third
c = [x for x in zip(mac_a[3:], mac_b[3:]) if x[0] != x[1]]
# No more than two additional bytes must differ
if len(c) > 2:
return False
# If no more bytes differ, they are very similar
if len(c) == 0:
return True
# If the sum of absolute differences isn't greater than 2, they
# are pretty similar
delta = sum(abs(i[0] - i[1]) for i in c)
return delta < 2
def get_data(self, nodedb):
"""Add data from batadv-vis to the supplied nodedb"""
output = subprocess.check_output([
"batadv-vis",
"-i", str(self.mesh_interface),
"-f", "jsondoc",
])
data = json.loads(output.decode("utf-8"))
# First pass
for node in data["vis"]:
# Determine possible other MAC addresses of this node by
# comparing all its client's MAC addresses to its primary
# MAC address. If they are similar, it probably is another
# address of the node itself! If it isn't, it is a real
# client.
node['aliases'] = [node["primary"]]
if 'secondary' in node:
node['aliases'].extend(node['secondary'])
real_clients = []
for mac in node["clients"]:
if self._is_similar_mac(mac, node["primary"]):
node['aliases'].append(mac)
else:
real_clients.append(mac)
node['clients'] = real_clients
# Add nodes and aliases without any information at first.
# This way, we can later link the objects themselves.
nodedb.add_or_update(node['aliases'])
# Second pass
for node in data["vis"]:
# We only need the primary address now, all aliases are
# already present in the database. Furthermore, we can be
# sure that all neighbors are in the database as well. If
# a neighbor isn't added already, we simply ignore it.
nodedb.add_or_update(
[node["primary"]],
{
"clients": node["clients"],
"neighbors": [
{
"metric": neighbor['metric'],
"neighbor": nodedb[neighbor['neighbor']],
} for neighbor in node["neighbors"]
if neighbor['neighbor'] in nodedb
]
}
)
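The fuzzy matching used in the first pass can be tried in isolation. The following standalone sketch mirrors `_is_similar_mac` (same byte-wise logic, lifted to a module-level function purely for illustration):

```python
def is_similar_mac(a, b):
    """Standalone mirror of Input._is_similar_mac, for illustration."""
    if a == b:
        return True
    # Split the addresses into bytes
    try:
        mac_a = [int(i, 16) for i in a.split(":")]
        mac_b = [int(i, 16) for i in b.split(":")]
    except ValueError:
        return False
    # The second and third bytes must match exactly
    if mac_a[1] != mac_b[1] or mac_a[2] != mac_b[2]:
        return False
    # The first byte may only differ in the bits covered by mask 0x06
    if mac_a[0] | 6 != mac_b[0] | 6:
        return False
    # Collect differing byte pairs after the third byte
    diffs = [x for x in zip(mac_a[3:], mac_b[3:]) if x[0] != x[1]]
    if len(diffs) > 2:
        return False
    return sum(abs(i[0] - i[1]) for i in diffs) < 2

# A secondary interface typically differs only in the first byte's
# locally administered bits, so it is classified as an alias:
print(is_similar_mac("02:aa:bb:cc:dd:ee", "06:aa:bb:cc:dd:ee"))  # True
# An unrelated address is classified as a real client:
print(is_similar_mac("02:aa:bb:cc:dd:ee", "9c:aa:bb:11:22:33"))  # False
```

The example MAC addresses are made up; the heuristic itself is exactly the one in the `Input` class above.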

ffmap/inputs/wiki.py Executable file

@ -0,0 +1,71 @@
import json
import argparse
from itertools import zip_longest
from urllib.request import urlopen
from bs4 import BeautifulSoup
class Input:
def __init__(self, url="http://luebeck.freifunk.net/wiki/Knoten"):
self.url = url
def fetch_wikitable(self):
f = urlopen(self.url)
soup = BeautifulSoup(f)
table = soup.find("table")
rows = table.find_all("tr")
headers = []
data = []
def maybe_strip(x):
if isinstance(x.string, str):
return x.string.strip()
else:
return ""
for row in rows:
tds = [maybe_strip(x) for x in row.find_all("td")]
ths = [maybe_strip(x) for x in row.find_all("th")]
if any(tds):
data.append(tds)
if any(ths):
headers = ths
return [dict(zip(headers, d)) for d in data]
def get_data(self, nodedb):
nodes = self.fetch_wikitable()
for node in nodes:
if "MAC" not in node or not node["MAC"]:
# without MAC, we cannot merge this data with others, so
# we might as well ignore it
continue
newnode = {
"network": {
"mac": node.get("MAC").lower(),
},
"location": {
"latitude": float(node.get("GPS", " ").split(" ")[0]),
"longitude": float(node.get("GPS", " ").split(" ")[1]),
"description": node.get("Ort"),
} if " " in node.get("GPS", "") else None,
"hostname": node.get("Knotenname"),
"hardware": {
"model": node["Router"],
} if node.get("Router") else None,
"software": {
"firmware": {
"base": "LFF",
"release": node.get("LFF Version"),
},
},
"owner": {
"contact": node["Betreiber"],
} if node.get("Betreiber") else None,
}
# remove keys with None as value
newnode = {k: v for k,v in newnode.items() if v is not None}
nodedb.add_or_update([newnode["network"]["mac"]], newnode)
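The GPS handling above splits the wiki cell on the first space into latitude and longitude. A minimal sketch of that expression, using a hypothetical table row as `fetch_wikitable()` would return it:

```python
# Hypothetical wiki row as returned by fetch_wikitable()
node = {"MAC": "AA:BB:CC:DD:EE:FF", "GPS": "53.86 10.68", "Ort": "Beispielort"}

# Same conditional-dict pattern as in get_data() above
location = {
    "latitude": float(node.get("GPS", " ").split(" ")[0]),
    "longitude": float(node.get("GPS", " ").split(" ")[1]),
    "description": node.get("Ort"),
} if " " in node.get("GPS", "") else None

print(location["latitude"], location["longitude"])  # 53.86 10.68
```

If the "GPS" cell is missing or contains no space, `location` stays `None` and is later dropped by the None-filtering dict comprehension.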

ffmap/node.py Normal file

@ -0,0 +1,91 @@
from collections import defaultdict
class NoneDict:
"""Act like None but return a NoneDict for every item request.
This is similar to the behaviour of collections.defaultdict in that
even previously nonexistent keys can be accessed, but nothing is
stored permanently in this class.
"""
def __repr__(self):
return 'NoneDict()'
def __bool__(self):
return False
def __getitem__(self, k):
return NoneDict()
def __json__(self):
return None
def __float__(self):
return float('NaN')
def __iter__(self):
# empty generator
return
yield
def __setitem__(self, key, value):
raise RuntimeError("NoneDict is readonly")
class Node(defaultdict):
_id = None
def __init__(self, id_=None):
self._id = id_
super().__init__(NoneDict)
def __repr__(self):
return "Node(%s)" % self.id
@property
def id(self):
return self._id
def __hash__(self):
"""Generate hash from the node's id.
WARNING: Obviously this hash doesn't cover all of the node's
data, but we need nodes to be hashable in order to eliminate
duplicates in the NodeDB.
At least the id cannot change after initialization...
"""
return hash(self.id)
def deep_update(self, other):
"""Update the dictionary like dict.update() but recursively."""
def dmerge(a, b):
for k, v in b.items():
if isinstance(v, dict) and isinstance(a.get(k), dict):
dmerge(a[k], v)
else:
a[k] = v
dmerge(self, other)
@property
def vpn_neighbors(self):
try:
vpn_neighbors = []
for neighbor in self['neighbors']:
if neighbor['neighbor']['vpn']:
vpn_neighbors.append(neighbor)
return vpn_neighbors
except TypeError:
return []
def export(self):
"""Generate a serializable dict of the node.
In particular, this replaces any references to other nodes by
their id to prevent circular references.
"""
ret = dict(self)
if "neighbors" in self:
ret["neighbors"] = []
for neighbor in self["neighbors"]:
new_neighbor = {}
for key, val in neighbor.items():
if isinstance(val, Node):
new_neighbor[key] = val.id
else:
new_neighbor[key] = val
ret["neighbors"].append(new_neighbor)
if "id" not in ret:
ret["id"] = self.id
return ret
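The effect of backing Node with NoneDict can be sketched as follows (a trimmed mirror of the classes above, for illustration): arbitrarily deep lookups on missing keys stay falsy instead of raising KeyError.

```python
from collections import defaultdict

class NoneDict:
    """Trimmed mirror of ffmap.node.NoneDict."""
    def __repr__(self):
        return 'NoneDict()'
    def __bool__(self):
        return False
    def __getitem__(self, k):
        return NoneDict()

class Node(defaultdict):
    """Trimmed mirror of ffmap.node.Node: missing keys yield NoneDict."""
    def __init__(self, id_=None):
        self._id = id_
        super().__init__(NoneDict)

n = Node("aa:bb:cc:dd:ee:ff")
# No KeyError anywhere along the chain -- just a falsy placeholder:
print(bool(n["statistics"]["traffic"]["rx"]["bytes"]))  # False
n["hostname"] = "node1"
print(n["hostname"])  # node1
```

This is what makes expressions like `neighbor["neighbor"]["vpn"]` safe in the exporters even when a node carries no such attribute.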

ffmap/nodedb.py Normal file

@ -0,0 +1,60 @@
from .node import Node
class AmbiguityError(Exception):
"""Indicate the ambiguity of identifiers.
This exception is raised if there is more than one match for a set
of identifiers.
Attributes:
identifiers -- set of ambiguous identifiers
"""
identifiers = []
def __init__(self, identifiers):
self.identifiers = identifiers
def __str__(self):
return "Ambiguous identifiers: %s" % ", ".join(self.identifiers)
class NodeDB(dict):
def add_or_update(self, ids, other=None):
"""Add or update a node in the database.
Searches for an already existing node and updates it, or adds a new
one if no existing one is found. Raises an AmbiguityError if more
than one distinct node matches the given identifiers.
Arguments:
ids -- list of possible identifiers (probably MAC addresses) of the
node
other -- dict of values to update in an existing node or add to
the new one. Defaults to None, in which case no values
are added or updated, only the aliases of the
(possibly freshly created) node are updated.
"""
# Find existing node, if any
node = None
node_id = None
for id_ in ids:
if id_ == node_id:
continue
if id_ in self:
if node is not None and node is not self[id_]:
raise AmbiguityError([node_id, id_])
node = self[id_]
node_id = id_
# If no node was found, create a new one
if node is None:
node = Node(ids[0])
# Update the node with the given properties using its own update method.
if other is not None:
node.deep_update(other)
# Add new aliases if any
for id_ in ids:
self[id_] = node
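The alias mechanism in add_or_update can be illustrated with a stripped-down sketch (plain dicts, no deep_update and no ambiguity check): every identifier maps to the same node object, so data arriving under any alias reaches the same node.

```python
db = {}

def add_or_update(ids, other=None):
    """Simplified sketch of NodeDB.add_or_update."""
    # Find an existing node under any of the given identifiers
    node = next((db[i] for i in ids if i in db), None)
    if node is None:
        node = {"id": ids[0]}
    if other is not None:
        node.update(other)
    # Register the node under all identifiers (aliases)
    for i in ids:
        db[i] = node

add_or_update(["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"])
add_or_update(["aa:bb:cc:dd:ee:02"], {"hostname": "node1"})
# The update is visible through the other alias as well:
print(db["aa:bb:cc:dd:ee:01"]["hostname"])  # node1
```

The MAC addresses are invented for the example; the real class additionally merges nested dicts via deep_update and raises AmbiguityError when identifiers point at different nodes.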

View File

@ -0,0 +1 @@

ffmap/outputs/d3json.py Normal file

@ -0,0 +1,91 @@
import json
from datetime import datetime
__all__ = ["Exporter"]
class CustomJSONEncoder(json.JSONEncoder):
"""
JSON encoder that uses an object's __json__() method to convert it
to something JSON-compatible.
"""
def default(self, obj):
try:
return obj.__json__()
except AttributeError:
pass
return super().default(obj)
class Output:
def __init__(self, filepath="nodes.json"):
self.filepath = filepath
@staticmethod
def generate(nodedb):
indexes = {}
nodes = []
count = 0
for node in set(nodedb.values()):
node_export = node.export()
node_export["flags"] = {
"gateway": "vpn" in node and node["vpn"],
"client": False,
"online": True
}
nodes.append(node_export)
indexes[node.id] = count
count += 1
links = {}
for node in set(nodedb.values()):
for neighbor in node.get("neighbors", []):
key = (neighbor["neighbor"].id, node.id)
rkey = tuple(reversed(key))
if rkey in links:
links[rkey]["quality"] += ","+neighbor["metric"]
else:
links[key] = {
"source": indexes[node.id],
"target": indexes[neighbor["neighbor"].id],
"quality": neighbor["metric"],
"type": "vpn" if neighbor["neighbor"]["vpn"] or node["vpn"] else None,
"id": "-".join((node.id, neighbor["neighbor"].id)),
}
clientcount = 0
for client in node.get("clients", []):
nodes.append({
"id": "%s-%s" % (node.id, clientcount),
"flags": {
"client": True,
"online": True,
"gateway": False
}
})
indexes[client] = count
links[(node.id, client)] = {
"source": indexes[node.id],
"target": indexes[client],
"quality": "TT",
"type": "client",
"id": "%s-%i" % (node.id, clientcount),
}
count += 1
clientcount += 1
return {
"nodes": nodes,
"links": list(links.values()),
"meta": {
"timestamp": datetime.utcnow()
.replace(microsecond=0)
.isoformat()
}
}
def output(self, nodedb):
with open(self.filepath, "w") as nodes_json:
json.dump(
self.generate(nodedb),
nodes_json,
cls=CustomJSONEncoder
)
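The encoder's fallback to `__json__()` is what lets NoneDict placeholders serialize as null. A minimal demonstration (encoder copied from above, with a stand-in NoneDict):

```python
import json

class CustomJSONEncoder(json.JSONEncoder):
    """Use an object's __json__() method as serialization fallback."""
    def default(self, obj):
        try:
            return obj.__json__()
        except AttributeError:
            pass
        return super().default(obj)

class NoneDict:
    """Stand-in for ffmap.node.NoneDict."""
    def __json__(self):
        return None

print(json.dumps({"vpn": NoneDict()}, cls=CustomJSONEncoder))  # {"vpn": null}
```

Without the custom encoder, `json.dumps` would raise a TypeError on the non-serializable object.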

ffmap/outputs/rrd.py Normal file

@ -0,0 +1,30 @@
import os
from ffmap.rrd.rrds import NodeRRD, GlobalRRD
class Output:
def __init__(self, directory="nodedb"):
self.directory = directory
try:
os.mkdir(self.directory)
except OSError:
pass
def output(self, nodedb):
nodes = set(nodedb.values())
clients = 0
nodecount = 0
for node in nodes:
clients += len(node.get("clients", []))
nodecount += 1
NodeRRD(
os.path.join(
self.directory,
str(node.id).replace(':', '') + '.rrd'
),
node
).update()
GlobalRRD(os.path.join(self.directory, "nodes.rrd")).update(
nodecount,
clients
)

View File

@ -1,20 +1,19 @@
 import subprocess
 import re
-import io
 import os
+from tempfile import TemporaryFile
 from operator import xor, eq
 from functools import reduce
 from itertools import starmap
 import math
 class RRDIncompatibleException(Exception):
 """
 Is raised when an RRD doesn't have the desired definition and cannot be
 upgraded to it.
 """
 pass
 class RRDOutdatedException(Exception):
 """
 Is raised when an RRD doesn't have the desired definition, but can be
@ -26,8 +25,7 @@ if not hasattr(__builtins__, "FileNotFoundError"):
 class FileNotFoundError(Exception):
 pass
-class RRD:
+class RRD(object):
 """
 An RRD is a Round Robin Database, a database which forgets old data and
 aggregates multiple records into new ones.
@ -51,7 +49,7 @@ class RRD(object):
 def _exec_rrdtool(self, cmd, *args, **kwargs):
 pargs = ["rrdtool", cmd, self.filename]
-for k, v in kwargs.items():
+for k,v in kwargs.items():
 pargs.extend(["--" + k, str(v)])
 pargs.extend(args)
 subprocess.check_output(pargs)
@ -59,7 +57,7 @@ class RRD(object):
 def __init__(self, filename):
 self.filename = filename
-def ensure_sanity(self, ds_list, rra_list, **kwargs):
+def ensureSanity(self, ds_list, rra_list, **kwargs):
 """
 Create or upgrade the RRD file if necessary to contain all DS in
 ds_list. If it needs to be created, the RRAs in rra_list and any kwargs
@ -67,13 +65,13 @@ class RRD(object):
 database are NOT modified!
 """
 try:
-self.check_sanity(ds_list)
+self.checkSanity(ds_list)
 except FileNotFoundError:
 self.create(ds_list, rra_list, **kwargs)
 except RRDOutdatedException:
 self.upgrade(ds_list)
-def check_sanity(self, ds_list=()):
+def checkSanity(self, ds_list=()):
 """
 Check if the RRD file exists and contains (at least) the DS listed in
 ds_list.
@ -83,11 +81,8 @@ class RRD(object):
 info = self.info()
 if set(ds_list) - set(info['ds'].values()) != set():
 for ds in ds_list:
-if ds.name in info['ds'] and\
-ds.type != info['ds'][ds.name].type:
-raise RRDIncompatibleException(
-"{} is {} but should be {}".format(
-ds.name, ds.type, info['ds'][ds.name].type))
+if ds.name in info['ds'] and ds.type != info['ds'][ds.name].type:
+raise RRDIncompatibleException("%s is %s but should be %s" % (ds.name, ds.type, info['ds'][ds.name].type))
 else:
 raise RRDOutdatedException()
@ -110,10 +105,8 @@ class RRD(object):
 if ds.name in info['ds']:
 old_ds = info['ds'][ds.name]
 if info['ds'][ds.name].type != ds.type:
-raise RuntimeError(
-"Cannot convert existing DS '{}'"
-"from type '{}' to '{}'".format(
-ds.name, old_ds.type, ds.type))
+raise RuntimeError('Cannot convert existing DS "%s" from type "%s" to "%s"' %
+(ds.name, old_ds.type, ds.type))
 ds.index = old_ds.index
 new_ds[ds.index] = ds
 else:
@ -123,11 +116,12 @@ class RRD(object):
 dump = subprocess.Popen(
 ["rrdtool", "dump", self.filename],
-stdout=subprocess.PIPE)
+stdout=subprocess.PIPE
+)
 restore = subprocess.Popen(
 ["rrdtool", "restore", "-", self.filename + ".new"],
-stdin=subprocess.PIPE)
+stdin=subprocess.PIPE
+)
 echo = True
 ds_definitions = True
 for line in dump.stdout:
@ -149,17 +143,19 @@ class RRD(object):
 <value>%s</value>
 <unknown_sec> %i </unknown_sec>
 </ds>
-""" % (ds.name,
-ds.type,
-ds.args[0],
-ds.args[1],
-ds.args[2],
-ds.last_ds,
-ds.value,
-ds.unknown_sec), "utf-8"))
+""" % (
+ds.name,
+ds.type,
+ds.args[0],
+ds.args[1],
+ds.args[2],
+ds.last_ds,
+ds.value,
+ds.unknown_sec)
+, "utf-8"))
 if b'</cdp_prep>' in line:
-restore.stdin.write(added_ds_num * b"""
+restore.stdin.write(added_ds_num*b"""
 <ds>
 <primary_value> NaN </primary_value>
 <secondary_value> NaN </secondary_value>
@ -173,7 +169,7 @@ class RRD(object):
 restore.stdin.write(
 line.replace(
 b'</row>',
-(added_ds_num * b'<v>NaN</v>') + b'</row>'
+(added_ds_num*b'<v>NaN</v>')+b'</row>'
 )
 )
@ -241,8 +237,7 @@ class RRD(object):
 for line in out.splitlines():
 base = info
 for match in self._info_regex.finditer(line):
-section, key, name, value = match.group(
-"section", "key", "name", "value")
+section, key, name, value = match.group("section", "key", "name", "value")
 if section and key:
 try:
 key = int(key)
@ -263,8 +258,7 @@ class RRD(object):
 base[name] = value
 dss = {}
 for name, ds in info['ds'].items():
-ds_obj = DS(name, ds['type'], ds['minimal_heartbeat'],
-ds['min'], ds['max'])
+ds_obj = DS(name, ds['type'], ds['minimal_heartbeat'], ds['min'], ds['max'])
 ds_obj.index = ds['index']
 ds_obj.last_ds = ds['last_ds']
 ds_obj.value = ds['value']
@ -273,14 +267,12 @@ class RRD(object):
 info['ds'] = dss
 rras = []
 for rra in info['rra'].values():
-rras.append(RRA(rra['cf'], rra['xff'],
-rra['pdp_per_row'], rra['rows']))
+rras.append(RRA(rra['cf'], rra['xff'], rra['pdp_per_row'], rra['rows']))
 info['rra'] = rras
 self._cached_info = info
 return info
-class DS:
+class DS(object):
 """
 DS stands for Data Source and represents one line of data points in a Round
 Robin Database (RRD).
@ -292,7 +284,6 @@ class DS(object):
 last_ds = 'U'
 value = 0
 unknown_sec = 0
 def __init__(self, name, dst, *args):
 self.name = name
 self.type = dst
@ -302,7 +293,7 @@ class DS(object):
 return "DS:%s:%s:%s" % (
 self.name,
 self.type,
-":".join(map(str, self._nan_to_u_args()))
+":".join(map(str, self._nan_to_U_args()))
 )
 def __repr__(self):
@ -314,23 +305,22 @@ class DS(object):
 )
 def __eq__(self, other):
-return all(starmap(eq, zip(self.compare_keys(), other.compare_keys())))
+return all(starmap(eq, zip(self._compare_keys(), other._compare_keys())))
 def __hash__(self):
-return reduce(xor, map(hash, self.compare_keys()))
+return reduce(xor, map(hash, self._compare_keys()))
-def _nan_to_u_args(self):
+def _nan_to_U_args(self):
 return tuple(
 'U' if type(arg) is float and math.isnan(arg)
 else arg
 for arg in self.args
 )
-def compare_keys(self):
+def _compare_keys(self):
-return self.name, self.type, self._nan_to_u_args()
+return (self.name, self.type, self._nan_to_U_args())
-class RRA:
+class RRA(object):
 def __init__(self, cf, *args):
 self.cf = cf
 self.args = args

ffmap/rrd/rrds.py Normal file

@ -0,0 +1,115 @@
import os
import subprocess
from ffmap.node import Node
from . import RRD, DS, RRA
class NodeRRD(RRD):
ds_list = [
DS('upstate', 'GAUGE', 120, 0, 1),
DS('clients', 'GAUGE', 120, 0, float('NaN')),
DS('neighbors', 'GAUGE', 120, 0, float('NaN')),
DS('vpn_neighbors', 'GAUGE', 120, 0, float('NaN')),
DS('loadavg', 'GAUGE', 120, 0, float('NaN')),
DS('rx_bytes', 'DERIVE', 120, 0, float('NaN')),
DS('rx_packets', 'DERIVE', 120, 0, float('NaN')),
DS('tx_bytes', 'DERIVE', 120, 0, float('NaN')),
DS('tx_packets', 'DERIVE', 120, 0, float('NaN')),
DS('mgmt_rx_bytes', 'DERIVE', 120, 0, float('NaN')),
DS('mgmt_rx_packets', 'DERIVE', 120, 0, float('NaN')),
DS('mgmt_tx_bytes', 'DERIVE', 120, 0, float('NaN')),
DS('mgmt_tx_packets', 'DERIVE', 120, 0, float('NaN')),
DS('forward_bytes', 'DERIVE', 120, 0, float('NaN')),
DS('forward_packets', 'DERIVE', 120, 0, float('NaN')),
]
rra_list = [
RRA('AVERAGE', 0.5, 1, 120), # 2 hours of 1 minute samples
RRA('AVERAGE', 0.5, 5, 1440), # 5 days of 5 minute samples
RRA('AVERAGE', 0.5, 60, 720), # 30 days of 1 hour samples
RRA('AVERAGE', 0.5, 720, 730), # 1 year of 12 hour samples
]
def __init__(self, filename, node=None):
"""
Create a new RRD for a given node.
If the RRD isn't supposed to be updated, the node can be omitted.
"""
self.node = node
super().__init__(filename)
self.ensureSanity(self.ds_list, self.rra_list, step=60)
@property
def imagename(self):
return os.path.basename(self.filename).rsplit('.', 2)[0] + ".png"
def update(self):
values = {
'upstate': 1,
'clients': float(len(self.node.get('clients', []))),
'neighbors': float(len(self.node.get('neighbors', []))),
'vpn_neighbors': float(len(self.node.vpn_neighbors)),
'loadavg': float(self.node['statistics']['loadavg']),
}
for item in ('rx', 'tx', 'mgmt_rx', 'mgmt_tx', 'forward'):
try:
values[item + '_bytes'] = int(self.node['statistics']['traffic'][item]['bytes'])
except TypeError:
pass
try:
values[item + '_packets'] = int(self.node['statistics']['traffic'][item]['packets'])
except TypeError:
pass
super().update(values)
def graph(self, directory, timeframe):
"""
Create a graph in the given directory. The file will be named
basename.png if the RRD file is named basename.rrd
"""
args = ['rrdtool', 'graph', os.path.join(directory, self.imagename),
'-s', '-' + timeframe,
'-w', '800',
'-h', '400',
'-l', '0',
'-y', '1:1',
'DEF:clients=' + self.filename + ':clients:AVERAGE',
'VDEF:maxc=clients,MAXIMUM',
'CDEF:c=0,clients,ADDNAN',
'CDEF:d=clients,UN,maxc,UN,1,maxc,IF,*',
'AREA:c#0F0:up\\l',
'AREA:d#F00:down\\l',
'LINE1:c#00F:clients connected\\l',
]
subprocess.check_output(args)
class GlobalRRD(RRD):
ds_list = [
# Number of nodes available
DS('nodes', 'GAUGE', 120, 0, float('NaN')),
# Number of clients available
DS('clients', 'GAUGE', 120, 0, float('NaN')),
]
rra_list = [
RRA('AVERAGE', 0.5, 1, 120), # 2 hours of 1 minute samples
RRA('AVERAGE', 0.5, 60, 744), # 31 days of 1 hour samples
RRA('AVERAGE', 0.5, 1440, 1780),# ~5 years of 1 day samples
]
def __init__(self, filepath):
super().__init__(filepath)
self.ensureSanity(self.ds_list, self.rra_list, step=60)
def update(self, nodeCount, clientCount):
super().update({'nodes': nodeCount, 'clients': clientCount})
def graph(self, filename, timeframe):
args = ["rrdtool", 'graph', filename,
'-s', '-' + timeframe,
'-w', '800',
'-h', '400',
'DEF:nodes=' + self.filename + ':nodes:AVERAGE',
'LINE1:nodes#F00:nodes\\l',
'DEF:clients=' + self.filename + ':clients:AVERAGE',
'LINE2:clients#00F:clients',
]
subprocess.check_output(args)
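The retention periods in the RRA comments follow directly from step * pdp_per_row * rows. A quick arithmetic check of the NodeRRD list (with step=60 as passed to ensureSanity):

```python
# Verify the retention comments above: coverage = step * pdp_per_row * rows
step = 60  # seconds per primary data point, as in step=60 above
rra_list = [
    ("2 hours of 1 minute samples", 1, 120),
    ("5 days of 5 minute samples", 5, 1440),
    ("30 days of 1 hour samples", 60, 720),
    ("1 year of 12 hour samples", 720, 730),
]
for label, pdp_per_row, rows in rra_list:
    seconds = step * pdp_per_row * rows
    print("%-30s %8.2f days" % (label, seconds / 86400))
```

This confirms the comments: 7200 s (2 h), 432000 s (5 d), 2592000 s (30 d), and 31536000 s (365 d) respectively.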

ffmap/run.py Normal file

@ -0,0 +1,69 @@
#!/usr/bin/env python3
import argparse
import sys
from ffmap import run_names
class MyAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
if self.dest.startswith(("input_", "output_")):
collection_name = self.dest.split("_")[0] + "s"
name = self.dest.split("_", 1)[1]
if not hasattr(namespace, collection_name):
setattr(namespace, collection_name, [])
collection = getattr(namespace, collection_name)
collection.append({
"name": name,
"options": {self.metavar.lower(): values}
if values is not None else {}
})
else:
raise Exception("Unexpected dest=" + self.dest)
def parser_add_myarg(parser, name, metavar="OPT", help=None):
parser.add_argument("--" + name,
metavar=metavar,
type=str,
nargs='?',
const=None,
action=MyAction,
help=help)
parser = argparse.ArgumentParser(
description="""Merge node data from multiple sources and generate
various output formats from this data""",
)
input_group = parser.add_argument_group("Inputs", description="""
Inputs are used in the order given on the command line: later inputs
can overwrite equally-named attributes from earlier inputs, but the
first input encountering a node sets its id, which is immutable
afterwards.
The same input can be given multiple times, probably with different
options.
""")
output_group = parser.add_argument_group("Outputs")
parser_add_myarg(input_group, 'input-alfred', metavar="REQUEST_DATA_TYPE",
help="read node details from A.L.F.R.E.D.")
parser_add_myarg(input_group, 'input-wiki', metavar="URL",
help="read node details from a Wiki page")
parser_add_myarg(input_group, 'input-batadv', metavar="MESH_INTERFACE",
help="add node's neighbors and clients from batadv-vis")
parser_add_myarg(output_group, 'output-d3json', metavar="FILEPATH",
help="generate JSON file compatible with ffmap-d3")
parser_add_myarg(output_group, 'output-rrd', metavar="DIRECTORY",
help="update RRDs with statistics, one global and one per node")
args = parser.parse_args()
if "inputs" not in args or not args.inputs:
parser.print_help(sys.stderr)
sys.stderr.write("\nERROR: No input has been defined!\n")
sys.exit(1)
if "outputs" not in args or not args.outputs:
parser.print_help(sys.stderr)
sys.stderr.write("\nERROR: No output has been defined!\n")
sys.exit(1)
run_names(inputs=args.inputs, outputs=args.outputs)
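The custom action above collects repeated --input-*/--output-* options into lists of {name, options} dicts. A trimmed mirror (the dest prefix check is omitted here for brevity) shows the resulting structure:

```python
import argparse

class MyAction(argparse.Action):
    """Trimmed mirror of ffmap.run.MyAction, for illustration."""
    def __call__(self, parser, namespace, values, option_string=None):
        collection_name = self.dest.split("_")[0] + "s"   # e.g. "inputs"
        name = self.dest.split("_", 1)[1]                 # e.g. "alfred"
        if not hasattr(namespace, collection_name):
            setattr(namespace, collection_name, [])
        getattr(namespace, collection_name).append({
            "name": name,
            "options": {self.metavar.lower(): values}
            if values is not None else {}
        })

parser = argparse.ArgumentParser()
parser.add_argument("--input-alfred", metavar="REQUEST_DATA_TYPE",
                    type=str, nargs='?', const=None, action=MyAction)
args = parser.parse_args(["--input-alfred", "158"])
print(args.inputs)  # [{'name': 'alfred', 'options': {'request_data_type': '158'}}]
```

The metavar doubles as the option key, which is why each parser_add_myarg call above pairs a flag with exactly one named option.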

View File

@ -1,74 +0,0 @@
"""
RRD for gateways
"""
import os
import subprocess
from lib.RRD import DS, RRA, RRD
class GateRRD(RRD):
ds_list = [
DS('upstate', 'GAUGE', 120, 0, 1),
DS('clients', 'GAUGE', 120, 0, float('NaN')),
DS('loadavg', 'GAUGE', 120, 0, float('NaN')),
DS('leases', 'GAUGE', 120, 0, float('NaN')),
]
rra_list = [
RRA('AVERAGE', 0.5, 1, 120), # 2 hours of 1 minute samples
RRA('AVERAGE', 0.5, 5, 1440), # 5 days of 5 minute samples
RRA('AVERAGE', 0.5, 15, 672), # 7 days of 15 minute samples
RRA('AVERAGE', 0.5, 60, 720), # 30 days of 1 hour samples
RRA('AVERAGE', 0.5, 720, 730), # 1 year of 12 hour samples
]
def __init__(self, filename, node=None):
"""
Create a new RRD for a given node.
If the RRD isn't supposed to be updated, the node can be omitted.
"""
self.node = node
super().__init__(filename)
self.ensure_sanity(self.ds_list, self.rra_list, step=60)
@property
def imagename(self):
return "{basename}.png".format(
basename=os.path.basename(self.filename).rsplit('.', 2)[0])
# TODO: fix this, python does not support function overloading
def update(self):
values = {
'upstate': int(self.node['flags']['online']),
'clients': float(self.node['statistics']['clients']),
}
if 'loadavg' in self.node['statistics']:
values['loadavg'] = float(self.node['statistics'].get('loadavg', 0))
# Gateways can send the peer count. We use the clients field to store data
if 'peers' in self.node['statistics']:
values['clients'] = self.node['statistics']['peers']
if 'leases' in self.node['statistics']:
values['leases'] = self.node['statistics']['leases']
super().update(values)
def graph(self, directory, timeframe):
"""
Create a graph in the given directory. The file will be named
basename.png if the RRD file is named basename.rrd
"""
args = ['rrdtool', 'graph', os.path.join(directory, self.imagename),
'-s', '-' + timeframe,
'-w', '800',
'-h', '400',
'-l', '0',
'-y', '1:1',
'DEF:clients=' + self.filename + ':clients:AVERAGE',
'VDEF:maxc=clients,MAXIMUM',
'CDEF:c=0,clients,ADDNAN',
'CDEF:d=clients,UN,maxc,UN,1,maxc,IF,*',
'AREA:c#0F0:up\\l',
'AREA:d#F00:down\\l',
'LINE1:c#00F:clients connected\\l']
subprocess.check_output(args)

View File

@ -1,40 +0,0 @@
import os
import subprocess
from lib.RRD import DS, RRA, RRD
class GlobalRRD(RRD):
ds_list = [
# Number of nodes available
DS('nodes', 'GAUGE', 120, 0, float('NaN')),
# Number of client available
DS('clients', 'GAUGE', 120, 0, float('NaN')),
]
rra_list = [
# 2 hours of 1 minute samples
RRA('AVERAGE', 0.5, 1, 120),
# 31 days of 1 hour samples
RRA('AVERAGE', 0.5, 60, 744),
# ~5 years of 1 day samples
RRA('AVERAGE', 0.5, 1440, 1780),
]
def __init__(self, directory):
super().__init__(os.path.join(directory, "nodes.rrd"))
self.ensure_sanity(self.ds_list, self.rra_list, step=60)
# TODO: fix this, python does not support function overloading
def update(self, node_count, client_count):
super().update({'nodes': node_count, 'clients': client_count})
def graph(self, filename, timeframe):
args = ["rrdtool", 'graph', filename,
'-s', '-' + timeframe,
'-w', '800',
'-h' '400',
'DEF:nodes=' + self.filename + ':nodes:AVERAGE',
'LINE1:nodes#F00:nodes\\l',
'DEF:clients=' + self.filename + ':clients:AVERAGE',
'LINE2:clients#00F:clients']
subprocess.check_output(args)

View File

@ -1,67 +0,0 @@
"""
RRD for nodes
"""
import os
import subprocess
from lib.RRD import DS, RRA, RRD
class NodeRRD(RRD):
ds_list = [
DS('upstate', 'GAUGE', 120, 0, 1),
DS('clients', 'GAUGE', 120, 0, float('NaN')),
DS('loadavg', 'GAUGE', 120, 0, float('NaN')),
]
rra_list = [
RRA('AVERAGE', 0.5, 1, 120), # 2 hours of 1 minute samples
RRA('AVERAGE', 0.5, 5, 1440), # 5 days of 5 minute samples
RRA('AVERAGE', 0.5, 60, 720), # 30 days of 1 hour samples
RRA('AVERAGE', 0.5, 720, 730), # 1 year of 12 hour samples
]
def __init__(self, filename, node=None):
"""
Create a new RRD for a given node.
If the RRD isn't supposed to be updated, the node can be omitted.
"""
self.node = node
super().__init__(filename)
self.ensure_sanity(self.ds_list, self.rra_list, step=60)
@property
def imagename(self):
return "{basename}.png".format(
basename=os.path.basename(self.filename).rsplit('.', 2)[0])
# TODO: fix this, python does not support function overloading
def update(self):
values = {
'upstate': int(self.node['flags']['online']),
'clients': self.node['statistics']['clients']
}
if 'loadavg' in self.node['statistics']:
values['loadavg'] = float(self.node['statistics']['loadavg'])
super().update(values)
def graph(self, directory, timeframe):
"""
Create a graph in the given directory. The file will be named
basename.png if the RRD file is named basename.rrd
"""
args = ['rrdtool', 'graph', os.path.join(directory, self.imagename),
'-s', '-' + timeframe,
'-w', '800',
'-h', '400',
'-l', '0',
'-y', '1:1',
'DEF:clients=' + self.filename + ':clients:AVERAGE',
'VDEF:maxc=clients,MAXIMUM',
'CDEF:c=0,clients,ADDNAN',
'CDEF:d=clients,UN,maxc,UN,1,maxc,IF,*',
'AREA:c#0F0:up\\l',
'AREA:d#F00:down\\l',
'LINE1:c#00F:clients connected\\l']
subprocess.check_output(args)

View File

@ -1 +0,0 @@
__author__ = 'hexa'

View File

@ -1,37 +0,0 @@
import subprocess
import json
import os
class Alfred(object):
"""
Bindings for the alfred-json utility
"""
def __init__(self, unix_sockpath=None):
self.unix_sock = unix_sockpath
if unix_sockpath is not None and not os.path.exists(unix_sockpath):
raise RuntimeError('alfred: invalid unix socket path given')
def _fetch(self, data_type):
cmd = ['alfred-json',
'-z',
'-f', 'json',
'-r', str(data_type)]
if self.unix_sock:
cmd.extend(['-s', self.unix_sock])
# There should not be any warnings which would be sent by cron
# every minute. Therefore suppress error output of called program
FNULL = open(os.devnull, 'w')
output = subprocess.check_output(cmd, stderr=FNULL)
FNULL.close()
return json.loads(output.decode("utf-8")).values()
def nodeinfo(self):
return self._fetch(158)
def statistics(self):
return self._fetch(159)
def vis(self):
return self._fetch(160)

View File

@ -1,101 +0,0 @@
import subprocess
import json
import os
import re
class Batman(object):
"""
Bindings for B.A.T.M.A.N. Advanced
commandline interface "batctl"
"""
def __init__(self, mesh_interface='bat0', alfred_sockpath=None):
self.mesh_interface = mesh_interface
self.alfred_sock = alfred_sockpath
# ensure /usr/sbin and /usr/local/sbin are in PATH
env = os.environ
path = set(env['PATH'].split(':'))
path.add('/usr/sbin/')
path.add('/usr/local/sbin')
env['PATH'] = ':'.join(path)
self.environ = env
# compile regular expressions only once on startup
self.mac_addr_pattern = re.compile(r'(([a-f0-9]{2}:){5}[a-f0-9]{2})')
def vis_data(self):
return self.vis_data_batadv_vis()
@staticmethod
def vis_data_helper(lines):
vd_tmp = []
for line in lines:
try:
utf8_line = line.decode('utf-8')
vd_tmp.append(json.loads(utf8_line))
except UnicodeDecodeError:
pass
return vd_tmp
def vis_data_batadv_vis(self):
"""
Parse "batadv-vis -i <mesh_interface> -f json"
into an array of dictionaries.
"""
cmd = ['batadv-vis', '-i', self.mesh_interface, '-f', 'json']
if self.alfred_sock:
cmd.extend(['-u', self.alfred_sock])
output = subprocess.check_output(cmd, env=self.environ)
lines = output.splitlines()
return self.vis_data_helper(lines)
def gateway_list(self):
"""
Parse "batctl meshif <mesh_interface> gwl -n"
into an array of dictionaries.
"""
cmd = ['batctl', 'meshif', self.mesh_interface, 'gwl', '-n']
if os.geteuid() > 0:
cmd.insert(0, 'sudo')
output = subprocess.check_output(cmd, env=self.environ)
output_utf8 = output.decode('utf-8')
rows = output_utf8.splitlines()
gateways = []
# local gateway
header = rows.pop(0)
mode, bandwidth = self.gateway_mode()
if mode == 'server':
local_gw_mac = self.mac_addr_pattern.search(header).group(0)
gateways.append(local_gw_mac)
# remote gateway(s)
for row in rows:
match = self.mac_addr_pattern.search(row)
if match:
gateways.append(match.group(1))
return gateways
def gateway_mode(self):
"""
Parse "batctl meshif <mesh_interface> gw"
return: tuple mode, bandwidth, if mode != server then bandwidth is None
"""
cmd = ['batctl', 'meshif', self.mesh_interface, 'gw']
if os.geteuid() > 0:
cmd.insert(0, 'sudo')
output = subprocess.check_output(cmd, env=self.environ)
chunks = output.decode("utf-8").split()
return chunks[0], chunks[3] if 3 in chunks else None
if __name__ == "__main__":
bc = Batman()
vd = bc.vis_data()
gw = bc.gateway_list()
for x in vd:
print(x)
print(gw)
print(bc.gateway_mode())

View File

@ -1,85 +0,0 @@
from functools import reduce
from itertools import chain

import networkx as nx

from lib.nodes import build_mac_table


def import_vis_data(graph, nodes, vis_data):
    macs = build_mac_table(nodes)
    nodes_a = map(lambda d: 2 * [d['primary']],
                  filter(lambda d: 'primary' in d, vis_data))
    nodes_b = map(lambda d: [d['secondary'], d['of']],
                  filter(lambda d: 'secondary' in d, vis_data))
    graph.add_nodes_from(map(lambda a, b:
                             (a, dict(primary=b, node_id=macs.get(b))),
                             *zip(*chain(nodes_a, nodes_b))))

    edges = filter(lambda d: 'neighbor' in d, vis_data)
    graph.add_edges_from(map(lambda d: (d['router'], d['neighbor'],
                                        dict(tq=float(d['label']))), edges))


def mark_vpn(graph, vpn_macs):
    components = map(frozenset, nx.weakly_connected_components(graph))
    components = filter(vpn_macs.intersection, components)
    nodes = reduce(lambda a, b: a | b, components, set())

    for node in nodes:
        for k, v in graph[node].items():
            v['vpn'] = True


def to_multigraph(graph):
    def f(a):
        node = graph.node[a]
        return node['primary'] if node else a

    def map_node(node, data):
        return (data['primary'],
                dict(node_id=data['node_id'])) if data else (node, dict())

    digraph = nx.MultiDiGraph()
    digraph.add_nodes_from(map(map_node, *zip(*graph.nodes_iter(data=True))))
    digraph.add_edges_from(map(lambda a, b, data: (f(a), f(b), data),
                               *zip(*graph.edges_iter(data=True))))

    return digraph


def merge_nodes(graph):
    def merge_edges(data):
        tq = min(map(lambda d: d['tq'], data))
        vpn = all(map(lambda d: d.get('vpn', False), data))
        return dict(tq=tq, vpn=vpn)

    multigraph = to_multigraph(graph)
    digraph = nx.DiGraph()
    digraph.add_nodes_from(multigraph.nodes_iter(data=True))
    edges = chain.from_iterable([[(e, d, merge_edges(
        multigraph[e][d].values()))
        for d in multigraph[e]] for e in multigraph])
    digraph.add_edges_from(edges)

    return digraph


def to_undirected(graph):
    multigraph = nx.MultiGraph()
    multigraph.add_nodes_from(graph.nodes_iter(data=True))
    multigraph.add_edges_from(graph.edges_iter(data=True))

    def merge_edges(data):
        tq = max(map(lambda d: d['tq'], data))
        vpn = all(map(lambda d: d.get('vpn', False), data))
        return dict(tq=tq, vpn=vpn, bidirect=len(data) == 2)

    graph = nx.Graph()
    graph.add_nodes_from(multigraph.nodes_iter(data=True))
    edges = chain.from_iterable([[(e, d, merge_edges(
        multigraph[e][d].values()))
        for d in multigraph[e]] for e in multigraph])
    graph.add_edges_from(edges)

    return graph
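The merge rule in to_undirected can be sketched without networkx: fold directed (a, b, tq) edges into unordered pairs, keep the maximum tq, and flag the pair as bidirectional once both directions have been seen. Node names and tq values below are toy data, not real vis output.

```python
edges = [("A", "B", 1.0), ("B", "A", 1.2), ("B", "C", 1.1)]

merged = {}
for a, b, tq in edges:
    key = frozenset((a, b))  # unordered node pair
    if key in merged:
        # second direction seen: keep the larger tq, mark bidirectional
        merged[key]["tq"] = max(merged[key]["tq"], tq)
        merged[key]["bidirect"] = True
    else:
        merged[key] = {"tq": tq, "bidirect": False}
```

This mirrors `bidirect=len(data) == 2` above: only pairs with edges in both directions end up flagged.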


@ -1,32 +0,0 @@
def export_nodelist(now, nodedb):
    nodelist = list()

    for node_id, node in nodedb["nodes"].items():
        node_out = dict()
        node_out["id"] = node_id
        node_out["name"] = node["nodeinfo"]["hostname"]

        if "location" in node["nodeinfo"]:
            node_out["position"] = {"lat": node["nodeinfo"]["location"]["latitude"],
                                    "long": node["nodeinfo"]["location"]["longitude"]}

        node_out["status"] = dict()
        node_out["status"]["online"] = node["flags"]["online"]

        if "firstseen" in node:
            node_out["status"]["firstcontact"] = node["firstseen"]

        if "lastseen" in node:
            node_out["status"]["lastcontact"] = node["lastseen"]

        if "clients" in node["statistics"]:
            node_out["status"]["clients"] = node["statistics"]["clients"]

        if "role" in node["nodeinfo"]["system"]:
            node_out["role"] = node["nodeinfo"]["system"]["role"]
        else:
            node_out["role"] = "node"

        nodelist.append(node_out)

    return {"version": "1.0.1", "nodes": nodelist, "updated_at": now.isoformat()}
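The resulting nodelist document has roughly the following shape; all field values below are made up for illustration, only the key names follow the exporter above.

```python
import json
from datetime import datetime

now = datetime(2014, 9, 23, 12, 0, 0)
nodelist = {
    "version": "1.0.1",
    "updated_at": now.isoformat(),
    "nodes": [{
        "id": "deadbeef0001",          # hypothetical node_id
        "name": "example-node",        # hypothetical hostname
        "position": {"lat": 53.08, "long": 8.80},
        "status": {"online": True,
                   "firstcontact": "2014-07-01T00:00:00",
                   "lastcontact": now.isoformat(),
                   "clients": 5},
        "role": "node",
    }],
}

serialized = json.dumps(nodelist)
```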


@ -1,168 +0,0 @@
from collections import Counter, defaultdict
from datetime import datetime
from functools import reduce


def build_mac_table(nodes):
    macs = dict()
    for node_id, node in nodes.items():
        try:
            for mac in node['nodeinfo']['network']['mesh_interfaces']:
                macs[mac] = node_id
        except KeyError:
            pass

        try:
            for upper_if in node['nodeinfo']['network']['mesh'].values():
                for lower_if in upper_if['interfaces'].values():
                    for mac in lower_if:
                        macs[mac] = node_id
        except KeyError:
            pass

    return macs
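A condensed equivalent of build_mac_table, using `dict.get` instead of try/except, run against toy node data covering both nodeinfo layouts (the flat `mesh_interfaces` list and the nested `mesh` tree):

```python
def mac_table(nodes):
    macs = {}
    for node_id, node in nodes.items():
        net = node.get('nodeinfo', {}).get('network', {})
        # Old layout: flat list of mesh interface MACs
        for mac in net.get('mesh_interfaces', []):
            macs[mac] = node_id
        # New layout: mesh -> <batif> -> interfaces -> <type> -> [macs]
        for upper_if in net.get('mesh', {}).values():
            for lower_if in upper_if.get('interfaces', {}).values():
                for mac in lower_if:
                    macs[mac] = node_id
    return macs

# Toy node records, one per layout
nodes = {
    'n1': {'nodeinfo': {'network': {'mesh_interfaces': ['aa:bb:cc:dd:ee:01']}}},
    'n2': {'nodeinfo': {'network': {'mesh': {'bat0': {'interfaces': {
        'wireless': ['aa:bb:cc:dd:ee:02']}}}}}},
}
table = mac_table(nodes)
```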
def prune_nodes(nodes, now, days):
    prune = []
    for node_id, node in nodes.items():
        if 'lastseen' not in node:
            prune.append(node_id)
            continue

        lastseen = datetime.strptime(node['lastseen'], '%Y-%m-%dT%H:%M:%S')
        delta = (now - lastseen).days

        if delta >= days:
            prune.append(node_id)

    for node_id in prune:
        del nodes[node_id]
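The age cutoff can be illustrated with two toy records (dates chosen arbitrarily): one well past the cutoff, one recent.

```python
from datetime import datetime

nodes = {
    'old': {'lastseen': '2014-01-01T00:00:00'},
    'new': {'lastseen': '2014-09-20T00:00:00'},
}
now = datetime(2014, 9, 23)
days = 30

# Same rule as prune_nodes(): drop records without lastseen
# or last seen at least `days` days ago.
prune = [nid for nid, node in nodes.items()
         if 'lastseen' not in node
         or (now - datetime.strptime(node['lastseen'],
                                     '%Y-%m-%dT%H:%M:%S')).days >= days]
for nid in prune:
    del nodes[nid]
```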
def mark_online(node, now):
    node['lastseen'] = now.isoformat()
    node.setdefault('firstseen', now.isoformat())
    node['flags']['online'] = True


def import_nodeinfo(nodes, nodeinfos, now, assume_online=False):
    for nodeinfo in filter(lambda d: 'node_id' in d, nodeinfos):
        node = nodes.setdefault(nodeinfo['node_id'], {'flags': dict()})
        node['nodeinfo'] = nodeinfo
        node['flags']['online'] = False
        node['flags']['gateway'] = False

        if assume_online:
            mark_online(node, now)


def reset_statistics(nodes):
    for node in nodes.values():
        node['statistics'] = {'clients': 0}


def import_statistics(nodes, stats):
    def add(node, statistics, target, source, f=lambda d: d):
        try:
            node['statistics'][target] = f(reduce(dict.__getitem__,
                                                  source,
                                                  statistics))
        except (KeyError, TypeError, ZeroDivisionError):
            pass

    macs = build_mac_table(nodes)
    stats = filter(lambda d: 'node_id' in d, stats)
    stats = filter(lambda d: d['node_id'] in nodes, stats)
    for node, stats in map(lambda d: (nodes[d['node_id']], d), stats):
        add(node, stats, 'clients', ['clients', 'total'])
        add(node, stats, 'gateway', ['gateway'], lambda d: macs.get(d, d))
        add(node, stats, 'uptime', ['uptime'])
        add(node, stats, 'loadavg', ['loadavg'])
        add(node, stats, 'memory_usage', ['memory'],
            lambda d: 1 - d['free'] / d['total'])
        add(node, stats, 'rootfs_usage', ['rootfs_usage'])
        add(node, stats, 'traffic', ['traffic'])
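The `add` helper walks a key path into the nested statistics dict via `reduce(dict.__getitem__, ...)` and then applies an optional transform. The memory_usage case, for instance, reduces to (toy numbers):

```python
from functools import reduce

statistics = {'memory': {'free': 25600, 'total': 102400}}

# Walk the key path ['memory'] into the nested dict ...
value = reduce(dict.__getitem__, ['memory'], statistics)
# ... then apply the transform f: fraction of memory in use
memory_usage = 1 - value['free'] / value['total']
```

A missing key along the path raises KeyError, which `add` swallows so that the statistic is simply omitted.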
def import_mesh_ifs_vis_data(nodes, vis_data):
    macs = build_mac_table(nodes)

    mesh_ifs = defaultdict(lambda: set())
    for line in filter(lambda d: 'secondary' in d, vis_data):
        primary = line['of']
        mesh_ifs[primary].add(primary)
        mesh_ifs[primary].add(line['secondary'])

    def if_to_node(ifs):
        a = filter(lambda d: d in macs, ifs)
        a = map(lambda d: nodes[macs[d]], a)
        try:
            return next(a), ifs
        except StopIteration:
            return None

    mesh_nodes = filter(lambda d: d, map(if_to_node, mesh_ifs.values()))

    for v in mesh_nodes:
        node = v[0]

        ifs = set()

        try:
            ifs = ifs.union(set(node['nodeinfo']['network']['mesh_interfaces']))
        except KeyError:
            pass

        # Collect all interface classes of the bat0 mesh interface
        for if_type in ('wireless', 'tunnel', 'other'):
            try:
                ifs = ifs.union(set(node['nodeinfo']['network']['mesh']['bat0']['interfaces'][if_type]))
            except KeyError:
                pass

        node['nodeinfo']['network']['mesh_interfaces'] = list(ifs | v[1])
def import_vis_clientcount(nodes, vis_data):
    macs = build_mac_table(nodes)
    data = filter(lambda d: d.get('label', None) == 'TT', vis_data)
    data = filter(lambda d: d['router'] in macs, data)
    data = map(lambda d: macs[d['router']], data)

    for node_id, clientcount in Counter(data).items():
        nodes[node_id]['statistics'].setdefault('clients', clientcount)


def mark_gateways(nodes, gateways):
    macs = build_mac_table(nodes)
    gateways = filter(lambda d: d in macs, gateways)

    for node in map(lambda d: nodes[macs[d]], gateways):
        node['flags']['gateway'] = True


def mark_vis_data_online(nodes, vis_data, now):
    macs = build_mac_table(nodes)

    online = set()
    for line in vis_data:
        if 'primary' in line:
            online.add(line['primary'])
        elif 'secondary' in line:
            online.add(line['secondary'])
        elif 'gateway' in line:
            # This matches clients' MACs.
            # On pre-Gluon nodes the primary MAC will be one of them.
            online.add(line['gateway'])

    for mac in filter(lambda d: d in macs, online):
        mark_online(nodes[macs[mac]], now)
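import_vis_clientcount boils down to counting 'TT'-labeled vis lines per router MAC. A toy run, with MACs shortened for readability:

```python
from collections import Counter

# Hypothetical vis lines; 'TT' entries represent clients seen by a router
vis_data = [
    {"router": "aa:01", "gateway": "cc:01", "label": "TT"},
    {"router": "aa:01", "gateway": "cc:02", "label": "TT"},
    {"router": "aa:02", "gateway": "cc:03", "label": "TT"},
    {"router": "aa:01", "neighbor": "aa:02", "label": "1.000"},  # not a client
]

counts = Counter(d["router"] for d in vis_data if d.get("label") == "TT")
```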


@ -1,61 +0,0 @@
#!/usr/bin/env python3
import time
import os

from lib.GlobalRRD import GlobalRRD
from lib.NodeRRD import NodeRRD
from lib.GateRRD import GateRRD


class RRD(object):
    def __init__(self,
                 database_directory,
                 image_path,
                 display_time_global="7d",
                 display_time_node="1d"):
        self.dbPath = database_directory
        self.globalDb = GlobalRRD(self.dbPath)
        self.imagePath = image_path
        self.displayTimeGlobal = display_time_global
        self.displayTimeNode = display_time_node

        # Round down to full minutes; integer division keeps this an int
        # under Python 3 (plain "/" would yield a float).
        self.currentTimeInt = (int(time.time()) // 60) * 60
        self.currentTime = str(self.currentTimeInt)

    def update_database(self, nodes):
        online_nodes = dict(filter(
            lambda d: d[1]['flags']['online'], nodes.items()))
        client_count = sum(map(
            lambda d: d['statistics']['clients'], online_nodes.values()))

        # Refresh global database
        self.globalDb.update(len(online_nodes), client_count)

        # Refresh databases for all single nodes
        for node_id, node in online_nodes.items():
            if node['flags']['gateway']:
                rrd = GateRRD(os.path.join(self.dbPath, node_id + '.rrd'), node)
            else:
                rrd = NodeRRD(os.path.join(self.dbPath, node_id + '.rrd'), node)
            rrd.update()

    def update_images(self):
        # Create image path if it does not exist
        try:
            os.stat(self.imagePath)
        except OSError:
            os.mkdir(self.imagePath)

        self.globalDb.graph(os.path.join(self.imagePath, "globalGraph.png"),
                            self.displayTimeGlobal)

        nodedb_files = os.listdir(self.dbPath)

        for file_name in nodedb_files:
            if not os.path.isfile(os.path.join(self.dbPath, file_name)):
                continue

            # Guard against file names without an extension
            node_name = os.path.basename(file_name).split('.')
            if len(node_name) == 2 and node_name[1] == 'rrd' \
                    and node_name[0] != "nodes":
                rrd = NodeRRD(os.path.join(self.dbPath, file_name))
                rrd.graph(self.imagePath, self.displayTimeNode)


@ -1,19 +0,0 @@
import json


def validate_nodeinfos(nodeinfos):
    result = []

    for nodeinfo in nodeinfos:
        if validate_nodeinfo(nodeinfo):
            result.append(nodeinfo)

    return result


def validate_nodeinfo(nodeinfo):
    if 'location' in nodeinfo:
        if 'latitude' not in nodeinfo['location'] \
                or 'longitude' not in nodeinfo['location']:
            return False

    return True
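A condensed sketch of the same validation rule against toy inputs: a nodeinfo without a location passes, a location missing either coordinate is rejected.

```python
def valid(nodeinfo):
    # Same rule as validate_nodeinfo(): a location, when present,
    # must carry both latitude and longitude.
    loc = nodeinfo.get('location')
    if loc is not None and ('latitude' not in loc or 'longitude' not in loc):
        return False
    return True

ok = valid({'hostname': 'example'})
incomplete = valid({'location': {'latitude': 53.08}})
complete = valid({'location': {'latitude': 53.08, 'longitude': 8.80}})
```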

10
setup.py Normal file

@ -0,0 +1,10 @@
#!/usr/bin/env python3

from distutils.core import setup

setup(name='FFmap',
      version='0.1',
      description='Freifunk map backend',
      url='https://github.com/ffnord/ffmap-backend',
      packages=['ffmap', 'ffmap.inputs', 'ffmap.outputs', 'ffmap.rrd'],
      )