Module prometheus_sd.service
- prometheus_sd.service.filter_tasks(tasks)
  Filter out tasks that appear impossible to parse further.
- async prometheus_sd.service.get_target_objects(config, service)
  Docker Swarm candidate for get_targets(). In practice, though, it is an internal method that returns containers.
- prometheus_sd.service.get_targets(prom_config, target_objects)
  Get targets for the scrape config. In theory this should be part of get_target_objects(), so that get_targets() returns a complete list by itself.
  TODO: make this generic so targets can use one method, with proper parameters, to build a target object. One example would be to define target.get_context(), a method that returns a context usable by the service discovery method:
      scrape_config = internal.parse_config(config, target.get_context())
  Instead of hard-coding container._container["NetworkSettings"]["Networks"], it could be rewritten as basic rules such as:
      Expand(@container.NetworkSettings.Networks.*.IPAddress)
  This way each backend can define a set of rules specific to itself, but all backends would call the same machinery. This method could also be called at different levels: a prom_config can result in multiple targets being found for one object, while at a later point a similar method could parse the configs for the actual individual final targets.
  Get Targets: returns a list of targets.
      config = target.get_default_configs()
      config.update(prom_config)
      return internal.get_targets(config, target.get_context())
  Get Configs: returns a configuration.
      config = target.get_default_configs()
      config.update(prom_config)
      return internal.parse_configs(config, target.get_context())
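The Expand() rule described in the TODO above could be implemented as a small path-expansion helper. The sketch below is hypothetical (not part of prometheus_sd): it walks a dotted path against a nested context dict, with `*` fanning out over all values of a mapping, replacing hard-coded dictionary lookups.

```python
# Hypothetical sketch of the Expand(@path) rule idea: resolve a dotted
# path against a nested context dict; "*" expands over all dict values.

def expand(path, context):
    """Expand a dotted path like '@container.a.*.b' against a nested dict."""
    parts = path.lstrip("@").split(".")
    results = [context]
    for part in parts:
        next_results = []
        for node in results:
            if part == "*" and isinstance(node, dict):
                next_results.extend(node.values())
            elif isinstance(node, dict) and part in node:
                next_results.append(node[part])
        results = next_results
    return results

# Example context shaped like Docker's container inspect output.
context = {
    "container": {
        "NetworkSettings": {
            "Networks": {
                "bridge": {"IPAddress": "172.17.0.2"},
                "backend": {"IPAddress": "10.0.0.5"},
            }
        }
    }
}
print(expand("@container.NetworkSettings.Networks.*.IPAddress", context))
# ['172.17.0.2', '10.0.0.5']
```

With such a helper, each backend only has to supply its own context dict; the rule syntax stays the same across backends.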
- async prometheus_sd.service.listen_events(config)
  Listen for events and recreate the config whenever a container starts or stops.
- async prometheus_sd.service.load_existing_services(config)
  Rebuild all the services' scrape configurations.
- async prometheus_sd.service.load_service_configs(config, service)
  Load service configs. A service config has the label format shown in the table below.
  TODO: change this method so that service is a ServiceObject that has multiple targets. Each target has a context built from the service and the target itself. For example, a target has get_context(), which returns target.context and target.service.context combined. The context could expose something like this:
  Swarm mode:
      @container: the container
      @task: the task of the container
      @service: the service of the container
  Container mode:
      @container: the container
  Each backend would have context entries specific to that backend, but the way the context is generated and its properties are accessed would be standardized, so only a few methods would need to be implemented in a simple interface.
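The combination of target and service contexts described above can be sketched in a few lines. All class and attribute names here are hypothetical, illustrating only the merge rule (target-level keys win on conflict):

```python
# Hypothetical sketch of get_context(): a target's context is its
# service's context merged with the target's own context.

class Service:
    def __init__(self, context):
        self.context = context

class Target:
    def __init__(self, service, context):
        self.service = service
        self.context = context

    def get_context(self):
        # Start from the service context, then let target keys override.
        merged = dict(self.service.context)
        merged.update(self.context)
        return merged

service = Service({"@service": {"name": "web"}, "@task": {"id": "t1"}})
target = Target(service, {"@container": {"id": "c42"}})
print(sorted(target.get_context()))
# ['@container', '@service', '@task']
```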
  Label                                Value
  -----------------------------------  -----------------------------------------------------------
  prometheus.enable                    "true" | "false"
  prometheus.jobs.<job>.port           "port" | null                  # default 80
  prometheus.jobs.<job>.path           "/metrics" | null              # default /metrics
  prometheus.jobs.<job>.scheme         "http" | "https" | null        # default "http"
  prometheus.jobs.<job>.hosts          "host1,host2,host3" | null     # default: IPs of the containers
  prometheus.jobs.<job>.params.<key>   "value"
  prometheus.jobs.<job>.networks       "network1,network2,network3"   # default: all networks
  prometheus.jobs.<job>.labels.<key>   "value"
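A parser for this label scheme could look like the sketch below. This is a hypothetical illustration of the table's rules, not prometheus_sd's actual parser; the defaults match the table (port 80, path /metrics, scheme http):

```python
# Hypothetical sketch: turn container labels following the scheme in the
# table above into per-job scrape settings.

def parse_labels(labels):
    """Return {job_name: settings} for containers with prometheus.enable=true."""
    if labels.get("prometheus.enable") != "true":
        return {}
    jobs = {}
    prefix = "prometheus.jobs."
    for key, value in labels.items():
        if not key.startswith(prefix):
            continue
        job, _, setting = key[len(prefix):].partition(".")
        # Defaults from the label table: port 80, path /metrics, scheme http.
        conf = jobs.setdefault(job, {"port": "80", "path": "/metrics", "scheme": "http"})
        if setting in ("port", "path", "scheme", "hosts", "networks"):
            conf[setting] = value
        elif setting.startswith(("params.", "labels.")):
            group, _, name = setting.partition(".")
            conf.setdefault(group, {})[name] = value
    return jobs

labels = {
    "prometheus.enable": "true",
    "prometheus.jobs.app.port": "9090",
    "prometheus.jobs.app.labels.env": "prod",
}
print(parse_labels(labels))
# {'app': {'port': '9090', 'path': '/metrics', 'scheme': 'http', 'labels': {'env': 'prod'}}}
```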
- async prometheus_sd.service.main_loop(config)
  Main loop for service discovery: an infinite loop running two tasks.
  1. Save the current configuration.
  2. Wait for events and, on each event, rewrite the configuration the same way task 1 does.
  If for some reason all tasks complete, start again. This can happen if the event socket times out, or for any other reason that makes step 1 or step 2 finish with an exception. In a perfect world, the loop should not run more than once.
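The loop shape described above can be sketched with asyncio. All names here are hypothetical stand-ins (the real main_loop lives in prometheus_sd.service); the sketch bounds the restarts and simulates a failing event socket so the restart path is visible:

```python
# Hypothetical asyncio sketch of the main-loop shape: run two tasks
# together, and restart both if either one ends with an exception.
import asyncio

async def save_all(config):
    pass  # stand-in for task 1: save the current configuration

async def listen(config):
    await asyncio.sleep(0.01)
    # Stand-in for task 2: simulate the event socket timing out.
    raise ConnectionError("event socket timed out")

async def main_loop(config, max_restarts=2):
    restarts = 0
    while restarts < max_restarts:  # the real loop would be infinite
        try:
            # gather() raises as soon as one task fails.
            await asyncio.gather(save_all(config), listen(config))
        except ConnectionError:
            restarts += 1  # step 1 or 2 ended with an exception: start again
    return restarts

print(asyncio.run(main_loop({})))
# 2
```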
- prometheus_sd.service.relabel_prometheus(job_config)
  Get some Prometheus configuration labels.
- async prometheus_sd.service.save_all_configs(config)
  Get all service configs and save them. This is mainly important to initialize the scrape configurations when the service starts, or, from time to time, to keep things in sync in case an event was missed.
- async prometheus_sd.service.save_configs(config, sd_configs)
  Save a configuration based on configs fetched from Docker.