PromQL: matchers on labels with an empty value are not correctly handled #1873

Closed
stevehorsfield opened this issue Aug 13, 2019 · 4 comments · Fixed by #1986
Labels
area:query (All issues pertaining to query), P: Medium, T: Bug

Comments

@stevehorsfield

Service: m3-query

Context

We are sourcing multiple Kubernetes clusters into a single M3 environment. Metrics have the usual names found in kubelet, kubelet/cadvisor and kube-state-metrics. An additional label, environment, is used to disambiguate the clusters.

We are building dashboards similar to some of those provided by prometheus-operator, with customisations for our needs.

Problem statement

The PromQL matcher container_name!="" does not appear to be working in the following query:

sum(
  container_memory_usage_bytes{
    environment="test",
    namespace="my-namespace", 
    pod_name=~"my-pod.*", 
    container_name!="POD", 
    container_name!=""
  }
) by (container_name)

We get two results back, one with the expected container name and one with an empty container_name label.
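
For context, Prometheus treats an absent label as equivalent to an empty value, so container_name!="" is expected to match only series that carry a non-empty container_name. A minimal sketch of that expected rule (illustrative Go, not M3 code):

// Illustrative only: expected behaviour of a `!=` matcher with an empty value.
// An absent label is treated as the empty string, so both missing and empty
// container_name values should be rejected by the matcher.
func matchesNotEqualEmpty(seriesLabels map[string]string, name string) bool {
	return seriesLabels[name] != "" // a missing key yields "", which must not match
}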

Verification

Executing this query directly against Prometheus yields the expected outcome.

Analysis

@robskillington noted that:

the difference is that Prometheus doesn’t keep time series with empty labels, but M3 does I believe. On top of that though it seems "!=" is not being applied properly

Workaround

Using the !~ operator does not work any better; however, combining it with unless does work:

sum(
  container_memory_usage_bytes{
    environment="test",
    namespace="my-namespace", 
    pod_name=~"my-pod.*", 
    container_name!="POD", 
    container_name!=""
  }
  unless
  container_memory_usage_bytes{
    environment="test",
    namespace="my-namespace", 
    pod_name=~"my-pod.*", 
    container_name!="POD", 
    container_name!~".*"
  }
) by (container_name)

M3 Configuration

Placement

{
    "num_shards": 32,
    "replication_factor": 3,
    "instances": [
        {
            "id": "m3-data0.domain",
            "isolation_group": "us-east-1a",
            "zone": "embedded",
            "weight": 100,
            "endpoint": "m3-data0.domain:9000",
            "hostname": "m3-data0.domain",
            "port": 9000
        },
        {
            "id": "m3-data1.domain",
            "isolation_group": "us-east-1b",
            "zone": "embedded",
            "weight": 100,
            "endpoint": "m3-data1.domain:9000",
            "hostname": "m3-data1.domain",
            "port": 9000
        },
        {
            "id": "m3-data2.domain",
            "isolation_group": "us-east-1c",
            "zone": "embedded",
            "weight": 100,
            "endpoint": "m3-data2.domain:9000",
            "hostname": "m3-data2.domain",
            "port": 9000
        }
    ]
}

Namespaces

{
  "name": "default",
  "options": {
    "bootstrapEnabled": true,
    "flushEnabled": true,
    "writesToCommitLog": true,
    "cleanupEnabled": true,
    "snapshotEnabled": true,
    "repairEnabled": false,
    "retentionOptions": {
      "retentionPeriodDuration": "792h",
      "blockSizeDuration": "3h",
      "bufferFutureDuration": "10m",
      "bufferPastDuration": "10m",
      "blockDataExpiry": true,
      "blockDataExpiryAfterNotAccessPeriodDuration": "5m"
    },
    "indexOptions": {
      "enabled": true,
      "blockSizeDuration": "6h"
    }
  }
}
{
    "namespaceName": "agg_100days_1h",
    "retentionTime": "2400h"
}
{
    "namespaceName": "agg_1year_4h",
    "retentionTime": "17520h"
}
@benraskin92 added the area:query, P: Medium, and T: Bug labels on Aug 13, 2019
@martin-mao
Collaborator

@benraskin92 @arnikola ^

@vsliouniaev

This bug is still present in v0.15.0-rc.9

We have a proxy in front of m3query and are doing this to fix the problem:

// m3Query incorrectly treats label matchers like:
//   `up{not_exists!=""}`
// as
//   `up{}`
// instead of
//   `up{not_exists=~".+"}`
// This should have been fixed in v0.14.2 (https://github.com/m3db/m3/commit/e3d0f4fc3396158873023ae775948335875a75af)
// but the queries still behave the same way
func NormalizeMatchers(matchers []*prompb.LabelMatcher) {
	for _, m := range matchers {
		if m.Type == prompb.LabelMatcher_NEQ && m.Value == "" {
			m.Type = prompb.LabelMatcher_RE
			m.Value = ".+"
		}
	}
}
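
A minimal sketch of how NormalizeMatchers might be applied before forwarding, assuming the proxy has already decoded the remote-read body into a prompb.ReadRequest (the wrapper function name here is illustrative, not part of our actual proxy):

// Illustrative only: normalize every query in a decoded remote-read request
// before forwarding it to m3query. prompb is github.com/prometheus/prometheus/prompb.
func normalizeReadRequest(req *prompb.ReadRequest) {
	for _, q := range req.Queries {
		NormalizeMatchers(q.Matchers)
	}
}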

@robskillington
Collaborator

@vsliouniaev with https://github.com/m3db/m3/releases/tag/v0.15.0 this should now be fixed if you use the Prometheus engine, I believe.

@vsliouniaev

The behaviour we're seeing happens when using the remote-read endpoint. I have created a more complete description of it here: #2424
