Merge remote-tracking branch 'upstream/main' into tiramisu-stats-tsc-final

Signed-off-by: Peter Alfonsi <[email protected]>
Peter Alfonsi committed Apr 29, 2024
2 parents 4ffef16 + 2a37bdd commit 4f03a11
Showing 68 changed files with 2,412 additions and 378 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/links.yml
@@ -13,7 +13,7 @@ jobs:
- uses: actions/checkout@v4
- name: lychee Link Checker
id: lychee
uses: lycheeverse/lychee-action@v1.9.3
uses: lycheeverse/lychee-action@v1.10.0
with:
args: --accept=200,403,429 --exclude-mail **/*.html **/*.md **/*.txt **/*.json --exclude-file .lychee.excludes
fail: true
7 changes: 7 additions & 0 deletions CHANGELOG.md
@@ -23,13 +23,17 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Add changes for overriding remote store and replication settings during snapshot restore. ([#11868](https://github.com/opensearch-project/OpenSearch/pull/11868))
- Add an individual setting of rate limiter for segment replication ([#12959](https://github.com/opensearch-project/OpenSearch/pull/12959))
- [Tiered Caching] Add dimension-based stats to TieredSpilloverCache ([#13236](https://github.com/opensearch-project/OpenSearch/pull/13236))
- [Tiered Caching] Expose new cache stats API ([#13237](https://github.com/opensearch-project/OpenSearch/pull/13237))
- [Streaming Indexing] Ensure support of the new transport by security plugin ([#13174](https://github.com/opensearch-project/OpenSearch/pull/13174))
- Add cluster setting to dynamically configure the buckets for filter rewrite optimization. ([#13179](https://github.com/opensearch-project/OpenSearch/pull/13179))
- [Tiered Caching] Gate new stats logic behind FeatureFlags.PLUGGABLE_CACHE ([#13238](https://github.com/opensearch-project/OpenSearch/pull/13238))
- [Tiered Caching] Add a dynamic setting to disable/enable disk cache. ([#13373](https://github.com/opensearch-project/OpenSearch/pull/13373))
- [Remote Store] Add capability of doing refresh as determined by the translog ([#12992](https://github.com/opensearch-project/OpenSearch/pull/12992))
- [Tiered caching] Make Indices Request Cache Stale Key Mgmt Threshold setting dynamic ([#12941](https://github.com/opensearch-project/OpenSearch/pull/12941))
- Batch mode for async fetching shard information in GatewayAllocator for unassigned shards ([#8746](https://github.com/opensearch-project/OpenSearch/pull/8746))
- [Remote Store] Add settings for remote path type and hash algorithm ([#13225](https://github.com/opensearch-project/OpenSearch/pull/13225))
- [Remote Store] Upload remote paths during remote enabled index creation ([#13386](https://github.com/opensearch-project/OpenSearch/pull/13386))
- [Search Pipeline] Handle default pipeline for multiple indices ([#13276](https://github.com/opensearch-project/OpenSearch/pull/13276))

### Dependencies
- Bump `org.apache.commons:commons-configuration2` from 2.10.0 to 2.10.1 ([#12896](https://github.com/opensearch-project/OpenSearch/pull/12896))
@@ -53,6 +57,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
- Bump `com.google.api.grpc:proto-google-iam-v1` from 0.12.0 to 1.33.0 ([#13332](https://github.com/opensearch-project/OpenSearch/pull/13332))
- OpenJDK Update (April 2024 Patch releases), update to Eclipse Temurin 21.0.3+9 ([#13389](https://github.com/opensearch-project/OpenSearch/pull/13389))
- Bump `com.squareup.okio:okio` from 3.8.0 to 3.9.0 ([#12997](https://github.com/opensearch-project/OpenSearch/pull/12997))
- Bump `com.netflix.nebula.ospackage-base` from 11.8.1 to 11.9.0 ([#13440](https://github.com/opensearch-project/OpenSearch/pull/13440))
- Bump `org.bouncycastle:bc-fips` from 1.0.2.4 to 1.0.2.5 ([#13446](https://github.com/opensearch-project/OpenSearch/pull/13446))
- Bump `lycheeverse/lychee-action` from 1.9.3 to 1.10.0 ([#13447](https://github.com/opensearch-project/OpenSearch/pull/13447))

### Changed
- [BWC and API enforcement] Enforcing the presence of API annotations at build time ([#12872](https://github.com/opensearch-project/OpenSearch/pull/12872))
11 changes: 11 additions & 0 deletions CONTRIBUTING.md
@@ -9,6 +9,7 @@
- [Changelog](#changelog)
- [Review Process](#review-process)
- [Tips for Success](#tips)
- [Troubleshooting Failing Builds](#troubleshooting-failing-builds)

# Contributing to OpenSearch

@@ -180,3 +181,13 @@ We have a lot of mechanisms to help expedite towards an accepted PR. Here are so

In general, adding more guardrails to your changes increases the likelihood of swift PR acceptance. We can always relax these guard rails in smaller followup PRs. Reverting a GA feature is much more difficult. Check out the [DEVELOPER_GUIDE](./DEVELOPER_GUIDE.md#submitting-changes) for more useful tips.

## Troubleshooting Failing Builds

The OpenSearch testing framework offers many capabilities but exhibits significant complexity (it does a lot of randomization internally to cover as many edge cases and variations as possible). Unfortunately, this poses a challenge by making it harder to discover important issues/bugs in a straightforward way, and may lead to so-called flaky tests - tests which flip randomly from success to failure without any code changes.

If your pull request reports one or more failing tests on any of the checks, please:
- check whether there is an existing [issue](https://github.com/opensearch-project/OpenSearch/issues) reported for the test in question
- if not, make sure the failure is not caused by your changes by running the failing test(s) locally a number of times
- if you are sure the failure is unrelated, open a new [bug](https://github.com/opensearch-project/OpenSearch/issues/new?assignees=&labels=bug%2C+untriaged&projects=&template=bug_template.md&title=%5BBUG%5D) with the `flaky-test` label
- add a comment to your pull request referencing the issue(s) or bug report(s) and explaining the failing build(s)
- as a bonus, try to contribute a fix for the flaky test(s)
2 changes: 1 addition & 1 deletion distribution/packages/build.gradle
@@ -63,7 +63,7 @@ import java.util.regex.Pattern
*/

plugins {
id "com.netflix.nebula.ospackage-base" version "11.8.1"
id "com.netflix.nebula.ospackage-base" version "11.9.0"
}

void addProcessFilesTask(String type, boolean jdk) {
2 changes: 1 addition & 1 deletion distribution/tools/plugin-cli/build.gradle
@@ -38,7 +38,7 @@ dependencies {
compileOnly project(":server")
compileOnly project(":libs:opensearch-cli")
api "org.bouncycastle:bcpg-fips:1.0.7.1"
api "org.bouncycastle:bc-fips:1.0.2.4"
api "org.bouncycastle:bc-fips:1.0.2.5"
testImplementation project(":test:framework")
testImplementation 'com.google.jimfs:jimfs:1.3.0'
testRuntimeOnly("com.google.guava:guava:${versions.guava}") {

This file was deleted.

@@ -0,0 +1 @@
704e65f7e4fe679e5ab2aa8a840f27f8ced4c522
@@ -80,6 +80,7 @@
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.EnumSet;
import java.util.HashMap;
@@ -90,6 +91,8 @@
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.TimeUnit;
import java.util.function.IntFunction;

@@ -642,12 +645,47 @@ public <K, V> Map<K, V> readMap(Writeable.Reader<K> keyReader, Writeable.Reader<
return Collections.emptyMap();
}
Map<K, V> map = new HashMap<>(size);
readIntoMap(keyReader, valueReader, map, size);
return map;
}

/**
* Read a serialized map into a SortedMap using the default ordering for the keys. If the result is empty it might be immutable.
*/
public <K extends Comparable<K>, V> SortedMap<K, V> readOrderedMap(Writeable.Reader<K> keyReader, Writeable.Reader<V> valueReader)
throws IOException {
return readOrderedMap(keyReader, valueReader, null);
}

/**
* Read a serialized map into a SortedMap, specifying a Comparator for the keys. If the result is empty it might be immutable.
*/
public <K extends Comparable<K>, V> SortedMap<K, V> readOrderedMap(
Writeable.Reader<K> keyReader,
Writeable.Reader<V> valueReader,
@Nullable Comparator<K> keyComparator
) throws IOException {
int size = readArraySize();
if (size == 0) {
return Collections.emptySortedMap();
}
SortedMap<K, V> sortedMap;
if (keyComparator == null) {
sortedMap = new TreeMap<>();
} else {
sortedMap = new TreeMap<>(keyComparator);
}
readIntoMap(keyReader, valueReader, sortedMap, size);
return sortedMap;
}

private <K, V> void readIntoMap(Writeable.Reader<K> keyReader, Writeable.Reader<V> valueReader, Map<K, V> map, int size)
throws IOException {
for (int i = 0; i < size; i++) {
K key = keyReader.read(this);
V value = valueReader.read(this);
map.put(key, value);
}
return map;
}

/**
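For context on the new readOrderedMap overloads added above, a minimal usage sketch follows. It assumes the map was written with the existing StreamOutput.writeMap counterpart; the example class, method names, and import path are illustrative only and are not part of this commit.

import java.io.IOException;
import java.util.Comparator;
import java.util.SortedMap;

import org.opensearch.core.common.io.stream.StreamInput;

// Illustrative helper, not part of this commit.
public class OrderedMapReadExample {

    // Keys come back sorted by their natural (Comparable) order.
    static SortedMap<String, Long> readCounts(StreamInput in) throws IOException {
        return in.readOrderedMap(StreamInput::readString, StreamInput::readLong);
    }

    // Same wire format, but with an explicit key comparator (reverse order here).
    static SortedMap<String, Long> readCountsDescending(StreamInput in) throws IOException {
        return in.readOrderedMap(StreamInput::readString, StreamInput::readLong, Comparator.reverseOrder());
    }
}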
@@ -289,8 +289,8 @@ public void close() throws IOException {
}

@Override
public ImmutableCacheStatsHolder stats() {
return statsHolder.getImmutableCacheStatsHolder();
public ImmutableCacheStatsHolder stats(String[] levels) {
return statsHolder.getImmutableCacheStatsHolder(levels);
}

/**
@@ -350,13 +350,13 @@ void updateStatsOnRemoval(String removedFromTierValue, boolean wasEvicted, ICach
if (wasEvicted) {
statsHolder.incrementEvictions(dimensionValues);
}
statsHolder.decrementEntries(dimensionValues);
statsHolder.decrementItems(dimensionValues);
statsHolder.decrementSizeInBytes(dimensionValues, weigher.applyAsLong(key, value));
}

void updateStatsOnPut(String destinationTierValue, ICacheKey<K> key, V value) {
List<String> dimensionValues = statsHolder.getDimensionsWithTierValue(key.dimensions, destinationTierValue);
statsHolder.incrementEntries(dimensionValues);
statsHolder.incrementItems(dimensionValues);
statsHolder.incrementSizeInBytes(dimensionValues, weigher.applyAsLong(key, value));
}

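The stats(String[] levels) override above lets callers request statistics aggregated only by the dimensions they care about. A hedged sketch of such a caller follows; the "tier" dimension name, the import paths, and the names used here are assumptions for illustration, not taken from this commit.

import org.opensearch.common.cache.ICache;
import org.opensearch.common.cache.stats.ImmutableCacheStatsHolder;

// Illustrative helper, not part of this commit.
class TierLevelStatsExample {
    static ImmutableCacheStatsHolder statsByTier(ICache<?, ?> cache) {
        // Aggregate only by the (assumed) "tier" dimension, so the heap and disk
        // tiers report their hits, misses, items, and size separately.
        return cache.stats(new String[] { "tier" });
    }
}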
@@ -44,7 +44,10 @@ public class TieredSpilloverCacheStatsHolder extends DefaultCacheStatsHolder {
* @param diskCacheEnabled whether the disk tier starts out enabled
*/
public TieredSpilloverCacheStatsHolder(List<String> originalDimensionNames, boolean diskCacheEnabled) {
super(getDimensionNamesWithTier(originalDimensionNames));
super(
getDimensionNamesWithTier(originalDimensionNames),
TieredSpilloverCache.TieredSpilloverCacheFactory.TIERED_SPILLOVER_CACHE_NAME
);
this.diskCacheEnabled = diskCacheEnabled;
}

@@ -134,17 +137,17 @@ public void decrementSizeInBytes(List<String> dimensionValues, long amountBytes)
}

@Override
public void incrementEntries(List<String> dimensionValues) {
public void incrementItems(List<String> dimensionValues) {
validateTierDimensionValue(dimensionValues);
// Entries from either tier should be included in the total values.
super.incrementEntries(dimensionValues);
super.incrementItems(dimensionValues);
}

@Override
public void decrementEntries(List<String> dimensionValues) {
public void decrementItems(List<String> dimensionValues) {
validateTierDimensionValue(dimensionValues);
// Entries from either tier should be included in the total values.
super.decrementEntries(dimensionValues);
super.decrementItems(dimensionValues);
}

void setDiskCacheEnabled(boolean diskCacheEnabled) {
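To show how the renamed incrementItems/decrementItems hooks are driven, here is a hedged sketch of recording a put into the heap tier. The dimension value "0", the tier value "on_heap", and the assumption that this compiles in the same package as TieredSpilloverCacheStatsHolder are illustrative only.

import java.util.List;

// Illustrative helper, not part of this commit.
class TierStatsHolderExample {
    static void recordHeapPut(TieredSpilloverCacheStatsHolder statsHolder) {
        // Caller dimension values with the tier value appended last; the override
        // validates the tier value and folds the entry into the total counts.
        statsHolder.incrementItems(List.of("0", "on_heap"));
        statsHolder.incrementSizeInBytes(List.of("0", "on_heap"), 128L);
    }
}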
@@ -46,7 +46,7 @@ public MockDiskCache(int maxSize, long delay, RemovalListener<ICacheKey<K>, V> r
if (useNoopStats) {
this.statsHolder = NoopCacheStatsHolder.getInstance();
} else {
this.statsHolder = new DefaultCacheStatsHolder(List.of());
this.statsHolder = new DefaultCacheStatsHolder(List.of(), "mock_disk_cache");
}
}

@@ -60,15 +60,15 @@ public V get(ICacheKey<K> key) {
public void put(ICacheKey<K> key, V value) {
if (this.cache.size() >= maxSize) { // For simplification
this.removalListener.onRemoval(new RemovalNotification<>(key, value, RemovalReason.EVICTED));
this.statsHolder.decrementEntries(List.of());
this.statsHolder.decrementItems(List.of());
}
try {
Thread.sleep(delay);
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
this.cache.put(key, value);
this.statsHolder.incrementEntries(List.of());
this.statsHolder.incrementItems(List.of());
}

@Override
@@ -111,7 +111,12 @@ public void refresh() {}
public ImmutableCacheStatsHolder stats() {
// To allow testing of useNoopStats logic in TSC, return a dummy ImmutableCacheStatsHolder with the
// right number of entries, unless useNoopStats is true
return statsHolder.getImmutableCacheStatsHolder();
return statsHolder.getImmutableCacheStatsHolder(null);
}

@Override
public ImmutableCacheStatsHolder stats(String[] levels) {
return null;
}

@Override
