♻️ volumes are removed via agent (⚠️ devops) #3941

Closed

Changes from all commits (165 commits)
ed19ad0
first version of the RPC client via rabbitmq
Feb 22, 2023
ec3b363
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Feb 22, 2023
46e84d3
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Feb 23, 2023
7c37064
added a server namespace
Feb 23, 2023
7b24567
moved robust rpc to rabbitmq
Feb 23, 2023
eeb30de
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Feb 23, 2023
a91bd89
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Feb 23, 2023
8b3a0a3
rabbitmq rpc refactor
Feb 24, 2023
b7a7742
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Feb 27, 2023
b4890bd
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Feb 28, 2023
bebdbdb
added extra test
Feb 28, 2023
6a5805d
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Mar 1, 2023
d1cfab7
fix error when closing channel
Mar 1, 2023
dce584f
add missing type
Mar 1, 2023
6d7ad01
some more progress
Mar 1, 2023
8616e3b
remove extension module
Mar 2, 2023
26f4708
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Mar 2, 2023
1241850
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Mar 2, 2023
c0a1d8a
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Mar 3, 2023
b929d93
added get_namespace
Mar 3, 2023
fe3239b
Merge remote-tracking branch 'origin/pr-osparc-aiopika-solidrpc' into…
Mar 3, 2023
839b84e
refactor
Mar 7, 2023
4ab92e0
added registration helper
Mar 7, 2023
9eeb9bf
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Mar 7, 2023
0ebee22
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 7, 2023
fbd46da
added new module for removing volumes
Mar 7, 2023
9de8990
moved docker volume
Mar 7, 2023
72e1d94
rename test
Mar 7, 2023
22dd56e
injecting a unique nodeid
Mar 7, 2023
1ba1939
adding missing default node id
Mar 7, 2023
2109c67
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 8, 2023
4f1d38b
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Mar 8, 2023
d5f0eb5
refactor import
Mar 8, 2023
fddef59
refactor
Mar 8, 2023
993cef8
refactor
Mar 8, 2023
ca3bdc2
refactor
Mar 8, 2023
d7cf000
moved to helpers
Mar 8, 2023
13fef45
refactor names
Mar 8, 2023
d8f7bcb
refactor
Mar 8, 2023
50a9c17
fix utility
Mar 8, 2023
58b292e
replaced helpers with pydantic models
Mar 8, 2023
0603de9
using hostname
Mar 9, 2023
ede5a09
Merge branch 'master' into pr-osparc-aiopika-solidrpc
GitHK Mar 9, 2023
f6c12cd
refactor task_monitor
Mar 9, 2023
716762d
rename
Mar 9, 2023
2a7b2bb
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 9, 2023
05ebcad
Merge remote-tracking branch 'origin/pr-osparc-aiopika-solidrpc' into…
Mar 9, 2023
5a79662
Merge remote-tracking branch 'upstream/master' into pr-osparc-aiopika…
Mar 9, 2023
c9b2d0c
Merge remote-tracking branch 'origin/pr-osparc-aiopika-solidrpc' into…
Mar 9, 2023
47db156
updated libraries
Mar 9, 2023
372dde0
agent exposes rpc method to remove volume
Mar 9, 2023
82de021
remove defaults
Mar 9, 2023
b70e0ef
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 9, 2023
e31ab56
not compatible with current
Mar 9, 2023
bbbcfd2
making more clear what is happening
Mar 10, 2023
722f7e2
making rabbitmq required
Mar 10, 2023
fcb27d0
rabbitmq is required not optional
Mar 10, 2023
c34f164
rabbitmq required for agent
Mar 10, 2023
c9fd018
refactor removal via agent
Mar 10, 2023
829538b
revert todo
Mar 10, 2023
9aeb52c
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 10, 2023
5d5ba45
fix bug
Mar 10, 2023
8ef3dff
moved test to integration
Mar 10, 2023
bc08176
refactor upgrade deprecation
Mar 10, 2023
13a0431
reverting
Mar 10, 2023
0cf893d
update requirements
Mar 10, 2023
43d3a6b
replacing error with warning
Mar 10, 2023
1cb5547
refactor
Mar 10, 2023
1086b1f
refactor
Mar 10, 2023
5f7a8c7
revert rabbitmq mandatory
Mar 10, 2023
c9d2682
refactor volume removal to support concurrent requests
Mar 10, 2023
05d5bd1
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 10, 2023
3af40f1
fix docstring
Mar 10, 2023
73d0d5c
added more tests
Mar 10, 2023
5902de6
fix broken tests
Mar 10, 2023
bdd5696
fix broken test
Mar 10, 2023
a1f2166
gathering all errors and raising them
Mar 10, 2023
7bd5347
fixed issue with double try to remove volumes
Mar 13, 2023
b972bf3
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 13, 2023
04296fd
fixed errors
Mar 13, 2023
dd872b1
rename fixture
Mar 13, 2023
4ccdcc3
attempt to fix failing test
Mar 13, 2023
d5de4cc
trying to make test more reliable
Mar 13, 2023
b941db0
not an error
Mar 13, 2023
c9ecaa1
fix test
Mar 13, 2023
c2e3336
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 13, 2023
9e9da7e
adding timeout for parallel operation
Mar 13, 2023
4ca56fe
making call resilient
Mar 13, 2023
ccb86fa
fix import order
Mar 13, 2023
8abadde
rabbitmq data is persisted to volume
Mar 13, 2023
b845efc
fixing log message
Mar 14, 2023
dc89358
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 14, 2023
ea12a8c
added missing test
Mar 14, 2023
34206d5
renamed timeouts and updated docs
Mar 14, 2023
c09d2e1
fixed wrong error matching
Mar 14, 2023
6567b74
refactor to use individual timeouts
Mar 14, 2023
d3af6b2
refactor agent rabbit test
Mar 14, 2023
7694bca
refactor using explicit timeouts
Mar 14, 2023
9e922c4
added policy
Mar 14, 2023
61e77aa
using correct wait policy
Mar 14, 2023
460868f
renamed to RPCExceptionGroup
Mar 14, 2023
d7f0cac
refactor volume removal request
Mar 14, 2023
e08dbfa
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Mar 14, 2023
91dadda
using swarm_stack_name
Apr 19, 2023
d9d0a32
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Apr 19, 2023
401a8e3
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Jun 1, 2023
ee02e63
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Jun 5, 2023
a32a84c
refactor after merge
Jun 5, 2023
6fbacb5
refactor
Jun 5, 2023
4b4f9f0
fixed agent
Jun 5, 2023
a2f5727
fixed issue in servicelib
Jun 5, 2023
0869bae
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Jun 5, 2023
32d9f47
refactor tests
Jun 5, 2023
1974ee0
making test portable
Jun 5, 2023
42611a2
refactor
Jun 5, 2023
40e242b
refactor
Jun 5, 2023
c12a46e
skipping failing test
Jun 5, 2023
22fc0d7
reverting
Jun 5, 2023
7534986
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Jun 6, 2023
b2295f7
added base serial executor
Jun 6, 2023
4ace2b4
added VolumeUtils
Jun 6, 2023
cfc0b53
add is_volume_present
Jun 7, 2023
5fc9348
upgraded serial executor
Jun 7, 2023
fb06df7
added parallel different key
Jun 7, 2023
0b32885
refactor dy_sidecar shared store
Jun 7, 2023
941221f
removed
Jun 7, 2023
537eec9
renamed
Jun 7, 2023
98970eb
refactored tests
Jun 7, 2023
3ea3bd4
fixed dy-sidecar
Jun 8, 2023
14e8ecc
added legacy format
Jun 8, 2023
40a433e
refactor _core
Jun 14, 2023
f9df804
remove comments
Jun 14, 2023
c132d9b
refactor
Jun 14, 2023
ad73c4e
refactor
Jun 14, 2023
a6880be
removed
Jun 14, 2023
516a0e9
refactor
Jun 14, 2023
f594433
refactor volume removal
Jun 14, 2023
157d46d
refactor to accept timeout
Jun 14, 2023
72709bf
no longer required
Jun 14, 2023
be43fbf
refactor to use new internals
Jun 14, 2023
a112e08
removed cast
Jun 14, 2023
7ccc90f
refactored
Jun 14, 2023
354e627
using correct import
Jun 14, 2023
ed47a82
fixed imports
Jun 14, 2023
52d9cb5
refactor test
Jun 14, 2023
25d066e
refactored task_monitor again
Jun 14, 2023
724a017
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Jun 14, 2023
1dab67a
refactor
Jun 14, 2023
1d99c0a
moved task registration to separate module
Jun 14, 2023
4094ed4
refactor
Jun 14, 2023
3a9ffac
refactor test
Jun 15, 2023
74deb28
extract to utils
Jun 15, 2023
0d32514
refactor
Jun 15, 2023
586c8d5
more refactoring
Jun 15, 2023
423ba11
refactored rabbitmq to work with volumes
Jun 15, 2023
acecb5a
refactor tests and removal
Jun 15, 2023
4f58a3f
refactor to pull image
Jun 16, 2023
ecc33f6
drop comment
Jun 16, 2023
5cc16ad
restructure import paths
Jun 16, 2023
ddd74e7
refactor
Jun 16, 2023
571db81
refactor
Jun 16, 2023
e49bd7e
update message
Jun 16, 2023
422071f
refactor
Jun 16, 2023
169672c
refactored more tests
Jun 16, 2023
de6916d
Merge remote-tracking branch 'upstream/master' into pr-osparc-remove-…
Jun 16, 2023
7 changes: 7 additions & 0 deletions docs/coding-conventions.md
@@ -72,6 +72,13 @@ In short we use the following naming convention ( roughly [PEP8](https://peps.p
We should avoid merging PRs with ``TODO:`` and ``FIXME:`` into master. One of our bots detects those and flags them as code-smells. If we still want to keep this idea/fix noted in the code, those can be rewritten as ``NOTE:`` and should be extended with a link to a github issue with more details. For context, see [discussion here](https://github.com/ITISFoundation/osparc-simcore/pull/3380#discussion_r979893502).


## Retries

Use [Tenacity](https://github.com/jd/tenacity) wherever a retry is required. While most retries are straightforward, consider [the following article](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) regarding retrying services and how to avoid overwhelming them.

When retrying an API call (or any kind of request) to an external system, consider that the system may have trouble replying.
It is most effective to build the retry with `wait_random_exponential` from tenacity, which implements what the article above describes.
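
For illustration, a minimal sketch of such a retry, assuming a hypothetical `fetch_status` coroutine that calls an external API:

```python
import logging

from tenacity._asyncio import AsyncRetrying
from tenacity.before_sleep import before_sleep_log
from tenacity.retry import retry_if_exception_type
from tenacity.stop import stop_after_delay
from tenacity.wait import wait_random_exponential

_logger = logging.getLogger(__name__)


async def fetch_status_with_retry(client) -> dict:
    # jittered exponential back-off (at most 5s between attempts), retry only
    # transient connection problems and give up after 30s overall
    async for attempt in AsyncRetrying(
        wait=wait_random_exponential(max=5),
        stop=stop_after_delay(30),
        retry=retry_if_exception_type(ConnectionError),
        before_sleep=before_sleep_log(_logger, logging.WARNING),
        reraise=True,
    ):
        with attempt:
            return await client.fetch_status()  # hypothetical external call
```

`wait_random_exponential` adds the jitter recommended in the article, so many clients retrying at once do not synchronize their attempts.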

### CC2: No commented code

Avoid commented code, but if you *really* want to keep it then add an explanatory `NOTE:`
@@ -47,6 +47,10 @@ class VolumeStatus(StrAutoEnum):

class VolumeState(BaseModel):
status: VolumeStatus
volume_names: list[str] = Field(
...,
description="agent uses the volume's name to search for its status",
)
last_changed: datetime = Field(default_factory=lambda: arrow.utcnow().datetime)

def __eq__(self, other: object) -> bool:
51 changes: 39 additions & 12 deletions packages/service-library/src/servicelib/rabbitmq.py
@@ -5,10 +5,16 @@

import aio_pika
import aiormq
from aio_pika.exceptions import AMQPConnectionError, ChannelInvalidStateError
from aio_pika.patterns import RPC
from pydantic import PositiveInt
from pydantic import PositiveFloat
from servicelib.logging_utils import log_catch, log_context
from settings_library.rabbit import RabbitSettings
from tenacity._asyncio import AsyncRetrying
from tenacity.before_sleep import before_sleep_log
from tenacity.retry import retry_if_exception_type
from tenacity.stop import stop_after_delay
from tenacity.wait import wait_random_exponential

from .rabbitmq_errors import RemoteMethodNotRegisteredError, RPCNotInitializedError
from .rabbitmq_utils import (
@@ -122,8 +128,7 @@ async def rpc_initialize(self) -> None:
},
)
self._rpc_channel = await self._rpc_connection.channel()

self._rpc = RPC(self._rpc_channel)
self._rpc = RPC(self._rpc_channel, host_exceptions=True)
await self._rpc.initialize()

async def close(self) -> None:
@@ -314,13 +319,19 @@ async def rpc_request(
namespace: RPCNamespace,
method_name: RPCMethodName,
*,
timeout_s: PositiveInt | None = 5,
timeout_s_method: PositiveFloat,
timeout_s_connection_error: PositiveFloat,
**kwargs: dict[str, Any],
) -> Any:
"""
Call a remote registered `handler` by providing its `namespace`, `method_name`
and `kwargs` containing the key-value arguments expected by the remote `handler`.

param: `timeout_s_method` number of seconds to wait for a reply from the remote handler
invoked via `method_name`
param: `timeout_s_connection_error` number of seconds to wait for RabbitMQ to
become available again in case of a connection error

:raises asyncio.TimeoutError: when message expired
:raises CancelledError: when called :func:`RPC.cancel`
:raises RuntimeError: internal error
@@ -335,13 +346,23 @@ async def rpc_request(
namespace, method_name
)
try:
queue_expiration_timeout = timeout_s
awaitable = self._rpc.call(
namespaced_method_name,
expiration=queue_expiration_timeout,
kwargs=kwargs,
)
return await asyncio.wait_for(awaitable, timeout=timeout_s)
async for attempt in AsyncRetrying(
wait=wait_random_exponential(max=5),
stop=stop_after_delay(timeout_s_connection_error),
retry=retry_if_exception_type(
(AMQPConnectionError, ChannelInvalidStateError)
),
before_sleep=before_sleep_log(_logger, logging.WARNING),
reraise=True,
):
with attempt:
queue_expiration_timeout = timeout_s_method
awaitable = self._rpc.call(
namespaced_method_name,
expiration=queue_expiration_timeout,
kwargs=kwargs,
)
return await asyncio.wait_for(awaitable, timeout=timeout_s_method)
except aio_pika.MessageProcessError as e:
if e.args[0] == "Message has been returned":
raise RemoteMethodNotRegisteredError(
@@ -366,8 +387,14 @@ async def rpc_register_handler(
if self._rpc is None:
raise RPCNotInitializedError()

namespaced_method_name = RPCNamespacedMethodName.from_namespace_and_method(
namespace, method_name
)
_logger.info(
"RPC registered handler '%s' to queue '%s'", handler, namespaced_method_name
)
await self._rpc.register(
RPCNamespacedMethodName.from_namespace_and_method(namespace, method_name),
namespaced_method_name,
handler,
auto_delete=True,
)
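
As a usage sketch of the refactored call (not part of this diff): a caller now passes both timeouts explicitly. The method name, keyword argument and client/namespace objects below are made up for illustration.

```python
from typing import Any


async def request_volume_removal(
    rabbit_client: Any,  # an initialized client exposing the rpc_request shown above
    namespace: Any,      # the RPCNamespace under which the remote handler was registered
    volume_names: list[str],
) -> Any:
    # hypothetical method name and kwargs; the real names live in the agent's RPC interface
    return await rabbit_client.rpc_request(
        namespace,
        "remove_volumes",
        timeout_s_method=30,            # seconds to wait for the remote handler's reply
        timeout_s_connection_error=60,  # seconds to keep retrying while RabbitMQ is unreachable
        volume_names=volume_names,
    )
```
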
173 changes: 173 additions & 0 deletions packages/service-library/src/servicelib/serial_executor.py
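
The module below introduces `BaseSerialExecutor`. A minimal usage sketch (the subclass, its `run` body and the call arguments are invented for illustration; the class docstring further down states the guarantees):

```python
import asyncio

from servicelib.serial_executor import BaseSerialExecutor


class RemoveVolumeExecutor(BaseSerialExecutor):
    # `run` receives whatever args/kwargs were passed to wait_for_result
    async def run(self, volume_name: str) -> str:
        await asyncio.sleep(0.1)  # stand-in for the actual removal work
        return f"removed {volume_name}"


async def main() -> None:
    executor = RemoveVolumeExecutor()
    await executor.start()
    try:
        # the two "node-1" calls are serialized, the "node-2" call may run in parallel
        results = await asyncio.gather(
            executor.wait_for_result("vol_a", context_key="node-1", timeout=10),
            executor.wait_for_result("vol_b", context_key="node-1", timeout=10),
            executor.wait_for_result("vol_c", context_key="node-2", timeout=10),
        )
        print(results)
    finally:
        await executor.stop()


asyncio.run(main())
```
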
@@ -0,0 +1,173 @@
import asyncio
import functools
import logging
from abc import abstractmethod
from asyncio import Future, Queue, Task, create_task, wait_for
from dataclasses import dataclass
from typing import Any, Final, TypeAlias

from .background_task import cancel_task
from .logging_utils import log_context

_logger = logging.getLogger(__name__)

_CANCEL_TASK_TIMEOUT_S: Final[float] = 5

ContextKey: TypeAlias = str


@dataclass
class _Request:
context_key: ContextKey
future: Future
args: tuple[Any]
kwargs: dict[str, Any]


class BaseSerialExecutor:
"""
If several wait_for_result operations are launched together, they are executed
in parallel, except for those sharing the same context_key, which are executed sequentially.

"""

def __init__(self, polling_interval: float = 0.1) -> None:
self.polling_interval: float = polling_interval

self._requests_queue: Queue[_Request | None] = Queue()
self._request_ingestion_task: Task | None = None

# TODO maybe a better name here?
self._context_queue: Queue[int | None] = Queue()
self._context_task: Task | None = None

self._requests_to_start: dict[ContextKey, list[_Request]] = {}
self._running_requests: dict[ContextKey, Task] = {}

async def _handle_payload(self, request: _Request) -> None:
try:
request.future.set_result(await self.run(*request.args, **request.kwargs))
except Exception as e: # pylint: disable=broad-exception-caught
request.future.set_exception(e)

async def _request_processor_worker(self) -> None:
with log_context(
_logger, logging.DEBUG, f"request processor for {self.__class__}"
):
while True:
message: int | None = await self._context_queue.get()
if message is None:
break

# NOTE: these log entries are supposed to stop after all
# tasks for the context are processed
_logger.debug("Received request to start a task")

# find the next context_key that can be started
found_context_key: ContextKey | None = None
for context_key in self._requests_to_start:
if context_key not in self._running_requests:
found_context_key = context_key

# if we expect any other jobs to be started do this
if found_context_key is None and len(self._requests_to_start) != 0:
# waiting a bit to give time for the current task to finish
# before creating a new one
await asyncio.sleep(self.polling_interval)
await self._context_queue.put(1) # trigger
continue

if found_context_key is None:
_logger.debug("Done processing enqueued requests")
continue

# there are requests which can be picked up and started

requests: list[_Request] = self._requests_to_start[found_context_key]
request = requests.pop()

self._running_requests[request.context_key] = create_task(
self._handle_payload(request)
)
self._running_requests[request.context_key].add_done_callback(
functools.partial(
lambda s, _: self._running_requests.pop(s, None),
request.context_key,
)
)

async def _request_ingestion_worker(self) -> None:
with log_context(
_logger, logging.DEBUG, f"request ingestion for {self.__class__}"
):
while True:
request: _Request | None = await self._requests_queue.get()
if request is None:
break

if request.context_key not in self._requests_to_start:
self._requests_to_start[request.context_key] = []
self._requests_to_start[request.context_key].append(request)

await self._context_queue.put(1) # trigger

async def start(self):
self._request_ingestion_task = create_task(self._request_ingestion_worker())
self._context_task = create_task(self._request_processor_worker())

async def stop(self):
if self._request_ingestion_task:
await self._requests_queue.put(None)
await cancel_task(
self._request_ingestion_task, timeout=_CANCEL_TASK_TIMEOUT_S
)
if self._context_task:
await self._context_queue.put(None)
await cancel_task(self._context_task, timeout=_CANCEL_TASK_TIMEOUT_S)

# cancel all existing tasks
for task in tuple(self._running_requests.values()):
await cancel_task(task, timeout=_CANCEL_TASK_TIMEOUT_S)

async def wait_for_result(
self, *args: Any, context_key: ContextKey, timeout: float, **kwargs: Any
) -> Any:
"""
Starts task and executes the code defined by `run`, waits for the task to
finish and returns its result.
All calls in parallel with the same `context_key` will get executed
sequentially. It guarantees only one task with the same `context_key`
is active at the same time.

If `run` raises an error that same error is raised in the context where this
method is called.

params:
context_key: calls sharing the same `context_key` will be run in sequence,
all others will be run in parallel
timeout: float seconds before giving up on waiting for the result;
needs to take into account that other tasks might already have been
started for the same `context_key`; this request will not execute
until those are finished

raises:
asyncio.TimeoutError: if no result is provided after timeout seconds have passed
"""
future = Future()
request = _Request(
context_key=context_key, future=future, args=args, kwargs=kwargs
)
await self._requests_queue.put(request)

# NOTE: this raises TimeoutError which has to be handled by the caller
await wait_for(future, timeout=timeout)

# NOTE: will raise an exception if the future has raised an exception
# This also has to be handled by the caller
result = future.result()

return result

@abstractmethod
async def run(self, *args: Any, **kwargs: Any) -> Any:
"""code to be executed for each request"""
75 changes: 75 additions & 0 deletions packages/service-library/src/servicelib/sidecar_volumes.py
@@ -0,0 +1,75 @@
import os
from pathlib import Path
from typing import Final
from uuid import UUID

from attr import dataclass
from pydantic import PositiveInt

from .docker_constants import PREFIX_DYNAMIC_SIDECAR_VOLUMES

_UUID_LEN: Final[PositiveInt] = 36
_UNDER_SCORE_LEN: Final[PositiveInt] = 1
REGULAR_SOURCE_PORTION_LEN: Final[PositiveInt] = (
len(PREFIX_DYNAMIC_SIDECAR_VOLUMES) + 2 * _UUID_LEN + 3 * _UNDER_SCORE_LEN
)

STORE_FILE_NAME: Final[str] = "data.json"


@dataclass
class VolumeInfo:
node_uuid: UUID
run_id: UUID
possible_volume_name: str


class VolumeUtils:
_MAX_VOLUME_NAME_LEN: Final[int] = 255

@classmethod
def get_name(cls, path: Path) -> str:
return f"{path}".replace(os.sep, "_")

@classmethod
def get_source(cls, path: Path, node_uuid: UUID, run_id: UUID) -> str:
"""Returns a valid and unique volume name that is composed out of identifiers, namely
- relative target path
- node_uuid
- run_id

Guarantees that the volume name is unique between runs while also
taking into consideration the limit for the volume name's length
(255 characters).

SEE examples in `tests/unit/test_modules_dynamic_sidecar_volumes_resolver.py`
"""
# NOTE: issues can occur when the paths of the mounted outputs, inputs
# and state folders are very long and share the same subdirectory path.
# Reversing volume name to prevent these issues from happening.
reversed_volume_name = cls.get_name(path)[::-1]
unique_name = f"{PREFIX_DYNAMIC_SIDECAR_VOLUMES}_{run_id}_{node_uuid}_{reversed_volume_name}"
return unique_name[: cls._MAX_VOLUME_NAME_LEN]

@classmethod
def get_volume_info(cls, source: str) -> VolumeInfo:
print(f"{source=}")
if len(source) <= REGULAR_SOURCE_PORTION_LEN:
raise ValueError(
f"source '{source}' must be at least {REGULAR_SOURCE_PORTION_LEN} characters"
)

# example: dyv_5813058f-8ec4-4aa9-bae1-f46c01040481_e3d42f3f-e1ad-418b-90e1-c44a95e97b91
without_volume_name = source[: REGULAR_SOURCE_PORTION_LEN - 1]

# example:_erots_pmet_
possible_reverted_volume_name = source[REGULAR_SOURCE_PORTION_LEN:]

_, run_id_str, node_uuid_str = without_volume_name.split("_")
possible_volume_name = possible_reverted_volume_name[::-1]

return VolumeInfo(
node_uuid=UUID(node_uuid_str),
run_id=UUID(run_id_str),
possible_volume_name=possible_volume_name,
)
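
A short round-trip sketch of how these helpers relate (the path and UUIDs are arbitrary examples):

```python
from pathlib import Path
from uuid import uuid4

from servicelib.sidecar_volumes import VolumeUtils

node_uuid = uuid4()
run_id = uuid4()

# encode: path + identifiers -> a unique, length-bounded volume source name
source = VolumeUtils.get_source(Path("/tmp/inputs"), node_uuid, run_id)

# decode: recover the identifiers (and, if not truncated, the volume name)
info = VolumeUtils.get_volume_info(source)
assert info.node_uuid == node_uuid
assert info.run_id == run_id
assert info.possible_volume_name == "_tmp_inputs"
```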