Async Support #171
Yeah, out-of-the-box support for async functions is definitely something I would love to see integrated in Loguru! Actually, it's something I briefly investigated, but I didn't really have a use case in mind. I decided to wait for someone to open an issue on this subject so we could discuss it. Here you are! 😄

Precisely, about your use case: isn't it something which can be solved using the `enqueue=True` parameter of `add()`? I'm not a big fan of adding a whole new set of `*_async` logging methods, to be honest.
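A minimal sketch of the `enqueue=True` behavior being referred to here (the slow sink and timing below are illustrative, not taken from this thread): messages are put on an internal queue and the sink runs in a worker thread, so the logging call itself returns immediately.

```python
import time
from loguru import logger

def slow_http_sink(message):
    # Stand-in for a blocking call to a remote logging API.
    time.sleep(0.5)
    print(message, end="")

# With enqueue=True the sink is invoked from a background worker,
# so logger.info() does not wait for the 0.5 s "network" call.
logger.add(slow_http_sink, enqueue=True)

logger.info("This call returns immediately")
logger.complete()  # wait for enqueued messages before the script exits
```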
Ah, I didn't know that's what that was doing. Ok. However, I don't think I have any control over how the queue gets processed, right? For example, in my code, my sink puts messages into a queue of its own.

Maybe what I really need is batching support. Batching would help for any sink that uses a resource that requires overhead to access. I'm not sure how the API would look if batching was possible, or whether it's worth it. Just thinking out loud; I think I can manage as-is.
Yes, I think this would be a step in the right direction. Maybe Loguru could manage its own event loop for asynchronous sinks?
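As a rough illustration of the "own loop" idea (purely a sketch of the concept, not anything Loguru actually implements): a sink could start a dedicated event loop in a background thread and schedule coroutines onto it with `asyncio.run_coroutine_threadsafe()`.

```python
import asyncio
import threading
import time
from loguru import logger

class AsyncLoopSink:
    """Hypothetical sink running async handlers on its own private event loop."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        # The loop runs forever in a daemon thread, independent of the caller.
        threading.Thread(target=self._loop.run_forever, daemon=True).start()

    async def _handle(self, message):
        await asyncio.sleep(0)  # imagine an awaitable network call here
        print(message, end="")

    def __call__(self, message):
        # Schedule the coroutine on the background loop and return immediately.
        asyncio.run_coroutine_threadsafe(self._handle(message), self._loop)

logger.add(AsyncLoopSink())
logger.info("Handled on the sink's private event loop")
time.sleep(0.1)  # give the background loop a moment before the script exits
```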
Yes, Loguru uses a queue internally with `enqueue=True`, but you don't control how it is consumed. Batching, though, can be implemented directly in the sink itself, for example:

```python
from loguru import logger

class BatcherHTTP:
    def __init__(self, limit):
        self._limit = limit
        self._batch = []

    def write(self, message):
        self._batch.append(message)
        if len(self._batch) == self._limit:
            # batched_request() stands for your own function sending the batch over HTTP.
            batched_request(self._batch)
            self._batch.clear()

logger.add(BatcherHTTP(10), enqueue=True)
```

That way you benefit from both asynchronous and batched logging. Logging to a file already supports some kind of batching through the buffering options of the file sink.
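As an aside, and assuming Loguru's documented behavior that extra keyword arguments for a file-path sink are forwarded to the built-in `open()`, the file buffering mentioned above could be tuned like this (the file name and buffer size are illustrative):

```python
from loguru import logger

# "buffering" is an open() parameter forwarded by Loguru for file sinks:
# a larger buffer means writes are flushed to disk in bigger chunks.
logger.add("app.log", buffering=8192)
```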
Nice with the BatcherHTTP code, thank you. I'll experiment with that, but I'll probably also add some kind of time-based flush.

(FWIW, a reason I'm interested in async for this use case is that asyncio has its own queue, sleep, and worker concepts.)

You make solid points about the rest.
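To make the asyncio angle concrete, here is a rough sketch of the kind of thing described above: an `asyncio.Queue` fed by the sink and drained by a worker task that flushes batches either when they reach a size limit or after a period of inactivity. The class name and the `send_batch()` helper are illustrative only, not an API proposed in this thread.

```python
import asyncio

async def send_batch(batch):
    # Placeholder for a real async HTTP call (e.g. with aiohttp).
    print(f"sending {len(batch)} records")

class AsyncBatcher:
    """Sink that queues messages; a worker task flushes them in batches."""

    def __init__(self, limit=10, flush_interval=5.0):
        self._queue = asyncio.Queue()
        self._limit = limit
        self._flush_interval = flush_interval

    def write(self, message):
        # Called synchronously by the logger; hand the message to the worker task.
        self._queue.put_nowait(message)

    async def worker(self):
        batch = []
        while True:
            try:
                message = await asyncio.wait_for(
                    self._queue.get(), timeout=self._flush_interval
                )
                batch.append(message)
                if len(batch) < self._limit:
                    continue
            except asyncio.TimeoutError:
                if not batch:
                    continue
            # Either the batch is full or the queue has been idle: flush it.
            await send_batch(batch)
            batch = []
```

In an application, `worker()` would be started as a background task (for instance with `asyncio.create_task(batcher.worker())`) and the instance registered with `logger.add(batcher)`.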
Ok, I realized it's a little bit more complicated than just wrapping the coroutine with `asyncio.ensure_future()`:

```python
import asyncio

async def log():
    await asyncio.sleep(0.1)
    print("Done")

async def main():
    asyncio.ensure_future(log())  # Called by logger.debug() for example

asyncio.run(main())  # Nothing is printed
```

That means I need to add a new awaitable method to the logger so that scheduled tasks can be awaited before the event loop ends. I looked a little bit at what was being done by other libraries. As I'm not very familiar with asynchronous programming, it's possible that I'm overlooking some things, though. 😄
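To spell out what goes wrong in the snippet above: the task scheduled with `ensure_future()` is never awaited, so the loop shuts down before it has a chance to finish. Explicitly gathering the pending task fixes it, which is loosely the role the awaitable method discussed below plays for tasks scheduled by the sinks (this is plain asyncio, not Loguru code):

```python
import asyncio

async def log():
    await asyncio.sleep(0.1)
    print("Done")

async def main():
    task = asyncio.ensure_future(log())
    # Await the scheduled task before returning, otherwise the event loop
    # closes before the coroutine completes.
    await asyncio.gather(task)

asyncio.run(main())  # "Done" is printed this time
```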
This also adds the new "complete()" method so that scheduled tasks can be awaited before leaving the event loop. There were some subtleties to be taken into account:

- We use "get_event_loop()" instead of "get_running_loop()" so that it does not raise an error if called outside of the event loop.
- It is not possible to wait for a task created in a different loop, so "complete()" must filter the scheduled ones.
- Before Python 3.5.3, coroutines did not have access to the running loop, so "get_event_loop()" does not return the "expected" loop.
- If "enqueue=True" and "loop=None", the worker thread does not have access to the running loop, so we use the global one at the time of "add()" so the user can `await` it.
- If "enqueue=True", there must be a way to notify the awaiting "complete()" that all tasks have been scheduled and can be awaited.
This has been merged. Coroutine functions can now be used as sinks, for example:

```python
import asyncio
from loguru import logger

async def sink(msg):
    print(msg)

async def main():
    logger.info("Start...")
    res = await your_function()  # your_function() stands for any awaitable of yours
    logger.info("Result: {}", res)
    await logger.complete()

logger.add(sink)
asyncio.run(main())
```

There were some corner cases to resolve (notably interoperability with `enqueue=True`).
Hello! I add a handler like this:

```python
logger.add(
    os.path.join(LOGS_DIR, "ERROR", "log_{time}.log"),
    rotation="100 MB",
    compression="zip",
    enqueue=True,
    level="ERROR"
)
```

Will logging to a file be done asynchronously in this case? Is setting `enqueue=True` also sufficient for that? And does the presence of `enqueue=True` mean I should also call `await logger.complete()`?

And also, please tell me, what is the best way to log exceptions thrown inside asynchronous tasks? For example, in the following code, the exception inside `test()` is raised from a task that is never awaited:

```python
import asyncio
from loguru import logger

async def test() -> None:
    raise ValueError("Oops!")

@logger.catch
async def main() -> None:
    asyncio.create_task(test())
    # await test()
    await asyncio.sleep(1)
    await logger.complete()

asyncio.run(main())
```

Does this mean that all functions that will be called as tasks should be wrapped with the `@logger.catch` decorator? The documentation only mentions asynchrony in a couple of lines, so it is not at all obvious whether I am approaching this correctly, or whether code that seems to keep executing correctly is just an asynchronous illusion.
Hello @PieceOfGood. :)
The logging will be made asynchronous in the sense that calling a logging function won't wait for the message to be written to disk. Using `enqueue=True`, messages are put on a queue and processed by a background worker thread.
Yes, it makes total sense. This is the way to do it if you want logging not to block your application.
The `complete()` method also waits for messages enqueued with `enqueue=True` to be processed, so calling it does no harm here.
Hum, I copy/pasted your snippet and it seems the exception is properly caught by Loguru. Basically, `logger.catch()` wraps the decorated function in a `try`/`except` block and logs any exception that propagates out of it.
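For intuition, here is a rough equivalent of that behavior (a sketch only, not Loguru's actual implementation, and the log message text is illustrative):

```python
import asyncio
from loguru import logger

async def body():
    raise ValueError("Oops!")

# Roughly what @logger.catch does around the decorated coroutine:
async def main():
    try:
        await body()
    except Exception:
        # The exception is logged with its traceback instead of propagating
        # (unless reraise=True is passed to logger.catch()).
        logger.exception("An error has been caught")

asyncio.run(main())
```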
Thanks for your reply. When I run this code:

```python
import asyncio
from loguru import logger

async def test() -> None:
    raise ValueError("Oops!")

@logger.catch
async def main() -> None:
    asyncio.create_task(test())
    # await test()
    await asyncio.sleep(1)

asyncio.run(main())
```

the exception is not reported through Loguru in my PyCharm console. But if I swap the comment on lines 9 and 10:

```python
import asyncio
from loguru import logger

async def test() -> None:
    raise ValueError("Oops!")

@logger.catch
async def main() -> None:
    # asyncio.create_task(test())
    await test()
    await asyncio.sleep(1)

asyncio.run(main())
```

then it is. As the documentation says, to get an exception that may have been thrown inside a task, you need to retrieve its result (or await it). But I'm talking more about various handlers, or "fire and forget" tasks, whose result does not interest me and which can be created in very large numbers. For such a case, it seems necessary to wrap each such function with the `@logger.catch` decorator.

Of course, I may be reasoning incorrectly, but my experience so far suggests that, for convenience, such functions should be kept in a separate module and decorated in the way mentioned earlier. Since that still looks cumbersome, I thought maybe someone would share their wisdom and suggest a better way to do this.

And thanks for your hard work. Loguru is really a good tool!
Sorry for the late answer @PieceOfGood, but I'm not sure of the best practices in your case. What bothers me in your first example is that the task is never awaited. This is a problem, in my opinion, because it means you can't be sure the function will be entirely and properly executed. Try awaiting the task (or gathering it) before `main()` returns.
I think the best practice is to explicitly await the tasks you create.
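For completeness, one way to keep the "fire and forget" style and still get the exceptions into the log (a sketch, not something prescribed in this thread): decorate the task coroutine itself with `@logger.catch`, or attach a done-callback that retrieves the task's exception and reports it through Loguru.

```python
import asyncio
from loguru import logger

def log_task_exception(task: asyncio.Task) -> None:
    if task.cancelled():
        return
    # Retrieve the exception (if any) so it is reported through Loguru instead
    # of asyncio's "Task exception was never retrieved" warning at shutdown.
    exc = task.exception()
    if exc is not None:
        logger.opt(exception=exc).error("Unhandled exception in background task")

async def test() -> None:
    # Alternatively, decorating this coroutine with @logger.catch would log
    # the exception right where it happens.
    raise ValueError("Oops!")

async def main() -> None:
    task = asyncio.create_task(test())
    task.add_done_callback(log_task_exception)
    await asyncio.sleep(1)

asyncio.run(main())
```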
I would like to see async support in loguru. In my otherwise asynchronous application, I was a bit frustrated that I had to resort to threading because I couldn't call my sinks asynchronously (I'm calling an HTTP API with log records in batches, which would kill a synchronous application for no good reason).
To me, the cleanest approach would be to add a whole 'nother set of logging functions on the Logger that are coroutines (`async def`), like `trace_async`, `debug_async`, etc. I'm not totally sure how this would work with the concurrency mechanisms in this project (like the `enqueue` flag provides) without more investigation.

I've done a bit of asynchronous programming in Python. From 20,000 feet, I don't think this would be a huge addition.

@Delgan, would you have an appetite for this?