Commit

Merge pull request #3681 from friendly-wolfbat/develop
docs: remove references to convert-document
stchris authored Apr 17, 2024
2 parents b58c0ee + b970411 commit 04c39de
Showing 1 changed file with 6 additions and 6 deletions: docs/src/pages/developers/technical-faq/index.mdx

@@ -121,15 +121,15 @@ If you're encountering this issue in production mode, try to check the worker lo

## How can I make imports run faster?

-The included `docker-compose` configuration for production mode has no understanding of how powerful your server is. It will run just a single instance of the services involved in data imports, `worker` , `ingest-file` and `convert-document`.
+The included `docker-compose` configuration for production mode has no understanding of how powerful your server is. It will run just a single instance of the services involved in data imports, `worker` and `ingest-file`. By default, both workers will create as many threads as Python's [multiprocessing.cpu_count()](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count) reports, but this may not achieve the speed you desire.
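
To see what that number comes out to on your host, a quick check (assuming Python 3 is available on the machine running the workers):

```bash
python3 -c "import multiprocessing; print(multiprocessing.cpu_count())"
```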

The easiest way to speed up processing is to scale up those services. Make a shell script to start docker-compose with a set of arguments like this:

```bash
-docker-compose up --scale ingest-file=8 --scale convert-document=4 --scale worker=2
+docker-compose up --scale ingest-file=8 --scale worker=4
```

-The number of `ingest-file` processes could be the number of CPUs in your machine, and `convert-document` needs to be scaled up for imports with many office documents, but never higher than `ingest-file`.
+The number of `ingest-file` and `worker` processes could be the number of CPUs in your machine. Since performance and scaling depend a lot on the types of workloads you are seeing, you may need to review and adjust these settings accordingly. Please see the [Scaling Workers](/developers/installation.mdx#scaling-workers) section in the installation guide.
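
A minimal sketch of the wrapper script mentioned above, sizing the scale factors from the host's CPU count; the file name and the ratios are assumptions, so adjust them to your workload:

```bash
#!/usr/bin/env bash
# start-aleph.sh -- hypothetical wrapper that sizes the import services to the host's CPU count.
set -euo pipefail

# The same figure the workers consult internally via multiprocessing.cpu_count().
CPUS=$(python3 -c "import multiprocessing; print(multiprocessing.cpu_count())")

# Assumed ratio: one ingest-file instance per CPU, half as many generic workers, at least one.
WORKERS=$(( CPUS / 2 ))
if [ "${WORKERS}" -lt 1 ]; then WORKERS=1; fi

docker-compose up --scale ingest-file="${CPUS}" --scale worker="${WORKERS}"
```

Treat those ratios as a starting point and revisit them once you have seen how your actual imports behave.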

## ElasticSearch will not start. What's wrong?

@@ -270,12 +270,12 @@ All of this said, we'd really love to hear about any experiments regarding this.
We have well-defined graph semantics for FollowTheMoney data and you can export any data \(including documents like emails\) in Aleph [into various graph formats](/developers/followthemoney/ftm#exporting-data-to-a-network-graph) \(RDF, Neo4J, and GEXF for Gephi\).
</Callout>

-## The document converter service keeps crashing on startup, what's wrong?
+## The `ingest-file` container keeps crashing on startup, what's wrong?

-You can find out specifically what went wrong with the document converter service by consulting the logs for that container:
+You can find out specifically what went wrong with the `ingest-file` container by consulting the logs for that container:

```bash
-docker-compose -f docker-compose.dev.yml logs convert-document
+docker-compose -f docker-compose.dev.yml logs ingest-file
```

If `LibreOffice` keeps crashing on startup with `Fatal exception: Signal 11`, [AppArmor](https://help.ubuntu.com/community/AppArmor) can be one possible cause. `AppArmor` running on the host machine could be blocking `LibreOffice` from starting up. Try disabling the `AppArmor` profiles related to `LibreOffice` by following these instructions: [https://askubuntu.com/a/1214363](https://askubuntu.com/a/1214363)
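
To check whether AppArmor is the culprit on an Ubuntu host, something along these lines can help; the profile file names below are an assumption and vary between releases, so treat the linked answer as the authoritative steps:

```bash
# List the AppArmor profiles currently loaded on the host and look for LibreOffice entries.
sudo aa-status | grep -i libreoffice

# Put the LibreOffice profiles into complain mode so they log violations instead of blocking.
# (Profile file names are assumed -- check /etc/apparmor.d/ on your system.)
sudo aa-complain /etc/apparmor.d/usr.lib.libreoffice*
```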
