
Use Valhalla for routing #461

Merged
merged 22 commits into from
Mar 19, 2021

Conversation

jcoupey
Collaborator

@jcoupey jcoupey commented Feb 24, 2021

Issue

Fixes #306

Tasks

  • Draft a ValhallaWrapper class deriving from HttpWrapper
  • Generalize HttpWrapper to handle behaviour that currently works for both OSRM and ORS but not for Valhalla
  • Implement matrix queries to Valhalla
  • Implement route queries to Valhalla
  • Update CHANGELOG.md
  • Review

@jcoupey jcoupey self-assigned this Feb 24, 2021
@jcoupey
Collaborator Author

jcoupey commented Feb 24, 2021

I've given this a good start, but I'm running into a problem with the way the response is currently read:

for (;;) {
  std::size_t len = s.read_some(asio::buffer(buf), error);
  response.append(buf, len);
  if (error == asio::error::eof) {
    // Connection closed cleanly.
    break;
  } else if (error) {
    throw std::system_error(error);
  }
}

Normally, once the whole response has been received, read_some updates error to match asio::error::eof and we break out of the loop. With the Valhalla instance I'm trying this on, it looks like the end of the stream is never properly signalled, so read_some hangs forever once response is fully populated. This is puzzling, as firing the same requests with curl or wget works like a charm.

I have no idea whether this is related to the server-side internals of Valhalla (prime-server?) or whether it's even reproducible on other Valhalla instances. Until I dig further, I'll probably implement a workaround based on the Content-Length header.
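A workaround along those lines only needs plain string handling to pull the length out of the headers. A minimal sketch (parse_content_length is a hypothetical helper for illustration, not the code that was eventually merged):

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical helper: extract the Content-Length value from raw HTTP
// response headers, so the read loop can stop after exactly that many
// body bytes instead of waiting for an EOF that never comes.
std::size_t parse_content_length(const std::string& headers) {
  const std::string key = "Content-Length:";
  auto pos = headers.find(key);
  if (pos == std::string::npos) {
    throw std::runtime_error("No Content-Length header.");
  }
  pos += key.size();
  // Skip optional whitespace after the colon.
  while (pos < headers.size() && headers[pos] == ' ') {
    ++pos;
  }
  return std::stoul(headers.substr(pos));
}
```

With headers like "HTTP/1.1 200 OK\r\nContent-Length: 42\r\n\r\n" this returns 42, which bounds the remaining read_some calls.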

@nilsnolde
Contributor

Hm, weird... I've never had that problem in any other client library (and never saw a Valhalla issue complaining about it), so my assumption is it's probably the way you do the request (although it seems to follow the asio tutorial...).

Without having a closer look it's hard for me to tell, and I've also never built a client app in C++. Maybe @kevinkreiser has a clue off the top of his head; he's one of the main people behind Valhalla and mostly built prime_server.

@jcoupey
Collaborator Author

jcoupey commented Feb 25, 2021

FWIW the way the query is built is borrowed from what currently works for both OSRM and ORS:

std::string ValhallaWrapper::get_matrix_query(
  const std::vector<Location>& locations) const {
  // Building matrix query for Valhalla.
  std::string query = "GET /" + _matrix_service + "?json=";

  // List locations.
  std::string all_locations;
  for (auto const& location : locations) {
    all_locations += "{\"lon\":" + std::to_string(location.lon()) + "," +
                     "\"lat\":" + std::to_string(location.lat()) + "},";
  }
  all_locations.pop_back(); // Remove trailing ','.

  query += "{\"sources\":[" + all_locations;
  query += "],\"targets\":[" + all_locations;
  query += "],\"costing\":\"" + profile + "\"}";
  query += " HTTP/1.1\r\n";
  query += "Host: " + _server.host + "\r\n";
  query += "Accept: */*\r\n";
  query += "Connection: close\r\n\r\n";

  return query;
}

Maybe some more header magic is required...
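For illustration, here is a standalone sketch of the same assembly showing what actually ends up on the wire. The service, host and profile are hard-coded sample values (sources_to_targets is Valhalla's matrix endpoint), and sample_matrix_query is a hypothetical name, not the class method above:

```cpp
#include <string>
#include <vector>

struct Location {
  double lon;
  double lat;
};

// Standalone sketch of the request assembly above, with hard-coded
// service, host and profile values for illustration only.
std::string sample_matrix_query(const std::vector<Location>& locations) {
  std::string all_locations;
  for (const auto& location : locations) {
    all_locations += "{\"lon\":" + std::to_string(location.lon) +
                     ",\"lat\":" + std::to_string(location.lat) + "},";
  }
  all_locations.pop_back(); // Remove trailing ','.

  std::string query = "GET /sources_to_targets?json=";
  query += "{\"sources\":[" + all_locations;
  query += "],\"targets\":[" + all_locations;
  query += "],\"costing\":\"auto\"}";
  query += " HTTP/1.1\r\n";
  query += "Host: localhost\r\n";
  query += "Accept: */*\r\n";
  query += "Connection: close\r\n\r\n";
  return query;
}
```

For two locations this produces a request line starting with GET /sources_to_targets?json={"sources":[{"lon":...},{"lon":...}],"targets":[...],"costing":"auto"}, followed by the Host, Accept and Connection headers.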

@krypt-n
Contributor

krypt-n commented Feb 25, 2021

I am not entirely sure I found the code responsible for this, but this line https://github.com/kevinkreiser/prime_server/blob/e7035b586aa6fdb79132a48eef4889da07e9a38a/src/http_protocol.cpp#L272
seems to indicate that prime_server only respects Connection: Close, not Connection: close (the comparison is case-sensitive).

I tested it locally, and Connection: Close seems to work (i.e. the connection is closed after the matrix response).
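The robust fix on the server side would be a case-insensitive comparison, since the tokens in a Connection header are case-insensitive per RFC 7230. A minimal sketch of such a compare (iequals is a hypothetical helper, not prime_server's actual code):

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// Hypothetical helper: case-insensitive string equality, the kind of
// compare needed to accept both "close" and "Close".
bool iequals(const std::string& a, const std::string& b) {
  if (a.size() != b.size()) {
    return false;
  }
  for (std::size_t i = 0; i < a.size(); ++i) {
    if (std::tolower(static_cast<unsigned char>(a[i])) !=
        std::tolower(static_cast<unsigned char>(b[i]))) {
      return false;
    }
  }
  return true;
}
```

With this, iequals("close", "Close") holds while a plain == compare does not.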

@jcoupey
Collaborator Author

jcoupey commented Feb 25, 2021

@krypt-n thanks a lot for investigating and spotting the problem. (Sigh) I did waste a couple of hours debugging and working around this...

@jcoupey
Collaborator Author

jcoupey commented Feb 25, 2021

This is now pretty much in a working state. I had to go to the trouble of merging the route legs provided by Valhalla, and included https://github.com/vahancho/polylineencoder as a dependency for that purpose.

@jcoupey jcoupey added this to the v1.10.0 milestone Feb 25, 2021
@kevinkreiser

@jcoupey and @krypt-n sorry for the annoyance on this ☹️ I can make these compares case-insensitive. I've always wanted to dump the hand-rolled HTTP parsing in favor of a proven HTTP parsing library, but I haven't found the time.

@jcoupey jcoupey merged commit 76d04c1 into master Mar 19, 2021
@jcoupey jcoupey deleted the feature/valhalla-integration branch March 19, 2021 11:08
@marko-bogdanovic

Hi, I'm a beginner at this. I've got Valhalla running locally on my computer, but now I'm interested in how to integrate it with VROOM and show it all on a map using Leaflet. Any help would be most welcome; thanks in advance.

@nilsnolde
Contributor

Give me a few minutes, see VROOM-Project/vroom-express#77.

I'll add the defaults real quick in a PR, then you can take it from there.

You'll need the full stack, i.e. also the repo above, vroom-express. The easiest is a docker-compose setup where you add the vroom docker image (which contains vroom and vroom-express) and e.g. our valhalla image. Then you can call vroom-express via HTTP, which returns the JSON response. You'll need to parse the JSON in JS and build your Leaflet objects manually (i.e. the response is not GeoJSON or similar); Julien's example repo probably helps: https://github.com/VROOM-Project/vroom-frontend.

@nilsnolde
Contributor

Ok, here's the right config.yml: VROOM-Project/vroom-express#78.

@nilsnolde
Contributor

Also, be aware you can't (yet) set custom matrix and/or routing parameters, though we hope to support that very soon. If you always need the same request parameters, you can hardcode them in vroom and re-compile from source.

@marko-bogdanovic

My config.yml file now looks like this. But when I run npm start, I get the following error: "Error: listen EACCES: permission denied 0.0.0.0:3000". How do I replace the host 0.0.0.0 with my own?

@nilsnolde
Contributor

You're not using it right somehow... Sorry, but I have no idea which steps you took to get there (apparently not the docker route, but rather a host-native installation). Best to think it through again, go through the documentation, and either open a new issue in the appropriate repository or ask on Stack Overflow, as there's nothing wrong with vroom(-express) on the technical side.

@marko-bogdanovic

Now I have changed the port and launched vroom-express, but I get an error: "/bin/sh: 1: vroom: not found". Sorry for so many questions; I'm an absolute beginner at this.

@jcoupey
Collaborator Author

jcoupey commented Apr 13, 2021

You'll have to check your vroom-express configuration and use the vroom path relevant to your own setup.

I'm locking this conversation as it initially only had to do with integrating Valhalla in the core solving engine. @marko-bogdanovic please make sure to post issues in the relevant repository.

@VROOM-Project VROOM-Project locked as off-topic and limited conversation to collaborators Apr 13, 2021
Successfully merging this pull request may close these issues.

Add Valhalla as routing provider