Image by GeoJango Maps

HERE Routing API

Conducting Usability Testing on a new Routing API and Documentation

The Routing API for HERE Platform was created to help users build optimized routes for their applications, leveraging HERE's location data on the platform.

I conducted usability testing on the API to uncover any usability issues with the API and its documentation.

overview

HERE Platform is a cloud platform that allows users to bring and work with location data, combining and transforming it to develop solutions for their own use cases.

As part of the platform, HERE created APIs called Services, such as vehicle routing, pedestrian routing, geocoding, and search, that allow users to leverage their location data on the platform by using these out-of-the-box tools to develop products.

Before these were introduced onto the HERE Platform, we wanted to conduct user testing to better understand if users understood what these services were, what they were able to do with them, and whether they were actually able to get started using them.

Learn more about HERE routing below:

https://developer.here.com/products/routing

Team

Ashley Callaway, UX Research

Maria Von Watzdorf, Product Manager

Method

Remote Moderated Usability Testing

Outcome

Research provided guidance for adding both the Routing API and other HERE APIs to the Platform services. In addition to reworking the concept & language of "services," the research informed later initiatives to improve learning resources and documentation for the HERE Platform.

What is the Design Problem?

With the Routing API moving to the HERE Platform to be available for customers to leverage in their development work under a collection of APIs called "Services", we wanted to better understand whether our users could understand what services were for, locate the correct API for their project, and then understand the documentation sufficiently to get started. 

Who are the users?

The user base for the Routing API and its documentation is broader than for other aspects of the HERE Platform. Our typical users are recruited from data engineers & data scientists, particularly those familiar with working with geolocation data.

In this case, any developer working on a project where they need to generate a route should be able to use the Routing API in their work.

methodology

As our goal was to evaluate the design of services, the API, and the documentation, usability testing was my method of choice. I conducted remote moderated usability testing with 5 software developers. One was external to HERE, the rest were from internal HERE teams that used the Platform in their work, but were not part of the Platform team itself. All had experience working with APIs in their work, but their frequency of using new APIs ranged from infrequent to daily.

Each session started with an interview to better understand the participant’s background, experience, and familiarity working with both APIs and the HERE Platform. The sessions were scheduled for an hour, with at least half of that saved for the process of making the service request.

Usability testing allowed for the participants to be given tasks within the scope of what we knew they could complete with the state of the API, and participants shared their own screens to enable them to use whichever tool they preferred to make the API request.

Primary Research Questions:
  • Are users able to figure out which service to use, and then successfully use the service to make an API call?

  • Are users able to readily identify the appropriate services for the task at hand? How do they make sense of the concept of a service?

  • What information is most/least helpful in finding a service API to use?

  • How usable is the documentation in helping users get started with and implement solutions using the particular service?

  • Does the API structure make sense to the user?

  • How do users understand working with services as relating to the rest of the Platform?

  • Does it feel like a consistent experience?

  • What is clear vs. unclear?

Tasks

Participants were given a scenario that they were a developer working on an application, and they were looking for an API to use with their location data on the Platform.​

Task 1: Explore HERE’s offerings on the Platform to see if there is anything that looks like it could be relevant.

 

Success for this task was finding the correct service (the Vehicle Routing API). Because this was such an open-ended task, we were able to see what information and documents participants used and where they thought to look.

 

Task 2: Using the service you previously found, make a request to return two different possible routes for a car from Boston, Massachusetts (42.3601° N, 71.0589° W) to New York, NY (40.7128° N, 74.0060° W) that avoid highways and toll roads, but still minimize travel time.

 

Success for this task was measured by making the correct API call and getting the correct response. There were only two tasks because they were time intensive and we wanted to ensure we had plenty of time to troubleshoot any errors.
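For context, a request like the one in Task 2 can be sketched in a few lines of Python. This is only an illustration: the endpoint and parameter names below reflect the HERE Routing v8 API as generally documented, and should be verified against the current API Reference before use.

```python
from urllib.parse import urlencode

# Routing v8 endpoint (illustrative; confirm against the API Reference)
BASE_URL = "https://router.hereapi.com/v8/routes"

def build_route_request(origin, destination, api_key):
    """Build a request URL for two alternative car routes that avoid
    highways and toll roads. Coordinates are (latitude, longitude)."""
    params = {
        "transportMode": "car",
        "origin": f"{origin[0]},{origin[1]}",
        "destination": f"{destination[0]},{destination[1]}",
        "alternatives": 1,  # one alternative => two routes total
        "avoid[features]": "controlledAccessHighway,tollRoad",
        "return": "polyline,summary",
        "apiKey": api_key,
    }
    return f"{BASE_URL}?{urlencode(params)}"

# Boston to New York, as in Task 2
url = build_route_request((42.3601, -71.0589), (40.7128, -74.0060), "YOUR_KEY")
print(url)
```

Participants assembled a request like this in whatever tool they preferred (browser, curl, Postman, or code), which is why screen sharing mattered for observing their process.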

constraints

Readiness of the APIs

As the goal was to get early feedback on the APIs, a major constraint was the readiness of the APIs to test with. Ideally, we would get feedback on the services before too much effort went into development. One method I considered was 'Wizard of Oz' testing, where I as the tester would replicate the API in order to get earlier-stage feedback. However, the wide range of errors, responses, etc. I would have to replicate would have been too great to reliably 'fake.' Ultimately, we decided to start the test with the Vehicle Routing API, as it was the closest to being ready, and apply the feedback to the others as they were developed. Further testing of these other services was also planned for later.

User Availability

Another constraint was availability of users. The users would likely be software engineers/developers working to develop a solution using location data, but due to time constraints and a limited ability to provide incentives, this is a difficult group to recruit from. External users and customers were limited and the teams managing these accounts were reluctant to provide access. Recruiting from internal HERE employees using the Platform allowed us to speed up the recruitment process and alleviated budget constraints. Working with internal users introduces bias because they are more likely to have a positive view of HERE, and typically know more than the average user. Luckily, there were a good number of teams who used the Platform for their data and development work, but were still separated from any teams that planned and built the Platform, which reduced the amount of bias introduced.

Time Limitations

We also had a limited time we could reasonably ask our participants to spend with us. In planning the tasks, some steps such as authentication with the Platform were skipped, as they were already known pain points. Conducting pilot tests also gave me a sense of how long the tests would take before putting them in front of actual participants. These pilot tests also allowed me to get familiar with the different errors users might encounter, and to get guidance from engineering on how to help participants get 'unstuck' and move on to subsequent parts of the test.

analysis

Each test was recorded for analysis afterwards. To analyze the data, I used affinity diagramming to group the data into similar findings. This allowed me to see how often issues came up throughout the tests- whether an issue was a random mistake or a consistent problem- and made it easier to pull out larger issues that may not be tied to one specific task or part of the experience or interface.

I typically assign each issue a ‘UX severity’ rating based on how often an issue came up and how major a blocker it was for the user. If it was a minor mistake and only a couple of users made it, then it would be a low severity issue. An issue that many users encountered and that was also a major blocker to the task would be a high severity issue. Medium issues would be those that either lots of users faced but didn’t cause too much trouble, or that few users encountered but blocked them from moving forward with the task or their work.
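The rating rubric above amounts to a two-axis matrix: frequency of occurrence crossed with degree of blockage. As a rough sketch (not a tool used in the study), it can be expressed as:

```python
def ux_severity(frequency: str, impact: str) -> str:
    """Map an issue's frequency ('low'/'high' share of users affected)
    and impact ('low'/'high' degree of blockage) to a severity rating."""
    if frequency == "high" and impact == "high":
        return "high"    # many users hit it AND it blocked the task
    if frequency == "low" and impact == "low":
        return "low"     # a minor mistake only a couple of users made
    return "medium"      # widespread but mild, or rare but blocking

print(ux_severity("high", "low"))
```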

I like to emphasize to stakeholders that this is not necessarily a prioritization- sometimes the low and medium severity issues are easier to fix and are ‘low hanging fruit’ to make UX improvements. They are more intended to help compare which issues had the biggest impact to the user experience.

results

For the first task, asking users to locate the correct API to use, findings included:

Users did not have a clear sense of what a service was, or how they fit in with the rest of the HERE Platform

This was a high severity issue, both because many participants mentioned it but also because it was a major pain point to figuring out what they could do with the services. Recommendations for improvements included reworking the text describing services on the launcher to better indicate what they are for, as well as including “API” in the title.

Users struggled to find Services

When scanning the page for options that would meet their need, users mentioned looking for “API” as a keyword. Again, including this in the name would help users find them and would better meet user expectations.

Search was used heavily (and unsuccessfully) by users

When looking for something that would be useful for the scenario, or when looking for something specific in the documentation, users used search. This information-seeking behavior was worth noting, and supported its prioritization on the roadmap for improvement.

Users were unsure of the granularity of service options

Users weren’t sure why they had to use a different service for pedestrian routing than vehicle routing. The recommendation made here was to work to bundle the services together so that a singular “routing” service API could be used by users without having to specifically make a distinction as to mode of transportation.

When it came to the second task of actually making the API call, issues included:

Documentation is difficult to navigate, especially between Developers Guide and API Reference

The organization of the documentation made it difficult for users to find the information they needed as they tried to figure out what information to include in their call. They frequently jumped between the Developers Guide and API Reference. Including these in a single document would put all relevant information on the same page.

 

Users did not expect an encoded polyline as the response

A long encoded polyline wasn’t what users expected to get back from their API request. Just looking at the polyline, users weren’t sure what it was or how it could be used. The recommendation was to add information about what the response would include and how to use it in the concepts section of the documentation.
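For background, Routing v8 responses encode route geometry with HERE's "flexible polyline" format, a delta-plus-varint scheme with a precision header, for which HERE publishes decoder libraries. To illustrate the general principle behind what participants were seeing, the sketch below decodes the classic Google-style encoded polyline, which uses the same delta/varint idea without the header:

```python
def decode_polyline(encoded: str, precision: int = 5):
    """Decode a Google-style encoded polyline into (lat, lng) tuples.
    Each coordinate is stored as a zigzag-encoded delta from the
    previous point, packed into 5-bit chunks offset by 63."""
    coords, index, lat, lng = [], 0, 0, 0
    factor = 10 ** precision
    while index < len(encoded):
        for is_lng in (False, True):
            shift, result = 0, 0
            while True:
                b = ord(encoded[index]) - 63
                index += 1
                result |= (b & 0x1F) << shift
                shift += 5
                if b < 0x20:  # high bit clear: last chunk of this value
                    break
            # Undo zigzag encoding (sign stored in the low bit)
            delta = ~(result >> 1) if result & 1 else result >> 1
            if is_lng:
                lng += delta
            else:
                lat += delta
        coords.append((lat / factor, lng / factor))
    return coords

# Canonical example from the encoded-polyline format documentation
print(decode_polyline("_p~iF~ps|U_ulLnnqC_mqNvxq`@"))
# → [(38.5, -120.2), (40.7, -120.95), (43.252, -126.453)]
```

The opacity of this format to someone just reading the raw response is exactly why the recommendation was to explain the response, and how to use it, in the concepts documentation.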

Users were unclear what “Motorway” meant

Our participants were from a wide variety of locations- Europe, the US, and India- and therefore had varied interpretations as to what a motorway was. The recommendation was to add clear definitions in the documentation as to what was or wasn’t included in these parameters.

Outcomes

Results were presented to stakeholders in two separate sharing sessions- one for direct stakeholders on the services team, with time to discuss and triage findings, and another for the wider Platform team with a shorter Q&A at the end.

As a result of the usability testing, the design and product team was able to make prioritization decisions about the vehicle routing service (now just “Routing”). The feedback about learning was carried forward into other UX initiatives surrounding developer learning experiences, and was applied to HERE’s other services as well.

Full Report

© Ashley Callaway 2023