Direct API Calls

The VitalLens API is a web-based service that estimates a person's vital signs from a video of their face and upper body. To get your API Key or learn more, please visit the API page.

You can directly call the VitalLens API, which uses the same inference engine as our free iOS app VitalLens.

  • Expects a cropped video of a person's face and upper body, downsampled to 40x40px (a payload-size sketch follows this list)
  • Supports heart rate, respiratory rate, pulse waveform, and respiratory waveform estimation, and returns an estimation confidence for each vital
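
Because the video is sent as raw RGB24 frames, the payload size is determined by the frame count alone. Here is a minimal Python sketch of the arithmetic; the resolution, pixel format, and frame limits come from the list above, while the helper itself and the rough 4/3 base64 overhead factor are our own illustration:

# Illustrative sketch: estimate the request payload size for a clip.
# The 40x40 resolution, RGB24 format, and 16-900 frame limits come from
# the documentation; the helper itself is hypothetical.

WIDTH, HEIGHT, CHANNELS = 40, 40, 3   # RGB24 uses 3 bytes per pixel
MIN_FRAMES, MAX_FRAMES = 16, 900

def raw_video_size(n_frames: int) -> int:
    """Bytes of raw RGB24 video before base64 encoding."""
    if not MIN_FRAMES <= n_frames <= MAX_FRAMES:
        raise ValueError(f"frame count must be between {MIN_FRAMES} and {MAX_FRAMES}")
    return n_frames * WIDTH * HEIGHT * CHANNELS

print(raw_video_size(900))           # 4320000 bytes of raw video
print(raw_video_size(900) * 4 // 3)  # ~5.8 MB once base64-encoded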

Endpoint

POST /vitallens-v2

This endpoint is used to submit a video for processing and receive vital sign estimates in return.

Base URL:

https://api.rouast.com/vitallens-v2

Request

Headers

  • x-api-key: Your unique API key (required).
  • Content-Type: application/json (required).

Body

The body of the request should be a JSON object that includes the following fields:

  • video: The base64-encoded (cropped) video showing a person's face and upper body. The video must be downsampled to 40x40 pixels and formatted as raw RGB24. The number of video frames must be between 16 and 900.
  • fps: The frames per second of the video (floating point value). This can be determined using ffprobe as shown in the example below.
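
A minimal request body therefore has this shape (the values are illustrative):

{"video": "<base64-encoded raw RGB24 frames>", "fps": 29.97}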

Example Request

Here is an example of how to prepare the video and send a request using curl:

input_video="video.mp4"
api_key="YOUR_API_KEY"

# Extract the frames per second (fps) from the video
# (ffprobe reports a fraction such as 30000/1001, so divide numerator by denominator)
fps=$(ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=nw=1:nk=1 "$input_video" | awk -F/ '{ if (NF == 2) printf "%.2f", $1 / $2; else printf "%.2f", $1 }')

# Downsample to 40x40px, convert to raw RGB24, and base64-encode
# (strip newlines so the encoded video stays on a single line in the JSON payload)
video=$(ffmpeg -i "$input_video" -vf "scale=40:40" -pix_fmt rgb24 -f rawvideo - | base64 | tr -d '\n')

# Prepare the JSON payload
echo "{\"video\": \"$video\", \"fps\": \"$fps\"}" > payload.json

# Send the request to the API
curl -X POST -H "x-api-key: $api_key" -H "Content-Type: application/json" --data-binary @payload.json https://api.rouast.com/vitallens-v2

Please note that this example assumes that video.mp4 has already been cropped to the subject's face and upper body.
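
The same flow can also be expressed in Python. This is an illustrative sketch rather than an official client: it assumes ffmpeg and ffprobe are on your PATH and that the requests package is installed, and the helper names (probe_fps, encode_video) are ours.

import base64
import json
import subprocess

import requests

API_URL = "https://api.rouast.com/vitallens-v2"
API_KEY = "YOUR_API_KEY"

def probe_fps(path: str) -> float:
    """Read the frame rate with ffprobe (e.g. '30000/1001' -> 29.97)."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=r_frame_rate",
         "-of", "default=nw=1:nk=1", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    num, _, den = out.partition("/")
    return float(num) / float(den) if den else float(num)

def encode_video(path: str) -> str:
    """Downsample to 40x40 raw RGB24 and base64-encode, mirroring the curl example.

    Note: the API accepts 16-900 frames, so trim longer clips before sending.
    """
    raw = subprocess.run(
        ["ffmpeg", "-i", path, "-vf", "scale=40:40",
         "-pix_fmt", "rgb24", "-f", "rawvideo", "-"],
        capture_output=True, check=True,
    ).stdout
    return base64.b64encode(raw).decode("ascii")

payload = {"video": encode_video("video.mp4"), "fps": probe_fps("video.mp4")}
response = requests.post(
    API_URL,
    headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
    data=json.dumps(payload),
)
print(response.status_code)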

Returned estimation results

When called directly with between 16 and 900 frames, the API returns estimates of the following vital signs:

Name                  Type                 Returned if
heart_rate            Global value         Video at least 8 seconds long and fps provided
ppg_waveform          Continuous waveform  Always returned
respiratory_rate      Global value         Video at least 8 seconds long and fps provided
respiratory_waveform  Continuous waveform  Always returned
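
In other words, the global heart_rate and respiratory_rate values require at least 8 seconds of video at the supplied fps. A small Python sketch of that check (the helper is illustrative, not part of the API):

MIN_SECONDS = 8.0  # minimum clip length for global rates, per the table above

def rates_expected(n_frames: int, fps: float) -> bool:
    """True if the clip is long enough for heart_rate / respiratory_rate."""
    return n_frames / fps >= MIN_SECONDS

print(rates_expected(240, 30.0))  # True  -> 8.0 s, global rates returned
print(rates_expected(120, 30.0))  # False -> 4.0 s, waveforms only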

The estimation results are returned with the following structure:

{
  'face': {
    'confidence': <Face live confidence for each frame as np.ndarray of shape (n_frames,)>,
    'note': <Explanatory note>
  },
  'vital_signs': {
    'heart_rate': {
      'value': <Estimated global value as float scalar>,
      'unit': <Value unit>,
      'confidence': <Estimation confidence as float scalar>,
      'note': <Explanatory note>
    },
    'respiratory_rate': {
      'value': <Estimated global value as float scalar>,
      'unit': <Value unit>,
      'confidence': <Estimation confidence as float scalar>,
      'note': <Explanatory note>
    },
    'ppg_waveform': {
      'data': <Estimated waveform value for each frame as np.ndarray of shape (n_frames,)>,
      'unit': <Data unit>,
      'confidence': <Estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
      'note': <Explanatory note>
    },
    'respiratory_waveform': {
      'data': <Estimated waveform value for each frame as np.ndarray of shape (n_frames,)>,
      'unit': <Data unit>,
      'confidence': <Estimation confidence for each frame as np.ndarray of shape (n_frames,)>,
      'note': <Explanatory note>
    }
  },
  "message": <Message about estimates>
}
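
Continuing from the Python request sketch above, the response can be unpacked as follows. This is an illustrative sketch: the key names follow the structure shown, and we assume the per-frame arrays described as np.ndarray above arrive as plain JSON lists over the wire.

# Unpack the fields shown in the structure above.
result = response.json()  # `response` from the request sketch above
vitals = result["vital_signs"]

# heart_rate is only present for clips of at least 8 seconds with fps provided
hr = vitals.get("heart_rate")
if hr is not None:
    print(f"Heart rate: {hr['value']} {hr['unit']} (confidence {hr['confidence']:.2f})")

# Waveforms are always returned, one sample per frame
ppg = vitals["ppg_waveform"]
print(f"PPG waveform: {len(ppg['data'])} samples ({ppg['unit']})")

print(result["message"])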