Introduction

The VitalLens API is a web-based service that estimates a person's vital signs from a video of their face and upper body. It uses the same inference engine as our free iOS app VitalLens.

The following vital signs are currently supported:

  • ❤️ Heart rate (HR)
  • 🫁 Respiratory rate (RR)
  • ❤️ Pulse waveform
  • 🫁 Respiratory waveform

It is theoretically possible to also derive heart rate variability (HRV) from the pulse waveform; however, we have not yet validated its accuracy, and such a study is on our roadmap. The roadmap also includes additional vital signs such as blood oxygen saturation (SpO2) and blood pressure (BP).

Getting access to the VitalLens API

To get your API Key, please visit the API page.

  1. Register for an API Key:
    You will receive an email with a link to verify your account.
  2. Verification & Activation:
    Once verified, we will automatically generate your unique API Key associated with the free plan.
  3. Access:
    Log in on the API page to view your API Key and start using the VitalLens API.

Ways to use the VitalLens API

  • Python Client:
    Our Python client vitallens makes it easy to use the VitalLens API directly from Python. Read the Python Client page to learn more or visit its GitHub Repository.

  • JavaScript Client:
    Our JavaScript client library vitallens.js lets you access the API from both browser environments and Node.js. vitallens.js wraps the API calls and provides additional features such as event-driven updates, fast face detection, and flexible input support. Read the JavaScript Client page to learn more or visit its GitHub Repository.

  • Direct API Calls:
    Make direct HTTP requests to the VitalLens API from the terminal using tools like curl. This approach is useful for quick tests or when integrating with non-Python workflows. Read the Direct API Calls page to learn more.

Coming Soon:

  • iOS SDK
  • Android SDK

How the VitalLens API works

For each request, the VitalLens API accepts a maximum of 900 frames of spatially downsampled video (40x40px) depicting a single person's face and upper body. This video is processed by our estimation engine, which produces estimates for several vital signs. After processing, the API returns the estimates and immediately disposes of both the video frames and the results.
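For longer recordings, the 900-frame limit means splitting the frame sequence into consecutive chunks and sending one request per chunk. A minimal pure-Python sketch (the chunk size comes from the limit stated above; everything else is illustrative):

```python
MAX_FRAMES_PER_REQUEST = 900  # per-request frame limit of the VitalLens API

def chunk_frames(frames, max_len=MAX_FRAMES_PER_REQUEST):
    """Split a sequence of frames into consecutive chunks of at most max_len."""
    return [frames[i:i + max_len] for i in range(0, len(frames), max_len)]

# Example: a 30 fps, 60-second clip has 1800 frames -> two requests.
chunks = chunk_frames(list(range(1800)))
```

Note that the clients listed above handle this chunking (and the spatial downsampling to 40x40px) for you; this only matters if you make direct API calls yourself.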

Accuracy of the estimates

TL;DR: Estimates have a high degree of accuracy for high-quality video. Each estimate includes an estimation confidence.

In our study conducted on the Vital Videos dataset (289 unique participants), we observed a mean absolute error of 0.71 bpm for heart rate and 0.76 breaths/min for respiratory rate.

For each vital sign, the API returns an estimation confidence (0% to 100%). Lower confidence values indicate that the quality or conditions of the video may have affected accuracy. Following our guidelines (see below) can help maximize estimation reliability.
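One common pattern is to discard or flag estimates whose confidence falls below an application-specific threshold. The sketch below shows this; the response structure and field names here are hypothetical illustrations, not the API's documented schema:

```python
# Hypothetical response fragment -- field names are illustrative only.
response = {
    "heart_rate": {"value": 62.4, "unit": "bpm", "confidence": 97},
    "respiratory_rate": {"value": 15.1, "unit": "breaths/min", "confidence": 42},
}

def reliable_estimates(vitals, min_confidence=80):
    """Keep only vital sign estimates whose confidence (0-100) meets the threshold."""
    return {name: est for name, est in vitals.items()
            if est["confidence"] >= min_confidence}

kept = reliable_estimates(response)  # only heart_rate survives the 80% cutoff
```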

Guidelines to follow for accurate vital signs estimation

To ensure optimal performance:

  • Minimize Camera Movement:
    Keep the camera steady.
  • Subject Positioning:
    Have the subject remain still and face the camera directly.
  • Lighting:
    Ensure a bright and steady light source.
  • Video Quality:
    Avoid excessive video compression—compression artifacts can degrade the subtle physiological signal the estimation relies on.
  • Internet Connection:
    Use a stable and fast connection for best API responsiveness.

These guidelines are based on our study, which also examined how environmental factors influence estimation accuracy.

Notes on user privacy and data collection

We do not store any of your video or vital sign estimate data. After processing a request, all video frames and results are immediately disposed of. It is your responsibility to store the returned estimates if needed. For more details, please review our Terms of Service and Privacy Policy.
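Because nothing is retained server-side, callers who need a record must persist the returned estimates themselves. A minimal sketch, assuming the estimates arrive as a JSON-serializable dict (the directory and filename scheme are illustrative):

```python
import json
import time
from pathlib import Path

def save_estimates(estimates, directory="vitallens_results"):
    """Persist a JSON-serializable estimates dict under a timestamped filename."""
    out_dir = Path(directory)
    out_dir.mkdir(exist_ok=True)
    path = out_dir / f"estimates_{int(time.time())}.json"
    path.write_text(json.dumps(estimates, indent=2))
    return path

saved = save_estimates({"heart_rate": 62.4})
```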

Disclaimer

VitalLens is not a medical device and its estimates are not intended for any medical purposes. Please also see our Terms of Service.