Python API Reference
This reference is automatically generated from the source code.
The Client
vitallens.client.VitalLens
`__init__(method='vitallens', api_key=None, proxies=None, detect_faces=True, estimate_rolling_vitals=True, fdet_max_faces=1, fdet_fs=1.0, fdet_score_threshold=0.9, fdet_iou_threshold=0.3, export_to_json=True, export_dir='.', mode=None)`

Initialises the client. Loads the face detection model if necessary.
You can choose from several rPPG methods:

- `vitallens`: Recommended. Uses the VitalLens API and automatically selects the best model for your API key.
- `vitallens-2.0`: Force the use of the VitalLens 2.0 model.
- `vitallens-1.0`: Force the use of the VitalLens 1.0 model.
- `vitallens-1.1`: Force the use of the VitalLens 1.1 model.
- `pos`, `chrom`, `g`: Classic rPPG algorithms that run locally and do not require an API key.
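For example, client construction might look like this (a minimal sketch, assuming `vitallens-python` is installed; the API key string is a placeholder):

```python
from vitallens import VitalLens

# API-backed method: the best available model is selected for your key.
vl = VitalLens(method='vitallens', api_key='YOUR_API_KEY')

# Classic method: runs locally, no API key required.
vl_local = VitalLens(method='pos')
```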
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `method` | `Union[Method, str]` | The rPPG method to be used for inference. | `'vitallens'` |
| `api_key` | `str` | Usage key for the VitalLens API (required for `vitallens` methods, unless using a proxy). | `None` |
| `proxies` | `dict` | Dictionary mapping protocol to the URL of the proxy. | `None` |
| `detect_faces` | `bool` | `True` if faces need to be detected, otherwise `False`. | `True` |
| `estimate_rolling_vitals` | `bool` | Set `True` to compute rolling vitals if supported by the method. | `True` |
| `fdet_max_faces` | `int` | The maximum number of faces to detect (if necessary). | `1` |
| `fdet_fs` | `float` | Frequency [Hz] at which faces should be scanned. Detections are linearly interpolated for the remaining frames. | `1.0` |
| `fdet_score_threshold` | `float` | Face detection score threshold. | `0.9` |
| `fdet_iou_threshold` | `float` | Face detection IoU threshold. | `0.3` |
| `export_to_json` | `bool` | If `True`, write results to a JSON file. | `True` |
| `export_dir` | `str` | The directory to which JSON files are written. | `'.'` |
`__call__(video, faces=None, fps=None, override_fps_target=None, override_global_parse=None, export_filename=None)`
Runs rPPG inference from a video file or in-memory video data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `video` | `Union[ndarray, str]` | The video to analyze. Either a `np.ndarray` of shape `(n_frames, h, w, 3)` with a sequence of frames in unscaled uint8 RGB format, or a path to a video file. Note that aggressive video encoding destroys the rPPG signal. | required |
| `faces` | `Union[ndarray, list]` | Face boxes in flat point form, containing `[x0, y0, x1, y1]` coords. Ignored unless `detect_faces=False`. Pass a `list` or `np.ndarray` of shape `(n_faces, n_frames, 4)` for multiple faces detected on multiple frames, shape `(n_frames, 4)` for a single face detected on multiple frames, or shape `(4,)` for a single face detected globally. | `None` |
| `fps` | `float` | Sampling frequency of the input video. Required if `type(video) == np.ndarray`. | `None` |
| `override_fps_target` | `float` | Target fps at which rPPG inference should be run (optional). If not provided, the default of the selected method is used. | `None` |
| `override_global_parse` | `bool` | If `True`, always use global parse. If `False`, don't use global parse. If `None`, choose based on the video. | `None` |
| `export_filename` | `str` | Filename for JSON export if applicable. | `None` |
Returns: result: Analysis results as a list of faces in the following format:
```
[
  {
    'face': {
      'coordinates': [[247, 52, 444, 332], ...],
      'confidence': [0.6115, 0.9207, 0.9183, ...],
      'note': "Face detection coordinates..."
    },
    'vitals': {
      'heart_rate': {
        'value': 60.5,
        'unit': 'bpm',
        'confidence': 0.9242,
        'note': 'Global estimate of heart rate...'
      },
      <other vitals...>
    },
    'waveforms': {
      'ppg_waveform': {
        'data': [0.1, 0.2, ...],
        'unit': 'unitless',
        'confidence': [0.9, 0.9, ...],
        'note': '...'
      },
      <other waveforms...>
    },
    'message': <Message about estimates>
  },
  {
    <same structure for face 2 if present>
  },
  ...
]
```
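Since the result is plain Python data, vitals can be read with ordinary dict access. The sample below is hypothetical (values made up to mirror the documented structure):

```python
# Hypothetical result mirroring the documented structure; all values are made up.
result = [
    {
        'face': {
            'coordinates': [[247, 52, 444, 332]],
            'confidence': [0.9207],
            'note': 'Face detection coordinates...',
        },
        'vitals': {
            'heart_rate': {'value': 60.5, 'unit': 'bpm', 'confidence': 0.9242, 'note': '...'},
        },
        'waveforms': {
            'ppg_waveform': {'data': [0.1, 0.2], 'unit': 'unitless', 'confidence': [0.9, 0.9], 'note': '...'},
        },
        'message': 'OK',
    },
]

for i, face in enumerate(result):
    hr = face['vitals'].get('heart_rate')
    if hr is not None:
        print(f"Face {i}: heart rate {hr['value']} {hr['unit']} (confidence {hr['confidence']})")
# → Face 0: heart rate 60.5 bpm (confidence 0.9242)
```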
`stream(on_result=None)`
Returns a context manager for real-time vital sign estimation.
This method creates a StreamSession that manages background inference threads,
sliding window buffers, and signal state via vitallens-core. It is designed
for low-latency applications like webcam feeds.
Usage
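A real session requires camera frames and, for `vitallens` methods, an API key, so the sketch below illustrates only the documented interaction pattern with a hypothetical stand-in class. `MockStreamSession` and its `process()` method are not part of the library; `get_result()` and the `on_result` callback mirror the documented interface:

```python
from contextlib import contextmanager

class MockStreamSession:
    """Hypothetical stand-in for StreamSession; fabricates results instead of running inference."""
    def __init__(self, on_result=None):
        self.on_result = on_result
        self._latest = None

    def process(self, frame):
        # A real session buffers frames and runs background inference;
        # here we just fabricate a result in the documented format.
        self._latest = [{'vitals': {'heart_rate': {'value': 60.5, 'unit': 'bpm'}}}]
        if self.on_result is not None:
            self.on_result(self._latest)

    def get_result(self):
        # Latest sliding-window estimates (same format as __call__ results).
        return self._latest

@contextmanager
def stream(on_result=None):
    # Mirrors the shape of VitalLens.stream(on_result=...): entering yields a
    # session; exiting is where a real session would stop background threads.
    session = MockStreamSession(on_result=on_result)
    try:
        yield session
    finally:
        pass

received = []
with stream(on_result=received.append) as session:
    session.process(frame=None)  # a webcam frame would go here in real use
    latest = session.get_result()

print(latest[0]['vitals']['heart_rate']['value'], len(received))
# → 60.5 1
```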
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `on_result` | `Callable` | An optional callback function triggered automatically whenever new inference results are available. The function should accept one argument (the results list). | `None` |
Returns:
| Name | Type | Description |
|---|---|---|
| `session` | `StreamSession` | A context manager for real-time vital sign estimation. |
Note: The results returned by `get_result()` or the callback follow the same format as `__call__`, but represent the physiological state of the current sliding window rather than a global file average.