Python API Reference
This reference is automatically generated from the source code.
The Client
vitallens.client.VitalLens
__init__(method='vitallens', mode=Mode.BATCH, api_key=None, proxies=None, detect_faces=True, estimate_rolling_vitals=True, fdet_max_faces=1, fdet_fs=1.0, fdet_score_threshold=0.9, fdet_iou_threshold=0.3, export_to_json=True, export_dir='.')
Initialises the client and loads the face detection model if necessary.
You can choose from several rPPG methods:
- `vitallens`: Recommended. Uses the VitalLens API and automatically selects the best model for your API key.
- `vitallens-2.0`: Force the use of the VitalLens 2.0 model.
- `vitallens-1.0`: Force the use of the VitalLens 1.0 model.
- `vitallens-1.1`: Force the use of the VitalLens 1.1 model.
- `pos`, `chrom`, `g`: Classic rPPG algorithms that run locally and do not require an API key.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `method` | `Union[Method, str]` | The rPPG method to be used for inference. | `'vitallens'` |
| `mode` | `Mode` | Operate in batch or burst mode. | `BATCH` |
| `api_key` | `str` | Usage key for the VitalLens API (required for `vitallens` methods, unless using a proxy). | `None` |
| `proxies` | `dict` | Dictionary mapping protocol to the URL of the proxy. | `None` |
| `detect_faces` | `bool` | Whether to run face detection on the input. | `True` |
| `estimate_rolling_vitals` | `bool` | Set `True` to also estimate rolling vitals where supported. | `True` |
| `fdet_max_faces` | `int` | The maximum number of faces to detect (if necessary). | `1` |
| `fdet_fs` | `float` | Frequency [Hz] at which faces should be scanned. Detections are linearly interpolated for the remaining frames. | `1.0` |
| `fdet_score_threshold` | `float` | Face detection score threshold. | `0.9` |
| `fdet_iou_threshold` | `float` | Face detection IoU threshold. | `0.3` |
| `export_to_json` | `bool` | If `True`, export results to a JSON file. | `True` |
| `export_dir` | `str` | The directory to which JSON files are written. | `'.'` |
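Per the `fdet_fs` parameter above, faces are scanned at a fixed rate (e.g., 1 Hz) and the boxes for intermediate frames are linearly interpolated. A minimal sketch of that interpolation with NumPy (the variables and the standalone computation below are illustrative, not part of the library):

```python
import numpy as np

# Two face boxes detected 1 s apart (fdet_fs=1.0) on a 30 fps video,
# each in flat point form [x0, y0, x1, y1].
box_t0 = np.array([100.0, 80.0, 200.0, 180.0])
box_t1 = np.array([110.0, 90.0, 210.0, 190.0])

fps = 30
# Interpolation weights for every frame between the two detections.
alphas = np.linspace(0.0, 1.0, fps + 1)                      # shape (31,)
boxes = (1 - alphas)[:, None] * box_t0 + alphas[:, None] * box_t1

print(boxes.shape)   # (31, 4)
print(boxes[15])     # midpoint box: [105.  85. 205. 185.]
```

Lowering `fdet_fs` reduces face detection cost at the expense of tracking accuracy for fast head movement.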
__call__(video, faces=None, fps=None, override_fps_target=None, override_global_parse=None, export_filename=None)
Runs rPPG inference from a video file or in-memory video data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `video` | `Union[ndarray, str]` | The video to analyze. Either a `np.ndarray` of shape `(n_frames, h, w, 3)` containing a sequence of frames in unscaled `uint8` RGB format, or a path to a video file. Note that aggressive video encoding destroys the rPPG signal. | *required* |
| `faces` | `Union[ndarray, list]` | Face boxes in flat point form, containing `[x0, y0, x1, y1]` coordinates. Ignored unless `detect_faces=False`. Pass a `list` or `np.ndarray` of shape `(n_faces, n_frames, 4)` for multiple faces detected on multiple frames, shape `(n_frames, 4)` for a single face detected on multiple frames, or shape `(4,)` for a single face detected globally. | `None` |
| `fps` | `float` | Sampling frequency of the input video. Required if `type(video) == np.ndarray`. | `None` |
| `override_fps_target` | `float` | Target fps at which rPPG inference should be run (optional). If not provided, uses the default of the selected method. | `None` |
| `override_global_parse` | `bool` | If `True`, always use global parse. If `False`, don't use global parse. If `None`, choose based on the video. | `None` |
| `export_filename` | `str` | Filename for JSON export if applicable. | `None` |
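To illustrate the three accepted `faces` shapes, here is a hypothetical setup for a 100-frame video with illustrative box values:

```python
import numpy as np

n_frames = 100

# Shape (4,): one box applied globally to the whole video.
face_global = np.array([120, 60, 300, 280])

# Shape (n_frames, 4): one face tracked across every frame.
face_per_frame = np.tile(face_global, (n_frames, 1))

# Shape (n_faces, n_frames, 4): two faces tracked across every frame
# (second box offset by 50 px just to make it distinct).
faces_multi = np.stack([face_per_frame, face_per_frame + 50])

print(face_global.shape, face_per_frame.shape, faces_multi.shape)
# (4,) (100, 4) (2, 100, 4)
```

Remember that these arrays are only consulted when the client was constructed with `detect_faces=False`.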
Returns: `result`: Analysis results as a list of faces in the following format:
```
[
  {
    'face': {
      'coordinates': <Face coordinates for each frame as np.ndarray of shape (n_frames, 4)>,
      'confidence': <Face live confidence for each frame as np.ndarray of shape (n_frames,)>,
      'note': <Explanatory note>
    },
    'vital_signs': {
      'heart_rate': {
        'value': <Estimated global value as float scalar>,
        'unit': <Value unit>,
        'confidence': <Estimation confidence as float scalar>,
        'note': <Explanatory note>
      },
      <other vitals...>
    },
    'message': <Message about estimates>
  },
  {
    <same structure for face 2 if present>
  },
  ...
]
```
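Reading values back out of this structure is plain dictionary and list access. The sample result below is hand-built to mirror the documented format; all numbers in it are made up for illustration:

```python
import numpy as np

# Hand-built sample mirroring the documented result format (values made up).
result = [
    {
        'face': {
            'coordinates': np.zeros((100, 4)),
            'confidence': np.ones(100),
            'note': 'Face detected in all frames.',
        },
        'vital_signs': {
            'heart_rate': {
                'value': 62.0,
                'unit': 'bpm',
                'confidence': 0.97,
                'note': 'Estimate of the global heart rate.',
            },
        },
        'message': 'Estimates computed successfully.',
    },
]

# One entry per detected face; vitals are nested under 'vital_signs'.
for i, face in enumerate(result):
    hr = face['vital_signs']['heart_rate']
    print(f"Face {i}: {hr['value']:.1f} {hr['unit']} (confidence {hr['confidence']:.2f})")
    # Face 0: 62.0 bpm (confidence 0.97)
```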
reset()
Resets the client state if applicable.
Configuration Enums
vitallens.enums.Mode
Bases: IntEnum