Sensors

This module provides a way to specify sensor properties using a URDF-like format. We provide a few implementation examples for depth camera, RGB camera, and IMU.

We also use RaiSim to interface with the real hardware, so the same code can control both the simulated and the real robot. For this reason the sensor package is designed for real robots as well; this may seem confusing at first, but it keeps the code simpler and shorter.

Update

The sensor update method can be set using raisim::Sensor::setMeasurementSource. Available sources are:

  • RAISIM: measurement is updated from the simulation side. This is the default for IMU and spinning LiDAR sensors, and can be invoked manually via sensor->update(*world) where supported.

  • VISUALIZER: a visualizer fills the camera buffers (RaisimUnity/Unreal or the rayrai TCP viewer).

  • MANUAL: user code is responsible for writing the buffers (recommended for in-process rayrai rendering or real hardware integration).

Notes by sensor type (per the public headers):

  • RGBCamera does not implement a RaiSim-side update. Use a visualizer or manual updates (rayrai + raisin::Camera::getRawImage).

  • DepthCamera can be updated by RaiSim (DepthCamera::update(*world)) or by visualizers/manual updates (see the sketch after this list).

  • InertialMeasurementUnit is updated by RaiSim and rejects visualizer updates.
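
For example, a RaiSim-side depth update could look like the sketch below. It is a minimal sketch, not part of the example code: robot is assumed to be an ArticulatedSystem pointer, world a raisim::World pointer, and the sensor-set/sensor names ("front_cam", "depth") are illustrative and depend on your URDF.

auto depthCam = robot->getSensorSet("front_cam")->getSensor<raisim::DepthCamera>("depth");
depthCam->setMeasurementSource(raisim::Sensor::MeasurementSource::RAISIM);
depthCam->update(*world);                      // ray-cast against the current world state
const auto &depth = depthCam->getDepthArray(); // one depth value per pixel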

The update is performed at the specified frequency (i.e., update_rate in the sensor XML). When reading sensor data from another thread, guard the access with the sensor's built-in mutex via lockMutex()/unlockMutex():

depthSensor->lockMutex();
const auto &depth = depthSensor->getDepthArray();
// use depth here
depthSensor->unlockMutex();

Example

Example: https://github.com/raisimTech/raisim2Lib/blob/master/examples/src/server/sensor_suite.cpp

Using sensors with rayrai

rayrai can render RaiSim camera sensors in-process. The rayrai complete showcase (examples/src/rayrai/rayrai_complete_showcase.cpp) demonstrates how to wire RGB, depth, and LiDAR sensors together. The key steps are:

  1. Load a sensored URDF and fetch the sensors by name:

auto anymal = world->addArticulatedSystem(anymalUrdf);
auto lidar = anymal->getSensorSet("lidar_link")
               ->getSensor<raisim::SpinningLidar>("lidar");
auto rgbCam = anymal->getSensorSet("depth_camera_front_camera_parent")
               ->getSensor<raisim::RGBCamera>("color");
auto depthCam = anymal->getSensorSet("depth_camera_front_camera_parent")
                 ->getSensor<raisim::DepthCamera>("depth");
  2. Let rayrai control sensor updates by setting the measurement source to MANUAL:

lidar->setMeasurementSource(raisim::Sensor::MeasurementSource::MANUAL);
rgbCam->setMeasurementSource(raisim::Sensor::MeasurementSource::MANUAL);
depthCam->setMeasurementSource(raisim::Sensor::MeasurementSource::MANUAL);
  3. For RGB/depth cameras, render with rayrai every frame:

auto viewer = std::make_shared<raisin::RayraiWindow>(world, 1280, 720);
auto rgbCamera = std::make_shared<raisin::Camera>(*rgbCam);
auto depthCamera = std::make_shared<raisin::Camera>(*depthCam);

viewer->renderWithExternalCamera(*rgbCam, *rgbCamera, {});
viewer->renderWithExternalCamera(*depthCam, *depthCamera, {});
viewer->renderDepthPlaneDistance(*depthCam, *depthCamera);

renderWithExternalCamera renders with the sensor intrinsics and pose. If you need CPU access to the rendered buffers, call raisin::Camera::getRawImage to populate either the sensor buffers or a custom buffer. renderDepthPlaneDistance produces a linear depth texture for depth sensors.

  4. To read raw images on the CPU, allocate a buffer using the sensor properties and call raisin::Camera::getRawImage:

const int w = std::max(1, rgbCam->getProperties().width);
const int h = std::max(1, rgbCam->getProperties().height);
std::vector<char> rgbBuffer(size_t(w) * size_t(h) * 4);
rgbCamera->getRawImage(*rgbCam, raisin::Camera::SensorStorageMode::CUSTOM_BUFFER,
  rgbBuffer.data(), rgbBuffer.size(), false);

Depth uses a float buffer with width * height entries.
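
A corresponding depth buffer could be set up as in the sketch below. This is only a sketch mirroring the RGB call above; check the raisin::Camera header for the expected size unit and the meaning of the trailing flag before using it.

const int dw = std::max(1, depthCam->getProperties().width);
const int dh = std::max(1, depthCam->getProperties().height);
std::vector<float> depthBuffer(size_t(dw) * size_t(dh)); // one float per pixel
// pass depthBuffer.data() and its size to raisin::Camera::getRawImage, mirroring the RGB call above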

  5. For LiDAR, update the pose and request a scan from RaiSim, then transform points from the sensor frame to world coordinates:

lidar->updatePose();
const auto sensorPosW = lidar->getPosition();
const auto sensorRotW = lidar->getOrientation();
lidar->update(*world);
const auto& scanS = lidar->getScan();
// pW = R_W * pS + t_W

The example converts the scan into a rayrai PointCloud for visualization.
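
The per-point transform in the comment above can be spelled out as in the following sketch, assuming getScan() returns sensor-frame points as raisim::Vec<3>:

std::vector<raisim::Vec<3>> pointsW;
pointsW.reserve(scanS.size());
for (const auto &pS : scanS) {
  raisim::Vec<3> pW;
  pW.e() = sensorRotW.e() * pS.e() + sensorPosW.e(); // pW = R_W * pS + t_W
  pointsW.push_back(pW);
}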

Notes:

  • For camera sensors, use rayrai for updates (manual source) instead of calling sensor->update(*world).

  • The sensor update rate in the XML is still respected by RaiSim sensors; if you want to throttle updates in rayrai, skip rendering on frames that arrive sooner than the sensor's update period (see the sketch below).
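
A simple throttling sketch using the update-rate and time-stamp accessors documented below; getWorldTime() is raisim::World's simulation clock, and the variable names come from the snippets above:

const double period = 1.0 / rgbCam->getUpdateRate();
if (world->getWorldTime() - rgbCam->getUpdateTimeStamp() >= period) {
  viewer->renderWithExternalCamera(*rgbCam, *rgbCamera, {});
  rgbCam->setUpdateTimeStamp(world->getWorldTime());
}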

Parent Class API

class Sensor

Subclassed by raisim::DepthCamera, raisim::InertialMeasurementUnit, raisim::RGBCamera, raisim::SpinningLidar

Public Functions

inline const Vec<3> &getPosition()

The pose is updated using the updatePose() method, which can be called by the user. If the visualizer updates the measurement, the server calls this method before the measurement update.

Returns:

The position of the sensor frame

inline const Mat<3, 3> &getOrientation()

The pose is updated using the updatePose() method, which can be called by the user. If the visualizer updates the measurement, the server calls this method before the measurement update.

Returns:

The orientation of the sensor frame

inline const Vec<3> &getFramePosition()
Returns:

The position of the frame w.r.t. the nearest moving parent. It is used to compute the frame position in the world frame

inline const Mat<3, 3> &getFrameOrientation()
Returns:

The orientation of the frame w.r.t. the nearest moving parent. It is used to compute the frame orientation in the world frame

inline const Vec<3> &getPosInSensorFrame()
Returns:

The position of the frame w.r.t. the sensor frame, which is the frame of the parent joint

inline const Mat<3, 3> &getOriInSensorFrame()
Returns:

The orientation of the frame w.r.t. the sensor frame, which is the frame of the parent joint

inline const std::string &getName()
Returns:

The name of the sensor. This name is given in the sensor xml file

inline const std::string &getFullName()
Returns:

The full name of the sensor. This name includes the sensor set’s name

inline const std::string &getSensorSetModel()
Returns:

The sensor model identifier from the sensor set.

inline const std::string &getSerialNumber()
Returns:

The serial number of the sensor

inline void setSerialNumber(const std::string &serialNumber)

Set the serial number of the sensor.

Parameters:

serialNumber[in] Serial number string.

inline Type getType()
Returns:

The type of the sensor

inline double getUpdateRate() const
Returns:

The update frequency in Hz

inline double getUpdateTimeStamp() const

This method returns a negative value if the sensor has never been updated

Returns:

The time when the last sensor measurement was recorded

inline void setUpdateRate(double rate)

change the update rate of the sensor. The rate is given in Hz

Parameters:

rate[in] the update rate in Hz

inline void setUpdateTimeStamp(double time)

Set the time stamp for the last sensor measurement

Parameters:

time[in] the time stamp

virtual char *serializeProp(char *data) const = 0

Used by the server. Do not use it if you don’t know what it does. It serializes the sensor properties.

Parameters:

data[in] the pointer where the property is written

Returns:

the pointer where the next data should be written

inline virtual char *serializeMeasurements(char *data) const

Used by the server. Do not use it if you don’t know what it does. It serializes sensor measurements.

Parameters:

data[in] the pointer where the measurements are written

Returns:

the pointer where the next data should be written

void updatePose()

update the pose of the sensor from the articulated system

inline MeasurementSource getMeasurementSource()
Returns:

The measurement source.

inline void setMeasurementSource(MeasurementSource source)

change the measurement source

Parameters:

source[in] The measurement source.

inline size_t getFrameId() const

Get the id of the frame on which the sensor is attached

Returns:

frame id

inline void lockMutex()

Locks the sensor mutex. This can be used if you use raisim in a multi-threaded environment.

inline void unlockMutex()

Unlocks the sensor mutex. This can be used if you use raisim in a multi-threaded environment.

Depth Camera API

class DepthCamera : public raisim::Sensor

Public Functions

inline void updateRayDirections()

This method must be called after sensor properties are modified. It updates the ray directions

inline virtual char *serializeProp(char *data) const final

Used by the server. Do not use it if you don’t know what it does. It serializes the camera properties.

Parameters:

data[in] the pointer where the property is written

Returns:

the pointer where the next data should be written

inline void set3DPoints(const std::vector<raisim::Vec<3>> &data)

This method is only useful on the real robot (i.e., when you run raisim on the real robot). You can set the 3D point array manually

Parameters:

data[in] The 3d point data

inline const std::vector<float> &getDepthArray() const

This method returns garbage if the sensor has never been updated. Make sure that the update time stamp is positive (a negative value means that it has never been updated)

Returns:

depthArray
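
A minimal guard using getUpdateTimeStamp() from the base class:

if (depthCam->getUpdateTimeStamp() >= 0.) {
  const auto &depth = depthCam->getDepthArray(); // safe: the sensor has been updated at least once
}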

inline void setDepthArray(const std::vector<float> &depthIn)

Set the depth data manually. This can be useful on the real robot

Parameters:

depthIn[in] Depth values
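
A sketch of real-robot usage; the driver-side buffer and currentTime clock are illustrative:

const double currentTime = 0.; // replace with your own clock (illustrative)
const auto &prop = depthCam->getProperties();
std::vector<float> driverDepth(size_t(prop.width) * size_t(prop.height)); // filled by your depth-camera driver
depthCam->setDepthArray(driverDepth);
depthCam->setUpdateTimeStamp(currentTime);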

inline const std::vector<raisim::Vec<3>, AlignedAllocator<raisim::Vec<3>, 32>> &get3DPoints() const

This method works only if the measurement source is Raisim

Returns:

3D points in the world frame

inline DepthCameraProperties &getProperties()

Get the depth camera properties. MUST call updateRayDirections() after modifying the sensor properties.

Returns:

the depth camera properties
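
For example, resizing the depth image (values are illustrative) requires rebuilding the ray directions afterwards:

auto &prop = depthCam->getProperties();
prop.width  = 320;  // pixels (illustrative)
prop.height = 240;  // pixels (illustrative)
prop.hFOV   = 1.57; // radians (illustrative)
depthCam->updateRayDirections(); // mandatory after changing the properties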

virtual void update(class World &world) final

Update the sensor value

Parameters:

world[in] the world that the sensor is in

void depthToPointCloud(const std::vector<float> &depthArray, std::vector<raisim::Vec<3>> &pointCloud, bool isInSensorFrame = false) const

Convert the depth values to 3D coordinates

Parameters:
  • depthArray[in] input depth array to convert

  • pointCloud[out] output point cloud

  • isInSensorFrame[in] True to output points in the sensor frame, false for the world frame
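
A minimal usage sketch:

std::vector<raisim::Vec<3>> cloud;
depthCam->depthToPointCloud(depthCam->getDepthArray(), cloud, false); // false: points in the world frame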

inline const auto &getPrecomputedRayDir() const
Returns:

Precomputed ray directions in the sensor frame.

struct DepthCameraProperties

Public Types

enum class NoiseType : int

Noise model for the generated measurements (currently not used).

Values:

enumerator GAUSSIAN
enumerator UNIFORM
enumerator NO_NOISE

Public Members

std::string name

Sensor name (without the sensor set prefix).

std::string full_name

Sensor name including the sensor set prefix.

int width

Image width in pixels.

int height

Image height in pixels.

int xOffset

Horizontal pixel offset for the capture window.

int yOffset

Vertical pixel offset for the capture window.

double clipNear

Near clipping plane in meters.

double clipFar

Far clipping plane in meters.

double hFOV

Horizontal field of view in radians.

double mean

Noise mean.

double std

Noise standard deviation.

std::string format

Pixel format (currently fixed to 16-bit depth).

RGB Camera API

class RGBCamera : public raisim::Sensor

Public Functions

inline virtual char *serializeProp(char *data) const final

Used by the server. Do not use it if you don’t know what it does. It serializes the camera properties.

Parameters:

data[in] the pointer where the property is written

Returns:

the pointer where the next data should be written

inline RGBCameraProperties &getProperties()

Return the camera properties. Any updates to the properties should be made before connecting to the server

Returns:

camera properties

inline const std::vector<char> &getImageBuffer() const

RGB image in BGRA format. It is updated only if the measurement source is the visualizer and RaisimUnreal is used

Returns:

The image data in char vector

inline void setImageBuffer(const std::vector<char> &rgbaIn)

Set RGBA data. This method is only useful on the real robot.

Parameters:

rgbaIn[in] rgba
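
A sketch of real-robot usage, assuming your camera driver provides 4-byte-per-pixel frames at the configured resolution:

const auto &prop = rgbCam->getProperties();
std::vector<char> frame(size_t(prop.width) * size_t(prop.height) * 4); // filled by your camera driver
rgbCam->setImageBuffer(frame);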

inline virtual void update(class World &world) final

Update hook required by the Sensor interface. RaiSim does not render RGB images itself (see the Update section above); use a visualizer or manual updates instead.

Parameters:

world[in] World instance containing the sensor’s parent system.

struct RGBCameraProperties

Public Types

enum class NoiseType : int

Noise model for the generated measurements.

Values:

enumerator GAUSSIAN
enumerator UNIFORM
enumerator NO_NOISE

Public Members

std::string name

Sensor name (without the sensor set prefix).

std::string full_name

Sensor name including the sensor set prefix.

int width

Image width in pixels.

int height

Image height in pixels.

int xOffset

Horizontal pixel offset for the capture window.

int yOffset

Vertical pixel offset for the capture window.

double clipNear

Near clipping plane in meters.

double clipFar

Far clipping plane in meters.

double hFOV

Horizontal field of view in radians.

double mean

Noise mean.

double std

Noise standard deviation.

std::string format

Pixel format (currently fixed to RGBA8).

Inertial Measurement Unit API

class InertialMeasurementUnit : public raisim::Sensor

Public Functions

inline virtual char *serializeProp(char *data) const final

Used by the server. Do not use it if you don’t know what it does. It serializes the sensor properties.

Parameters:

data[in] the pointer where the property is written

Returns:

the pointer where the next data should be written

inline virtual char *serializeMeasurements(char *data) const

Used by the server. Do not use it if you don’t know what it does. It serializes sensor measurements.

Parameters:

data[in] the pointer where the measurements are written

Returns:

the pointer where the next data should be written

inline const Eigen::Vector3d &getLinearAcceleration() const

Get the linear acceleration measured by the sensor. In simulation, this is updated every update loop (set by the updateRate). On the real robot, this value should be set externally by setLinearAcceleration()

Returns:

linear acceleration

inline const Eigen::Vector3d &getAngularVelocity() const

Get the angular velocity measured by the sensor. In simulation, this is updated every update loop (set by the updateRate). On the real robot, this value should be set externally by setAngularVelocity()

Returns:

angular velocity

inline const Vec<4> &getOrientation() const

Get the orientation measured by the sensor as a quaternion (w, x, y, z). In simulation, this is updated every update loop (set by the updateRate). On the real robot, this value should be set externally by setOrientation()

Returns:

orientation quaternion

inline void setLinearAcceleration(const Eigen::Vector3d &acc)

This set method makes sense only on the real robot.

Parameters:

acc – acceleration measured from sensor

inline void setAngularVelocity(const Eigen::Vector3d &vel)

This set method makes sense only on the real robot.

Parameters:

vel – angular velocity measured from sensor

inline void setOrientation(const Vec<4> &orientation)

This set method makes sense only on the real robot.

Parameters:

orientation – orientation of the imu sensor in quaternion (w,x,y,z convention)
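
A sketch of real-robot usage; the measurement values are illustrative and imu is assumed to be an InertialMeasurementUnit pointer fetched from a sensor set:

Eigen::Vector3d acc(0.0, 0.0, 9.81), gyro(0.0, 0.0, 0.0);                    // from your IMU driver (illustrative values)
raisim::Vec<4> quat; quat[0] = 1.; quat[1] = 0.; quat[2] = 0.; quat[3] = 0.; // (w, x, y, z)
imu->setLinearAcceleration(acc);
imu->setAngularVelocity(gyro);
imu->setOrientation(quat);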

inline ImuProperties &getProperties()
Returns:

The IMU configuration properties.

inline virtual void update(class World &world) final

The update is done by the ArticulatedSystem class, so this method is not necessary

struct ImuProperties

Public Types

enum class NoiseType : int

Noise model for the generated measurements.

Values:

enumerator GAUSSIAN
enumerator UNIFORM
enumerator NO_NOISE

Public Members

std::string name

Sensor name (without the sensor set prefix).

std::string full_name

Sensor name including the sensor set prefix.

double maxAcc

Maximum measurable linear acceleration (m/s^2).

double maxAngVel

Maximum measurable angular velocity (rad/s).

double mean

Noise mean.

double std

Noise standard deviation.