CameraModels.CameraCalibrationT – Type
CameraCalibrationT

A Union type for users to implement against both the struct and mutable struct definitions of CameraCalibration.
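A Union alias like this lets one method body serve both the immutable and mutable definitions. A minimal self-contained sketch of the pattern (the type and field names below are stand-ins, not the actual CameraModels definitions):

```julia
# Stand-in types illustrating the CameraCalibrationT pattern:
struct Calib            # analogous to the immutable CameraCalibration
    f_px::Float64
end
mutable struct MCalib   # analogous to CameraCalibrationMutable
    f_px::Float64
end

# a Union alias analogous to CameraCalibrationT
const CalibT = Union{Calib, MCalib}

# a single method implementation serves both definitions
focallength(c::CalibT) = c.f_px
```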

CameraModels.CameraCalibration – Type
struct CameraCalibration{R<:Real, N} <: AbstractCameraModel

Standard pinhole camera model with distortion parameters (aka camera intrinsics).

Notes:

  • Image origin assumed as top-left.
  • Keeping with Images.jl:
    • the width of the image spans the matrix columns, left to right.
    • the height of the image spans the matrix rows, top to bottom.
    • E.g. mat[i,j] == img[i,j], where row i runs over the height h and column j runs over the width w.
      • This is to leverage the unified Julia Arrays infrastructure, incl vectors, view, Static, CPU, GPU, etc.
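The indexing convention above can be checked with a plain Julia array standing in for an image (the sizes here are illustrative):

```julia
# A 720x1280 "image": rows are height (top to bottom),
# columns are width (left to right), origin at top-left.
img = zeros(Int, 720, 1280)

h, w = size(img)   # (height, width) == (rows, cols) == (i, j)

# pixel at row i = 10 (10 down from the top), column j = 20 (20 in from the left)
img[10, 20] = 1
```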

Legacy Comments:

The pinhole camera model is the simplest.

Notes

  • https://en.wikipedia.org/wiki/Pinhole_camera
  • Standard Julia Images.jl-frame convention is size(img) <==> (i,j) <==> (height,width) <==> (y,x),
    • Common camera-frame in computer vision and robotics is (x,y) <==> (width,height) <==> (j,i),
    • Using the top left corner of the image as (0,0) in all cases.
    • Direct comparison with the OpenCV convention is:
      • (x,y) [CamXYZ] <==> (j,i) [Images.jl] <==> (u,v) [OpenCV] – look very carefully at (u,v) <==> (j,i) in image-frame
  • Always follow right hand rule for everything.
  • An abstract type AbstractCameraModel is provided to develop implementations against struct and mutable struct types.

DevNotes

  • https://en.wikipedia.org/wiki/Distortion_(optics)

Also see: AbstractCameraModel CameraCalibrationMutable, (TODO: ProjectiveCameraModel)

CameraModels.CameraCalibrationMutable – Method
CameraCalibrationMutable(img)

Constructor helper assuming you just have a camera image and need to start somewhere for a basic camera model.

CameraModels.backproject – Method
backproject(model, px_coord)

Backproject from an image into a world scene.

Return the direction in camera coordinates corresponding to the pixel coordinate px_coord. This currently ignores any tangential distortion between the lens and the image plane.

Deprecates:

  • yakir12: pixel2ray: @deprecate pixel2ray(model, px) backproject(model, px)[[1;3;2]]

Also see: project

CameraModels.canreproject – Method
canreproject(camera::CameraModel)

Confirm whether point2pixel is implemented for this camera model.

CameraModels.direction – Function
direction(ray)

Return the direction of the ray as a StaticArraysCore.SVector{3, Float64}.

CameraModels.height – Function
rows(model::AbstractCameraModel)

Returns the height of the camera sensor.

CameraModels.intersectRayToPlane – Method
intersectRayToPlane(
    c_H_a,
    a_F,
    l_nFL,
    l_FL;
    M,
    R0,
    l_T_ex,
    ex_T_c
)

Ray trace from pixel coords to a floor in the local level reference frame, which is assumed aligned with gravity. Returns the intersection point in local level frame coordinates.

Notes

  • Implemented against local level to allow easier local or world reference usage,
    • Just assume world is local level, i.e. l_nFL = w_nFL and l_FL = w_FL.
    • User must provide (assumed dynamic) local level transform via l_T_ex – see example below!
  • Coordinate chain used is from ( pixel-array (a) –> camera (c) –> extrinsic (ex) –> level (l) )
    • c_H_a from pixel-array to camera (as homography matrix)
    • a_F feature in array pixel frame
    • l_T_ex extrinsic in body (or extrinsic to local level), SE3 Manifold element using ArrayPartition
    • ex_T_c camera in extrinsic (or camera to extrinsic)
  • line-to-plane intersect computation done in the local level frame
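The line-to-plane intersect in the last bullet can be sketched in plain Julia; ray_plane_intersect is a hypothetical helper for illustration (the package's own routine is CameraModels.intersectLineToPlane3D):

```julia
using LinearAlgebra

# Intersect a ray (origin o, direction d) with the plane through point p0
# with normal n, all expressed in the same (local level) frame:
#   t = dot(n, p0 - o) / dot(n, d),  intersect = o + t*d
function ray_plane_intersect(o, d, n, p0)
    denom = dot(n, d)
    abs(denom) < eps() && error("ray is parallel to the plane")
    t = dot(n, p0 - o) / denom
    return o .+ t .* d
end

# e.g. a ray from 2 m above the floor, pointing straight down
ray_plane_intersect([0.0, 0, 2], [0.0, 0, -1], [0.0, 0, 1], [0.0, 0, 0])
```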

Example

# Assume body coords x,y,z == fwd,prt,upw
using Manifolds, RecursiveArrayTools # assumed packages for M, Mr, ArrayPartition

M  = SpecialEuclidean(3)    # pose manifold (assumed definition)
Mr = SpecialOrthogonal(3)   # rotation manifold (assumed definition)
R0 = [1 0 0; 0 1 0; 0 0 1.] # identity rotation

# Camera setup
f = 800.0        # pixels
ci,cj = 360,640  # assuming 720x1280 image
# going from imaging array to camera frame
c_H_a = [0 1 -cj; 1 0 -ci; 0 0 f] # camera matrix

# body to extrinsic of camera -- e.g. camera looking down 0.2 and left 0.2
# local level to body to extrinsic transform
l_T_b = ArrayPartition([0;0;0.], R0)
b_T_ex = ArrayPartition([0;0;0.], exp_lie(Mr, hat(Mr, R0, [0;0.2;0.2])))
l_T_ex = compose(M, l_T_b, b_T_ex) # this is where body reference is folded in.

# FLOOR in local level coordinates, normal and point
l_nFL = [0; -0.05; 1.]
l_FL = [0; 0; -2.]

# feature of interest in array pixel coordinates (homogeneous, assumed value)
a_Forb = [480; 600; 1.]

# Ray trace to plane
l_Forb = intersectRayToPlane(
  c_H_a,
  a_Forb,
  l_nFL,
  l_FL;
  l_T_ex
)

See also: CameraModels.intersectLineToPlane3D

CameraModels.pixel2ray – Function
pixel2ray(cameramodel::AbstractCameraModel, pixelIndex::PixelIndex)

Returns the ray in space (origin + direction) corresponding to this pixelIndex.

CameraModels.point2pixel – Function
point2pixel(camera::AbstractCameraModel, pointincamera::StaticArraysCore.SVector{3, Float64})

Returns the pixel location onto which the 3D coordinate pointincamera is projected.

CameraModels.project – Method
project(model, r_P; c_T_r)

Project a world scene onto an image.

Return the projection of the world point r_P into image pixel coordinates. This currently ignores any tangential distortion between the lens and the image plane.

Notes

  • r_P is a point in the reference frame, transformed into the camera's reference frame:
    • c_P = c_T_r * r_P

Deprecates:

  • yakir12: point2pixel: @deprecate point2pixel(model, pt) project(model, pt[[1;3;2]])

Also see: backproject
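For intuition, the ideal pinhole projection underlying c_P = c_T_r * r_P can be written out directly; the helper name and the focal length and principal point values below are assumptions for illustration, not part of this package's API:

```julia
# Ideal pinhole projection of a camera-frame point (x, y, z), z > 0:
#   u = f*x/z + cx,  v = f*y/z + cy   (no lens distortion)
function pinhole_project(c_P; f = 800.0, cx = 640.0, cy = 360.0)
    x, y, z = c_P
    return (f * x / z + cx, f * y / z + cy)
end

pinhole_project([0.0, 0.0, 2.0])  # a point on the optical axis
```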

CameraModels.radialDistortion! – Method
radialDistortion!(cc, dest, src)

Slightly generalized radial distortion computation, currently limited to StaticArrays.jl on CPU.

Notes

  • Make sure the dest image is large enough to encapsulate the resulting image after un-distortion.
  • TODO: extend to utilize GPUs.

Example

using Images, FileIO, CameraModels

# load the image
img = load("myimg.jpg")

# TODO: store radialDistortion in CameraCalibration

Reference, from Wikipedia: https://en.wikipedia.org/wiki/Distortion_(optics)

  • (xd, yd) = distorted image point as projected on the image plane using the specified lens,
  • (xu, yu) = undistorted image point as projected by an ideal pinhole camera,
  • (xc, yc) = distortion center,
  • r = sqrt( (xd - xc)^2 + (yd - yc)^2 ).

\[xu = xc + (xd - xc) / (1 + K1*(r^2) + K2*(r^4) + ...)\]
\[yu = yc + (yd - yc) / (1 + K1*(r^2) + K2*(r^4) + ...)\]
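The formula can be implemented directly; the function name undistort and its keyword defaults below are illustrative, not part of the package API:

```julia
# Radial undistortion per the formula above, with distortion center (xc, yc)
# and radial coefficients K1, K2 (higher orders omitted for brevity):
function undistort(xd, yd; xc = 0.0, yc = 0.0, K1 = 0.0, K2 = 0.0)
    r2 = (xd - xc)^2 + (yd - yc)^2
    s = 1 + K1 * r2 + K2 * r2^2
    return (xc + (xd - xc) / s, yc + (yd - yc) / s)
end

undistort(1.0, 0.0)  # with zero coefficients the point is unchanged
```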

DevNotes (Contributions welcome):

  • TODO manage image clamping if dest is too small and data should be cropped out.
  • TODO buffer radii matrix for better reuse on repeat image size sequences
  • TODO dispatch with either CUDA.jl or AMDGPU.jl <:AbstractArray objects.
  • TODO use Tullio.jl with multithreading and GPU
  • TODO check if LoopVectorization.jl tools like @avx help performance

CameraModels.sensorsize – Function
sensorsize(model::AbstractCameraModel)

Return the size of the camera sensor. By default this calls columns(model) and rows(model) to build an SVector{2}:

sensorsize(cameramodel::AbstractCameraModel) = SVector{2}(columns(cameramodel), rows(cameramodel))

CameraModels.updirection – Function
updirection(camera::AbstractCameraModel)

Return the updirection of this camera model.

CameraModels.width – Function
columns(model::AbstractCameraModel)

Returns the width of the camera sensor.