3D cameras are matrix sensors that estimate depth and capture range maps. Similar to color cameras, they provide images of the surrounding environment as seen from a single point of view. Alongside the color information, three-dimensional (3D) cameras provide depth measurements by exploiting visual information. Different techniques exist to measure 3D shapes, mainly by triangulating keypoints from two points of view or by directly estimating the range (Fig. 2.1). In this section, we present the theoretical foundations behind common techniques used in 3D cameras. First, we introduce the linear camera model and the non-linearity introduced by the optical lens (Sect. 2.1). Second, we provide the theoretical background for estimating depth through triangulation (Sect. 2.2) and Time-of-Flight (ToF) (Sect. 2.3). Last, we elaborate on the transformation of depth maps into point clouds and on the integration of color information.
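The depth-map-to-point-cloud transformation mentioned above can be sketched with the standard pinhole back-projection, X = (u - c_x) Z / f_x and Y = (v - c_y) Z / f_y. The following is a minimal illustration, not the specific procedure of this section; the function name and the intrinsic parameter values are assumptions for the example.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no valid depth measurement (z == 0).
    return points[points[:, 2] > 0]

# Toy example: a 2x2 depth map with hypothetical intrinsics.
depth = np.array([[1.0, 1.0],
                  [2.0, 0.0]])  # one invalid pixel
pts = depth_to_point_cloud(depth, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
```

Color integration then amounts to sampling the RGB image at the same (u, v) locations, once the two sensors are registered in a common reference frame.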