Imagine a bucolic scene on a clear sunny day at the equator, sand warmed by the tropical sun with a typical irradiance ($E$) of about 1000 watts per square meter. As discussed earlier we could express this quantity as illuminance in lumens per square meter ($lm/m^2$) – or equivalently as a certain number of photons per second ($\Phi$) over an area of interest ($A$)

(1)  $E = \dfrac{\Phi}{A}$

How many photons per unit area can we expect on the camera's image plane (irradiance $E_i$)?

In answering this question we will discover the Camera Equation as a function of opening angles – and set the stage for the next article on lens pupils. By the way, all quantities in this article depend on wavelength and position in the Field of View; this dependence will be assumed in the formulas to keep them readable. See Appendix I for a slightly more correct version of Equation (1).
Spherical Cones and Lambertian Reflectance
Irradiance is not directional, meaning that the photon flux on the surface of interest can arrive from any direction or even from multiple/extended sources, for example an overcast sky. What matters in this article is the number of photons incident on the relevant area of the object of interest, so that's what we will refer to as $E_o$, the subscript $o$ indicating a quantity referred to the object.
Think of $E_o$ as the input to a photodiode flush with the surface, counting every photon impinging on it, regardless of where it came from. The assumption is that in the limit every small differential area $dA$ is flat, so flux is collected from the hemisphere on top of it.
The object transmits/absorbs some photons and reflects some depending on its morphology and physical characteristics. Those reflected propagate in straight lines in air in all directions. Neighboring photons reflected from small uniform portions ($dA$) on the surface of the object form conical volumes. The direction and relative number of photons within each cone depend on the object's physical characteristics, their distribution the result of so-called specular, diffused or mixed reflection.
Figure 2 shows photons in air reflected from such a small sample area $dA$, the specific spherical cone chosen because it is the one that points directly towards the lens of the camera, which is what this article is concerned with.

We will assume that the object presents an ideal Lambertian surface with effective reflectance $\rho$ – so a total number of photons per unit area equivalent to $\rho E_o$ will be reflected diffusely in a predictable fashion, in the set of all directions contained within the hemisphere above the small uniform reference area $dA$.
For a given conical half opening angle ($\theta$) the number of photons reflected into the relative cone will decrease the more the observed cone axis is tilted away from the direction perpendicular to the surface (its normal $\vec{n}$). In fact with Lambertian reflectors the number of encompassed photons decreases with the cosine of the angle ($\alpha$) between the surface normal and the axis of the cone – because less and less projected area is visible from the direction of interest as tilt increases. This is known as Lambert's Cosine Law.

Figure 3 shows the relative number of photons that can be expected in the shown directions within the central slice of the set of all hemispherical directions: the longer the red line, the more the photons reflected in that direction. It turns out that many natural surfaces tend to behave this way when incident light is within about 40 degrees of the normal (the angle of incidence shown in Figure 1). Beyond that reflections become more and more directional, hence mixed.
Photons Reflect into Spherical Caps
Since the lengths of the red arrows in Figure 3 represent a relative number of photons reflected in the given direction from the same small area on the object, clearly the larger the opening angle of the cone of interest (also known as its angular aperture), the larger the proportion of total reflected photons that will be part of it.
If we assume that all photons spreading out inside the cone towards the lens depart from the same relatively small area $dA$ on the object at the same time – like pellets spreading out from a shotgun, all traveling at equal velocity in a straight line – by the time they reach the lens they will have taken the shape of a spherical cap.
Figure 4 below shows an (almost) two-dimensional view of a spherical cap, looking like the arc of a circle with radius $r$ subtended by angle $2\theta$. With $\theta$ in radians the length of such an arc is $2\theta r$ by definition.

In three dimensions the area of the spherical cap on the cone would simply be the semi-arc $\theta r$ rotated through 360 degrees – the arriving photons will be spread out evenly over it. Its area is

(2)  $A_{cap} = 2\pi r^2(1 - \cos\theta) \;\approx\; \pi\theta^2 r^2 \;\approx\; \pi r^2\sin^2\theta$
The bottom two approximations are valid for paraxial photography (see Appendix III for exactly how valid). We will see further down that the last one is most relevant to radiation transfer.
Solid Angles are Normalized Spherical Caps
Just like radians are normalized for $r$ in one dimension it is useful to normalize the area of the cap for $r^2$, a quantity that is then referred to as solid angle $\Omega$

(3)  $\Omega = \dfrac{A_{cap}}{r^2}$

Its units are radians rotated through 360°, formally steradians ($sr$). Per Equation (2)

(4)  $\Omega = 2\pi(1 - \cos\theta) \;\approx\; \pi\theta^2 \;\approx\; \pi\sin^2\theta$
We can see intuitively why this would be useful in our case: for a given irradiance $E_o$ interacting with a Lambertian surface, the number of photons $\Phi$ instantaneously reflected out through the cone from a small uniform area $dA$ on the object towards the lens depends only on the opening angle $\theta$ – hence solid angle $\Omega$ – but stays otherwise the same from when it departs the surface as it spreads out into an enlarging spherical cap on its way to the lens.
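The relationships in Equations (2) and (4) are easy to check numerically. Here is a minimal Python sketch (function names are mine, not from the article) comparing the exact geometric solid angle with its two small-angle forms:

```python
import math

def solid_angle_exact(theta):
    """Geometric solid angle of a cone with half opening angle theta
    (radians): spherical cap area divided by r^2, per Equation (4)."""
    return 2 * math.pi * (1 - math.cos(theta))

def solid_angle_paraxial(theta):
    """Small-angle approximation pi * theta^2."""
    return math.pi * theta ** 2

def solid_angle_projected(theta):
    """Projected ('effective') solid angle pi * sin^2(theta), the form
    most relevant to radiation transfer."""
    return math.pi * math.sin(theta) ** 2

theta = math.radians(5)  # a modest cone
for f in (solid_angle_exact, solid_angle_paraxial, solid_angle_projected):
    print(f.__name__, f(theta))
```

For small angles the three agree closely; as the cone widens, the paraxial form overestimates and the projected form underestimates the geometric one, which is the point the appendices quantify.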
Radiance and Luminance
The number of photons ($\Phi$) reflected per small area ($dA$) on the object per normalized spherical cap (the solid angle $\Omega$) in the direction of interest is denoted radiance ($L$; in photometry, Luminance). We can write this definition generically as follows

(5)  $L = \dfrac{d^2\Phi}{dA\,\cos\alpha\;d\Omega} \quad \left[\dfrac{photons/s}{m^2 \cdot sr}\ \text{or}\ \dfrac{cd}{m^2}\right]$

with $\alpha$ the angle between the surface normal and the direction of interest;
the latter units apply to the Luminance familiar to photographers. Since a Lambertian surface reflects all incident photons per unit area ($E_o$) into the hemisphere above itself subject to reflectance $\rho$, we can calculate constant $L_o$ by integrating over all possible solid angles (I'll spare you the gory details[1]) obtaining

(6)  $L_o = \dfrac{\rho E_o}{\pi}$
the subscript $o$ indicating quantities at the object as seen from the lens. Radiance/Luminance $L_o$ from a Lambertian surface is indeed only a function of reflected incident total power density $\rho E_o$ and is therefore the same in all directions and at all distances.
It's easy to prove to oneself that radiance/Luminance is conserved in radiation transfer: just observe whether an image on a monitor appears equally bright from 300mm or from an order of magnitude farther away; or whether a quasi-Lambertian gray card looks equally bright when tilted at different angles, within its limits of Lambertian diffuse reflection. This property of Luminance comes in handy in the discussion that follows.
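Equation (6) can be sketched numerically with the beach figures from the introduction; the 18% reflectance below is an assumed, gray-card-like value for illustration, not one from the article:

```python
import math

def lambertian_radiance(irradiance, reflectance):
    """Radiance/Luminance of an ideal Lambertian surface per Equation (6):
    L = rho * E / pi, the same in all directions and at all distances."""
    return reflectance * irradiance / math.pi

# ~1000 W/m^2 of tropical sun on the sand, assumed 18% effective reflectance
print(lambertian_radiance(1000.0, 0.18))  # W/m^2/sr
```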
The Camera’s Perspective
We now understand that the number of photons reflected/emitted towards the lens is by the definition in Equation (5)

(7)  $\Phi_o = L_o \cdot dA\cos\alpha \cdot \Omega_o$

with subscript $o$ indicating quantities on the object side and

$L_o$ the Radiance/Luminance from the object, sometimes referred to as 'brightness' ($\frac{\rho E_o}{\pi}$ for a Lambertian reflector);

$dA\cos\alpha$ because the small uniform reflecting area $dA$ is 'foreshortened' by the cosine of the angle $\alpha$ between its normal and the direction of propagation, since the recipient of those photons (say the camera) sees less of it the more it is tilted with respect to the surface normal;

$\Omega_o$ the solid angle, a function of opening angle $\theta_o$, from the object plane to the edge of the pupil of the lens.
At typical photographic distances, all photons from the uniform small reflecting area of interest $dA$ in the neighborhood of the vertex of the lens-subtending cone are assumed to end up in the same spherical cap; in the limit this assumption is exact.

Assuming no losses through the optics, the same number of photons that arrive at the lens ($\Phi_o$) will leave it on the way to the image plane ($\Phi_i$), with subscript $i$ indicating quantities on the image side.
In air, the setup there is the reciprocal (the conjugate) of that on the object side, with $L$ being constant and the solid angle $\Omega_i$ a function of opening angle $\theta_i$. Per geometrical optics, area $dA_i$ on the image plane near the vertex of the cone of arriving photons will be a (de)magnified version of area $dA_o$ on the object from which they left, with lens magnification $m$ typically less than one.
Refer to Appendix II for the consequences of these facts that lead to the concept of Throughput or Etendue.
Bringing it All Together: the Camera Equation
So assuming an ideal lens with no losses and geometrical solid angle $\Omega_i$, the total number of photons per second arriving on the sensing plane in air ($\Phi_i$) is

(8)  $\Phi_i = \Phi_o = L \cdot dA_i \cdot \Omega_i$

because of conservation of energy and reciprocity. Since radiance/Luminance is conserved, $L_i = L_o = L$, and with paraxial approximation the number of photons per unit area near the vertex of the cone on the image plane (sensor irradiance $E_i$) is from the definitions in Equations 1 and 8

$E_i = \dfrac{\Phi_i}{dA_i} = L\,\Omega_i$

However, when dealing with radiation transfer the geometric solid angle is only valid with the paraxial approximation because, even on the optical axis, energy striking the sensor arrives from a set of directions bounded by the geometric solid angle cone with varying obliquity, hence foreshortening, resulting in an effective solid angle $\omega_i$. Therefore

(9)  $E_i = L\,\omega_i$
As Brad Paul points out in an insightful comment at bottom, in the center of the sensor $\omega_i = \pi\sin^2\theta_i$ and, since lens working f-number in well corrected lenses that meet the Abbe Sine Condition is $N_w = \frac{1}{2\sin\theta_i}$ in air, the effective solid angle for irradiance is

(10)  $\omega_i = \pi\sin^2\theta_i = \dfrac{\pi}{4N_w^2}$

Irradiance in the center of the image plane then becomes a photographically meaningful

(11)  $E_i = \dfrac{\pi L}{4N_w^2}$
This is known as the Camera Equation, valid for a perfect lens with a circular aperture on the optical axis; see Appendix I for a slightly less perfect lens.
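The Camera Equation of Equation (11) can be sketched in a few lines of Python; the function name is mine, and the radiance value is arbitrary:

```python
import math

def sensor_irradiance(radiance, n_w):
    """On-axis image plane irradiance per the Camera Equation (11):
    E_i = pi * L / (4 * N_w^2), ideal lossless lens, circular aperture."""
    return math.pi * radiance / (4 * n_w ** 2)

# Opening up by one stop (dividing N_w by sqrt(2)) doubles irradiance:
L = 100.0  # arbitrary radiance
print(sensor_irradiance(L, 4.0), sensor_irradiance(L, 4.0 / math.sqrt(2)))
```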
Since Exposure is $H = E_i\,t$, with $t$ exposure time in seconds,

(12)  $H = \dfrac{\pi L\,t}{4N_w^2}$

a formula photographers know well and apply daily since it is the basis of the Exposure Value system.
With Lambertian reflectors the last two equations can be expressed as a function of incoming interacting irradiance $E_o$ instead of $L$ thanks to Equation (6), answering the question posed at the beginning of this article. Therefore, with a reflector parallel to a perfect lens, the photon flux per unit area in the center of the sensor is

(13)  $E_i = \dfrac{\rho\,E_o}{4N_w^2}$

with $\rho$ surface reflectance, $E_o$ photon flux per unit area interacting with the object (irradiance/illuminance) and $N_w$ the lens working f-number. As mentioned up top these are typically spectral quantities valid both in radiometric and photometric contexts. A discussion of pupils is next.
PS. A similar result was obtained straight from a light source’s Radiant Exitance in the Appendix of the article on Photons Emitted From a Light Source.
Appendix I – Slightly More Formal Equations
As mentioned, incoming energy/photon flux ($E$) depends on the spectral power distribution of the illuminant, hence on wavelength ($\lambda$). So Irradiance in Equation (1) becomes

(14)  $E = \displaystyle\int E_\lambda \, d\lambda$
The limits of integration for wavelength in photography and colorimetry are the visible range, usually taken to be 380-780nm.
The incoming energy is reflected towards the camera by an object at the scene according to spectral reflectance $\rho(\lambda)$. In daylight, Luminance in Equation (7) is reflected flux weighted by the photopic Luminosity Function $V(\lambda)$ per unit area per the solid angle subtended by the lens aperture at the object

(15)  $L_v = \dfrac{683}{\pi}\displaystyle\int \rho(\lambda)\,E_\lambda\,V(\lambda)\,d\lambda \quad \left[\dfrac{cd}{m^2}\right]$
A great lens in real life may be close to unaberrated but it will have some transmission losses ($T$) that in stops are often almost negligible today. It will also most likely have some mechanical vignetting ($v$) as a function of position on the imaging plane, defined by the angle of view $\theta$, with $\theta$ equal to zero on the optical axis – and something approaching $\cos^4\theta$ fall-off as light reaches the corners of the sensing area.

So with Luminance conserved on the image side we then have the updated Camera Equation

(16)  $E_i(\theta) = \dfrac{\pi L\,T\,v(\theta)\cos^4\theta}{4N_w^2}$
Similarly for Exposure in Equation (12).
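The $\cos^4\theta$ fall-off term in Equation (16) is easy to quantify; this sketch (ignoring transmission and mechanical vignetting, which are lens-specific) converts it to stops at a few field angles:

```python
import math

def cos4_falloff_stops(theta_deg):
    """Natural cos^4(theta) fall-off of Equation (16) expressed in stops:
    -log2(cos^4(theta)); zero on the optical axis."""
    return -math.log2(math.cos(math.radians(theta_deg)) ** 4)

for theta in (0, 10, 20, 30, 40):
    print(theta, round(cos4_falloff_stops(theta), 2))
```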
Appendix II – Throughput and Magnification
As we are assuming an ideal imaging system with no losses, the number of photons reflected by the object towards the lens is the same that hits the sensing plane, that is $\Phi_o = \Phi_i$, or

(17)  $L_o\,dA_o\,\omega_o = L_i\,dA_i\,\omega_i$

with $\omega_o$ and $\omega_i$ the effective solid angles. Since radiance/Luminance is conserved, $L_o$ and $L_i$ cancel out and we are left with

(18)  $dA_o\,\omega_o = dA_i\,\omega_i$
The quantities on either side of the Equation above are known as Throughput (or Etendue, or a multitude of other names depending on the discipline). Throughput is obviously also conserved in radiation transfer. It is independent of shutter speed because exposure time would be the same on both sides of Equation (17) so it would also cancel out.
We know from Equation (10) that effective solid angle $\omega$ is equal to $\frac{\pi}{4N_w^2}$, with $N_w$ the lens working f-number, so in photography it can be expressed as follows:

(19)  $\text{Throughput} = dA\,\omega = \dfrac{\pi\,dA}{4N_w^2}$

$dA$ can be the size of a pixel for instance.
In a photographic context Etendue reflects the fact that, for a given radiance/luminance, the photons from a small area $dA_o$ on the object are ideally the same that make up its image $dA_i$ on the sensing plane. Rearranging terms in Equation (18) we get

(20)  $\dfrac{dA_i}{dA_o} = \dfrac{\omega_o}{\omega_i}$

and if we assume that the small projected area on the object is circular or square with height $h$

(21)  $\left(\dfrac{h_i}{h_o}\right)^2 = \dfrac{\omega_o}{\omega_i} = \dfrac{\sin^2\theta_o}{\sin^2\theta_i}$
In other words lens magnification $m$ is equal to

(22)  $m = \dfrac{h_i}{h_o} = \dfrac{\sin\theta_o}{\sin\theta_i}$

with half opening angles $\theta_o$ and $\theta_i$ in radians. The preferred expression for working f-number for well corrected lenses in air that takes into consideration the Abbe Sine Condition is

(23)  $N_w = \dfrac{1}{2\sin\theta_i} = \dfrac{1}{2\,NA_i}$

with $NA_i$ the image-referred numerical aperture. Then

(24)  $N_w = \dfrac{m}{2\sin\theta_o} = \dfrac{m}{2\,NA_o} = m\,N_o$

with $NA_o$ and $N_o$ the object-referred numerical aperture and working f-number respectively.
If the photographer increases lens magnification, all else equal including the number of photons, image area will increase, resulting in decreased photon density there (and vice versa).
Photon density in photons per unit area is Exposure per Equation (12), in this case controlled solely by $m$ since $L$ and $dA_o$ are assumed to stay constant. In other words, a change in magnification will result in a change in Exposure all else equal, since photon throughput must stay the same:

(25)  $H \propto \dfrac{1}{m^2}$

with $H$ Exposure and $m$ lens magnification. Magnification for a far away object is proportional to focal length, so if focal length is doubled Exposure drops to a fourth of what it was.
Appendix III – Accuracy of Geometric Solid Angle Approximations
As we have seen in this article a geometric solid angle is the area of the relative conical spherical cap per Equation (2), divided by the radius squared:

$\Omega = \dfrac{A_{cap}}{r^2} = 2\pi(1-\cos\theta) \;\approx\; \pi\theta^2 \;\approx\; \pi\sin^2\theta$

The last two are approximations valid only for small $\theta$ in radians.
With a circular Exit Pupil of diameter $D$, the lens focused at infinity and a small half opening angle $\theta$, the spherical cap can be assumed to be almost flat, therefore a circular disk. In that case its area is the area of the aperture ($\pi D^2/4$) and the radius of the cone can be considered equal to published focal length at infinity ($f$), so another approximation for geometrical solid angles as used in photography is

$\Omega \approx \dfrac{\pi D^2}{4f^2} = \dfrac{\pi}{4N^2}$

with published f-number at infinity $N = f/D$. Figure 6 shows the error in stops for each solid angle formulation as a function of working f-number for a well corrected lens, per Equation (23).

Pretty good in practice over typical working f-numbers, since most photographers are content with 1/3 of a stop Exposure controls in-camera. You can check my math in the Matlab routine that produced the plot.
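A Python sketch of the same comparison (function name mine; it assumes the Abbe relation of Equation (23), so that $\sin\theta = 1/2N_w$) could look like this:

```python
import math

def error_in_stops(n_w):
    """Error, in stops, of the geometric (2*pi*(1-cos)) and paraxial
    (pi*theta^2) solid angle formulations relative to the effective
    solid angle pi*sin^2(theta) = pi/(4*N_w^2), with
    sin(theta) = 1/(2*N_w) per Equation (23)."""
    theta = math.asin(1.0 / (2.0 * n_w))
    effective = math.pi * math.sin(theta) ** 2
    geometric = 2.0 * math.pi * (1.0 - math.cos(theta))
    paraxial = math.pi * theta ** 2
    return math.log2(geometric / effective), math.log2(paraxial / effective)

for n_w in (1.0, 1.4, 2.8, 5.6, 11.0):
    geo, par = error_in_stops(n_w)
    print(f"N_w = {n_w:4.1f}: geometric {geo:+.3f} stops, paraxial {par:+.3f} stops")
```

At typical working f-numbers the errors are a small fraction of a stop, growing noticeable only as the f-number approaches 1.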
Notes and References
1. For a clear derivation of radiance/Luminance from Lambertian sources see Dr. Robert A. Schowengerdt's class notes at the University of Arizona, 2000.
This question is about the solid angle. I think the correct solid angle is the projected or foreshortened solid angle. Therefore $\omega(\theta) = \pi\sin^2\theta$ is the correct solid angle. As you point out, they both have the same first term in their expansion.
Why do you think the full solid angle is the one to use to do radiometry? I have always had issues clearly thinking about radiometry behind a lens, but from the point of view of the pixel, rays are coming in at a range of angles which should be foreshortened.
Hello Brad,
I am not sure I understand your question. If the pixel in question is in the very center of the sensor, it is parallel to the exit pupil and so the solid angle is full by definition, per Figure 5. The rest of the projected infinitesimal area is painted by wobbling the angle around infinitesimally. On the other hand if the pixel is closer to the edge of the sensor there will be vignetting effects.
Jack
Let’s only think about the pixel on the optical axis, and let’s imagine the converging wavefront that illuminates the pixel. From the point of view of the pixel, there is a solid angle from which the radiation is coming. Geometrically this is the full solid angle. But if I were to imagine breaking this solid angle up into small little sub-solid angles, then the radiation coming from the solid angles near the edge of the lens would need to be multiplied by an appropriate cosine factor. Putting this cosine factor in the integral to foreshorten the solid angle gives the result in my previous post.
Let’s focus on the pixel located at the very center, directly behind the optical axis of a circular (rotationally symmetric) optic. We’ll allow this optic to grow arbitrarily large. For simplicity, assume we’re observing a point source at infinity which is equivalent to a uniform flat field facing the entrance aperture directly.
In this setup, we can easily switch between ray optics and wavefront-based interpretations. From the pixel’s point of view, there’s a set of converging rays directed toward it. Alternatively, we can imagine this as a spherical cap of a wavefront converging onto the pixel.
If we ask, "What is the solid angle of this cap?", the answer is given by the geometric solid angle $\Omega(\theta) = 2\pi(1-\cos\theta)$.
Now suppose the optic becomes very large. While the total solid angle remains the same, rays originating near the edge of the aperture now intersect the pixel at increasingly oblique angles. Let’s consider a narrow sub-section of the solid angle far from the center. To compute the radiometric contribution from this sub-section, we must account for the foreshortening introduced by the incident angle — specifically via a cosine factor due to the obliquity of the incoming rays.
Therefore, when computing radiometric quantities, we must include this foreshortening cosine factor in the integral to correctly evaluate the projected solid angle. This gives $\omega(\theta) = \pi\sin^2\theta$.
I would not consider Ω(θ) and ω(θ) approximations of one another — they represent fundamentally different physical quantities. However, they do share the same first term in their Taylor series expansion around θ=0, which may be the source of confusion in some discussions.
Hi Brad, I see: you are saying it is not approximately $2\pi(1-\cos\theta)$, it is $\pi\sin^2\theta$, and the difference can become relevant at small f-numbers.
You are absolutely correct: I was unwittingly applying the paraxial approximation. Your projected solid angle $\omega = \pi\sin^2\theta$ is the correct 'effective' solid angle to use because it takes into consideration that even with $dA$ on axis some rays will hit the sensing area obliquely, foreshortening the receiving area. I will correct the article when I have a moment, thanks!
Jack