Every Camera, Every Angle on Android


At GameChanger, video streaming has become a huge part of our business and thus our tech stack. But as a small company that practices shipping often, we can’t ship everything feature-complete from day one, so video streaming launched with the ability to stream only from your default rear camera lens.

But as we know, ultrawide lenses on phones have become commonplace, and sure enough, customers began writing in, asking to use their ultrawide cameras to stream their events. Baseball and softball fields are actually quite wide, and it makes a lot of sense to capture more of the field. So in time, ultrawide streaming became our priority, and thus we engaged in battle with one of the most brittle Android APIs we have seen…

Streaming in the olden days

Well, not really the olden days, because we are using the most up-to-date APIs, but before we implemented ultrawide streaming, selecting the camera we wanted to stream with was generally pretty simple:

private fun CameraManager.chooseCamera(teamId: TeamId) = cameraIdList
    .filter { id ->
        // Keep only backward-compatible rear cameras
        val characteristics = getCameraCharacteristics(id)
        val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)

        characteristics.get(CameraCharacteristics.LENS_FACING) == CameraMetadata.LENS_FACING_BACK &&
            capabilities?.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE) == true
    }
    .mapNotNull { id ->
        // Find the largest MediaRecorder output size that fits within 720p
        val characteristics = getCameraCharacteristics(id)
        val cameraConfig = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
            ?: return@mapNotNull null

        val (width, height) = arrayOf(1280, 720)
        cameraConfig.getOutputSizes(MediaRecorder::class.java)
            .filter { it.width <= width && it.height <= height }
            .maxByOrNull { it.width * it.height }
            ?.let { id to it }
    }
    .map { (id, resolution) ->
        CameraArgs(cameraId = id, width = resolution.width, height = resolution.height, fps = 30)
    }
    .firstOrNull()
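The size pick inside that chain can be pulled out of the Android plumbing entirely. Here’s a minimal standalone sketch of it; `Size` here is a hypothetical stand-in for `android.util.Size`:

```kotlin
// A minimal sketch of the resolution pick above: the largest supported
// output size that still fits within the 1280x720 target.
// `Size` is a hypothetical stand-in for android.util.Size.
data class Size(val width: Int, val height: Int)

fun bestFit(supported: List<Size>, maxWidth: Int = 1280, maxHeight: Int = 720): Size? =
    supported
        .filter { it.width <= maxWidth && it.height <= maxHeight }
        .maxByOrNull { it.width * it.height }

fun main() {
    val sizes = listOf(Size(1920, 1080), Size(1280, 720), Size(640, 480))
    println(bestFit(sizes)) // the 1280x720 entry wins
}
```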

TL;DR: grab the first rear camera that supports 720p. Note the cameraId: one ID corresponds to each camera on the device…right?

Nope. Well sometimes, it depends.

Enter Multi-Camera API

At the time of writing, the not-deprecated API for accessing cameras on Android is camera2. camera1 is deprecated. cameraX is built on top of camera2. Obviously.

Here are some references for camera2. We are going to focus on the multi-camera training as a jumping-off point.

The multi-camera training page does a great job of explaining the differences between logical and physical camera setups, when the API was introduced, and why. For the purposes of this article, here’s what you need to know:

  • An Android device running API level 28 or above can have either a logical or a physical camera setup. Below API 28, it is strictly a physical camera setup.
  • Physical camera setups expose each camera sensor individually, with cameraIds listed in cameraManager.cameraIdList. If you are lucky, you get one camera id per physical sensor and can stream with any id you want.
  • Logical camera setups hide the individual physical camera sensors on the back of the phone, giving you just one id each for the front and back of the device in cameraManager.cameraIdList. If you keep poking the camera API you can get the physical sensor ids, but you still can’t use them to open a camera session; you must use ids from cameraManager.cameraIdList. Thus, to actually stream with an ultrawide sensor on a logical camera setup, you have to do more…things.
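As a rough illustration of the id-visibility difference, here’s a toy model. None of this is the real API; `Device` and its fields are made up purely to show which ids are openable in each setup:

```kotlin
// Toy model of camera id visibility. Device and its fields are
// hypothetical, not part of the camera2 API.
data class Device(
    val cameraIdList: List<String>,                   // ids you may open sessions with
    val physicalIdsBehind: Map<String, List<String>>, // logical id -> hidden physical ids
)

// Physical setup: every rear sensor gets its own openable id.
val physicalSetup = Device(
    cameraIdList = listOf("0", "1", "2"), // rear standard, front, rear ultrawide
    physicalIdsBehind = emptyMap(),
)

// Logical setup: one openable rear id; the individual rear sensors are
// only reachable as physical ids behind it, and cannot be opened directly.
val logicalSetup = Device(
    cameraIdList = listOf("0", "1"), // rear logical, front
    physicalIdsBehind = mapOf("0" to listOf("2", "3")),
)
```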

Okay, doesn’t sound too bad. It’s easy enough to figure out if a device is a physical or logical camera setup:

private fun CameraManager.getRearCameraIds(): List<CameraId> = cameraIdList.filter {
    val characteristics = getCameraCharacteristics(it)
    val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)

    characteristics.get(CameraCharacteristics.LENS_FACING) == CameraMetadata.LENS_FACING_BACK &&
        capabilities?.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_BACKWARD_COMPATIBLE) == true
}

private fun CameraManager.hasRearLogicalCameras(): Boolean = this.getRearCameraIds().any {
    this.getCameraCharacteristics(it).get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
        ?.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) == true
}

So let’s start with the easy one.

Supporting physical camera setups

Once we know that we are dealing with a physical camera setup, it’s simply a matter of iterating over rear ids and calculating the widest one. Our camera feature only exposes the default and the widest sensor to the user, so this is the logic that works for us:

private fun getWidestPhysicalCamera(streamingResolution: StreamingResolution): CameraInfo? {
    return cameraManager
        .getRearCameraIds()
        .getWidestCameraId()
        ?.let { cameraId -> cameraId.getMaxSupportedResolution(streamingResolution) }
        ?.let { (cameraId, resolution) ->
            CameraInfo(
                cameraId,
                CameraLensType.WidePhysical,
                StreamingResolution(resolution.width, resolution.height, streamingResolution.fps)
            )
        }
}

private fun List<CameraId>.getWidestCameraId(): CameraId? = this.maxByOrNull {
    it.computeCameraWidth()
}

private fun CameraId.computeCameraWidth(): Float {
    val characteristics = cameraManager.getCameraCharacteristics(this)
    val activeSize = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE) ?: return Float.MIN_VALUE
    val physicalSize = characteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE) ?: return Float.MIN_VALUE
    val pixelSize = characteristics.get(CameraCharacteristics.SENSOR_INFO_PIXEL_ARRAY_SIZE) ?: return Float.MIN_VALUE
    val focalLength = characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)
        ?.firstOrNull() ?: return Float.MIN_VALUE

    // Fraction of the pixel array that is actually active
    val fractionX = activeSize.width().toFloat() / pixelSize.width.toFloat()

    // Horizontal field of view in degrees: 2 * atan(sensorWidth / (2 * focalLength))
    return Math.toDegrees(2.0 * atan2((physicalSize.width * fractionX).toDouble(), 2.0 * focalLength)).toFloat()
}

Note that we basically ripped the widest calculation logic from various SO posts. Here’s one that offers a good explanation of what’s going on there.
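Stripped of the Android plumbing, the calculation is just the horizontal field-of-view formula. Here’s a standalone sketch with made-up but plausible sensor and focal-length numbers:

```kotlin
import kotlin.math.atan2

// Horizontal field of view in degrees: 2 * atan(sensorWidth / (2 * focalLength)).
// atan2(y, x) == atan(y / x) for positive x, matching the call above.
fun horizontalFovDegrees(sensorWidthMm: Double, focalLengthMm: Double): Double =
    Math.toDegrees(2.0 * atan2(sensorWidthMm, 2.0 * focalLengthMm))

fun main() {
    // Made-up but plausible numbers: a standard lens vs an ultrawide.
    val standard = horizontalFovDegrees(sensorWidthMm = 6.4, focalLengthMm = 4.3)
    val ultrawide = horizontalFovDegrees(sensorWidthMm = 4.6, focalLengthMm = 1.8)
    println("standard=%.1f ultrawide=%.1f".format(standard, ultrawide))
    // The shorter focal length wins despite the smaller sensor.
}
```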

This logic along with the original logic to fetch the default rear camera yields two camera ids. Switching between them is just restarting your preview/capture session with the new id.

Supporting logical camera setups

Okay, we have to jump through a few more hoops to support logical camera setups. Once we have determined that a logical camera setup is present, we have to figure out which rear camera id has the logical cameras behind it:

@RequiresApi(Build.VERSION_CODES.P)
private fun List<CameraId>.getLogicalCameras(): List<CameraId> = this.filter {
    val characteristics = cameraManager.getCameraCharacteristics(it)
    val capabilities = characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
    capabilities?.contains(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) == true
}

Now we have a list of rear camera ids with logical multi-camera capability. Each of these is a logical camera id, and each has two or more physical camera ids behind it. We need those physical ids to address individual lenses. This is how we get them:

@RequiresApi(Build.VERSION_CODES.P)
private fun List<CameraId>.getAllLogicalPhysicalPairs(): List<Pair<CameraId, CameraId>> = this.flatMap { logicalCameraId ->
    val physicalCameraIds = cameraManager.getCameraCharacteristics(logicalCameraId).physicalCameraIds.toList()
    physicalCameraIds.map {
        Pair(logicalCameraId, it)
    }
}

Now we have physical ids paired up with their logical id, and we need to figure out which physical lens is the widest. This is easier than on a physical setup, because now we have LENS_INFO_AVAILABLE_FOCAL_LENGTHS available to us:

private fun List<Pair<CameraId, CameraId>>.getWidestLogicalCamera(): CameraId? = this.minByOrNull {
    val cameraCharacteristics = cameraManager.getCameraCharacteristics(it.second)
    cameraCharacteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)?.minOrNull() ?: Float.MAX_VALUE
}?.first

Exhausted yet? Finally, logical camera setups require you to set a zoom ratio to get the widest focal length. We get the number like so:

@RequiresApi(Build.VERSION_CODES.R)
private fun getMinimumControlZoomRatio(logicalCameraId: CameraId): Float {
    val cameraCharacteristics = cameraManager.getCameraCharacteristics(logicalCameraId)
    return cameraCharacteristics.get(CameraCharacteristics.CONTROL_ZOOM_RATIO_RANGE)?.lower ?: 1F
}

Putting it all together:

@RequiresApi(Build.VERSION_CODES.R)
private fun getWidestLogicalRearCamera(): CameraLensType.WideLogical? {
    return cameraManager
        .getRearCameraIds()
        .getLogicalCameras()
        .getAllLogicalPhysicalPairs()
        .getWidestLogicalCamera()
        ?.let {
            CameraLensType.WideLogical(getMinimumControlZoomRatio(it))
        }
}

So we have the correct id to open the rear camera session with, and a control zoom ratio. The capture request is built the same way, but now we set the control zoom ratio:

if (cameraLensType is CameraLensType.WideLogical && Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
    captureRequestBuilder.set(CaptureRequest.CONTROL_ZOOM_RATIO, cameraLensType.controlZoomRatio)
}

Note that you don’t use physicalCameraIds to actually open a camera session. With logical camera setups, you still use a camera id found in cameraManager.cameraIdList to open a camera session. You then just give it the minimum control zoom ratio, and the OS itself takes care of selecting the widest lens to reach the desired zoom.
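Tying the two paths together, the overall decision flow looks roughly like this. This is a sketch with hypothetical types: `CameraChoice` and `chooseWideRearCamera` stand in for our real camera-info models, with the ids and zoom ratio coming from the helpers above:

```kotlin
// Hypothetical model of the full decision: logical setups get the logical
// rear id plus a control zoom ratio, physical setups get the widest rear id.
sealed class CameraChoice {
    data class WidePhysical(val cameraId: String) : CameraChoice()
    data class WideLogical(val cameraId: String, val controlZoomRatio: Float) : CameraChoice()
}

fun chooseWideRearCamera(
    hasLogicalSetup: Boolean,
    widestPhysicalId: String?,  // from the physical-setup path
    logicalRearId: String?,     // from the logical-setup path
    minControlZoomRatio: Float, // CONTROL_ZOOM_RATIO_RANGE.lower
): CameraChoice? = when {
    hasLogicalSetup && logicalRearId != null ->
        CameraChoice.WideLogical(logicalRearId, minControlZoomRatio)
    widestPhysicalId != null -> CameraChoice.WidePhysical(widestPhysicalId)
    else -> null // no usable wide camera; fall back to the default lens
}
```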

Gotcha!

Okay, so code stuff out of the way. Figuring out all the correct ways of doing this was tough, as there are not a lot of code samples out there. But there is one…

OpenCamera. OpenCamera is a fully featured, open-source camera app. And it includes support for physical and logical multi-camera setups! Great, a perfect reference.

So I installed OpenCamera on my OnePlus 7 Pro, and it seamlessly switches between wide and ultrawide lenses. A couple of cmd+c, cmd+v strokes from the OpenCamera source later, I had the multi-camera implementation inside the TeamManager app. And…it didn’t work. cameraManager.cameraIdList showed only the front camera and the rear standard lens in my app (note: this is a physical setup). But in the OpenCamera app, the same API call cameraManager.cameraIdList showed the front camera, rear standard, and rear ultrawide.

This really threw us for a loop. For whatever reason, the OpenCamera package name was whitelisted and thus allowed to access more camera ids.* Why? We aren’t sure. Just OnePlus things, amirite? But what it means is that our app cannot support wide-angle streaming on OnePlus devices.

*I can’t find the source anymore, but it was buried deep in an SO post. It took us at least a couple of days to figure this out.

And this was just the tip of the iceberg for dealing with manufacturers’ implementation…

At the mercy of the manufacturers

Reading this whole article, you may ask: how do we know which phones support which setup? The short answer is that we have no idea. Here’s a short list of what we have found so far for devices with an ultrawide rear lens:

  • OnePlus devices: Physical setup that doesn’t expose ultrawide to our app but does expose it to OpenCamera. Ultrawide works in the native camera app.
  • Motorola devices: Physical setup that doesn’t expose ultrawide to any third-party app. Ultrawide works in the native camera app.
  • Samsung devices: Physical setup that exposes the standard rear and ultrawide lenses. Ultrawide works in the native camera app and OpenCamera. We were able to support Samsung devices.
  • Pixel devices: Logical setup, but only the Pixel 5 has an ultrawide; the Pixel 4 has a standard and a telephoto. So we needed to check whether the device has a logical rear camera wider than the default camera. Pixels are the only devices we have found that use logical setups.

And these are just the ones we know about! We don’t have every device in the world and this can change with software updates and new devices.

As you can see, how each manufacturer decides to implement the multi-camera API is completely random and illogical. We ended up not being able to support as many devices as we thought when the project was conceived. It is very disappointing to see the state of the multi-camera API as manufacturers implement it, especially considering how many devices are being built with multiple lenses.

But hey, I think we are “future-proofed”, whatever that means. Until the multi-camera API is deprecated anyway…

Source: GameChanger