One of the coolest things about Android is that devices come in all shapes and sizes, and this includes the cameras attached to them.
We’ve seen phones with five cameras, enormous megapixel counts, and all kinds of other hardware oddities.
On top of that, nifty machine learning features are becoming more ubiquitous, so users are starting to expect camera apps to process their photos in new and interesting ways.
Unfortunately, this hardware variety and the need for image processing tend to make camera apps difficult to develop.
Camera APIs also behave differently across devices, leading to branching implementations and hard-to-debug camera issues.
The standard API used since Lollipop has been the Camera2 API, which gives developers access to the basics of photography (and is itself an improvement over the original Camera API).
However, it still doesn’t address a lot of device-specific weirdness around interacting with the camera hardware and software, and you still have to manually manage resources and configuration.
Because of this, your app often has to determine which device it’s running on to address these specific issues before sending and/or after receiving information from the Camera2 API.
Needless to say, this can quickly become a maintenance nightmare in your codebase.
CameraX to the rescue
The CameraX support library aims to solve these problems with an elegant API that behaves in the same way across almost every Android device.
Basically, it serves as a simplifying abstraction layer on top of the existing Camera2 API. It allows you to quickly and succinctly access camera information for common use cases.
CameraX handles all the startup and shutdown logic, ensures that lifecycles are obeyed, and manages threading for camera features.
Issues found on specific devices are now handled within the CameraX library, instead of you having to include handling logic in your application.
Since it’s still using the Camera2 API, it provides backwards compatibility to API 21.
CameraX use cases and resource management
The API design is simple – you tell CameraX which features you need enabled during that session, what you want to do with the output, and what lifecycle your camera session should be limited to.
You give CameraX a configuration, and it attempts to determine sensible defaults for anything you don’t specify, or falls back gracefully when your specification exceeds the capabilities of the device your app is running on.
For example, you can request a target resolution for the output image, but if the device is incapable of that resolution, CameraX will automatically handle the fallback to a supported resolution.
Startup and shutdown are handled based on the bound lifecycle, which operates in an intuitive way: on Lifecycle.Event.ON_START, the camera is started and begins reporting output data; on Lifecycle.Event.ON_STOP, the camera is shut down; and on Lifecycle.Event.ON_DESTROY, associated resources are released.
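As a rough sketch of how this looks with the current alpha APIs (class and method names may shift between alpha releases), a preview use case can be configured, wired to a TextureView, and bound to an Activity's lifecycle in a few lines. The viewFinder view and the 1280x720 target resolution here are illustrative assumptions:

```kotlin
import android.util.Size
import android.view.TextureView
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig

// Sketch based on the CameraX alpha API; assumes an Activity whose layout
// contains a TextureView passed in as viewFinder.
fun AppCompatActivity.startCamera(viewFinder: TextureView) {
    // Request a resolution; CameraX falls back to a supported one if the
    // device can't provide it.
    val previewConfig = PreviewConfig.Builder()
        .setTargetResolution(Size(1280, 720))
        .build()

    val preview = Preview(previewConfig)
    preview.setOnPreviewOutputUpdateListener { output ->
        // Route the camera's output frames into the TextureView.
        viewFinder.surfaceTexture = output.surfaceTexture
    }

    // Bind the use case to this Activity's lifecycle; CameraX now handles
    // startup, shutdown, and resource release automatically.
    CameraX.bindToLifecycle(this, preview)
}
```

Note that after this call there is no explicit open/close camera code anywhere in the app; the lifecycle binding is the whole resource-management story.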
CameraX is designed with three main use cases: preview, analysis, and capture.
The preview use case gives you access to a stream of output frames from the camera, meaning you can display the image stream in your application.
CameraX also provides focus, zoom, and torch (flash) APIs to facilitate developing common camera preview interactions.
The analysis use case lets you attach an analyzer method that runs on each frame of the camera output stream.
Per-frame analysis can be used to implement real-time effects like face detection, or to measure light and color levels in the image.
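As an illustration, here is a sketch of an analyzer that estimates average brightness, in the style of the alpha-era API (the luma math assumes the first plane of a YUV frame, and the class name is my own):

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy

// Pure helper: average luminance of a Y (luma) plane, in the range 0–255.
fun averageLuma(yPlane: ByteArray): Double =
    yPlane.map { it.toInt() and 0xFF }.average()

// Sketch of an analyzer using the CameraX alpha API; analyze() is invoked
// for each frame of the camera output stream.
class LuminosityAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(image: ImageProxy, rotationDegrees: Int) {
        val buffer = image.planes[0].buffer  // Y plane of a YUV_420_888 frame
        val bytes = ByteArray(buffer.remaining()).also { buffer.get(it) }
        val luma = averageLuma(bytes)
        // Feed luma into exposure logic, logging, etc.
    }
}
```

An instance of this class would be attached to an ImageAnalysis use case, which is then bound to the lifecycle just like the preview.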
Finally, the capture use case allows your user to actually take a photo and save it to the device.
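A minimal capture sketch, again using the alpha API (the capture mode, output directory, and file-naming scheme are illustrative assumptions):

```kotlin
import java.io.File
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureConfig

// Sketch based on the CameraX alpha API.
val captureConfig = ImageCaptureConfig.Builder()
    .setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)
    .build()
val imageCapture = ImageCapture(captureConfig)

fun takePhoto(outputDir: File) {
    // Illustrative file name; pick whatever scheme suits your app.
    val file = File(outputDir, "${System.currentTimeMillis()}.jpg")
    imageCapture.takePicture(file, object : ImageCapture.OnImageSavedListener {
        override fun onImageSaved(file: File) {
            // Photo written to disk; update the UI or media store here.
        }
        override fun onError(
            error: ImageCapture.UseCaseError,
            message: String,
            cause: Throwable?
        ) {
            // Handle the capture failure.
        }
    })
}
```

Because bindToLifecycle accepts multiple use cases, preview, analysis, and capture can all be passed in one call and share a single lifecycle.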
The modes can be enabled on their own or in combination with the others, and all three can be tied to the same lifecycle so that resource management remains simple.
Some devices have additional capabilities that aren’t present everywhere, including features like Portrait mode, Night mode, HDR, and Beauty mode.
CameraX extensions enable these modes and more, and it’s only a few lines of code to implement.
If the specific device your app is running on doesn’t support a given extension, the availability check returns false, the extension remains disabled, and everything continues as normal.
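With the alpha extensions artifact, enabling one of these effects really is only a few lines; the bokeh (Portrait) extender shown here is one example, and the exact class names may change as the library evolves:

```kotlin
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureConfig
import androidx.camera.extensions.BokehImageCaptureExtender

// Sketch based on the alpha extensions API: enable bokeh (Portrait) capture
// only when the current device supports it.
val builder = ImageCaptureConfig.Builder()
val bokeh = BokehImageCaptureExtender.create(builder)
if (bokeh.isExtensionAvailable()) {
    bokeh.enableExtension()  // supported: turn the effect on
}
// If unsupported, the check returned false and capture proceeds as normal.
val imageCapture = ImageCapture(builder.build())
```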
How does Google ensure consistent camera behavior?
Google has heard the many pain points developers have hit with the Camera2 API, and among the most daunting is debugging hard-to-find issues on dozens of different devices.
To address this, they’ve built an automated test lab specifically for CameraX and filled it with a variety of devices from various OEMs and Android versions back to Lollipop.
They’re continually running unit tests, integration tests, and performance tests on different camera features.
The goal of this test facility is to find bugs with the camera interaction so they can be addressed within the CameraX library, and no longer need to be manually handled by app developers.
We’ve seen the benefits that CameraX gives us over the Camera2 API, and ideally this abstracted access to the Camera2 API will be enough for most use cases.
However, if you have a more complex use case that CameraX doesn’t yet cover, you can still add the Camera2 dependency alongside CameraX and drop down to the Camera2 APIs when you need to.
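For instance, a Gradle setup might declare both artifacts side by side (the version string below is illustrative; check the CameraX release notes for the current alpha):

```groovy
dependencies {
    // Illustrative version; CameraX is in alpha and moves quickly.
    def camerax_version = "1.0.0-alpha05"
    implementation "androidx.camera:camera-core:${camerax_version}"
    implementation "androidx.camera:camera-camera2:${camerax_version}"
}
```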
For now, CameraX is still in alpha, so it’s not recommended to become too attached to the API as it currently exists.
However, according to Google it’s in rapid development, and several libraries announced in alpha during I/O 2018 now have stable versions, so I’d expect CameraX to follow a similar trajectory.
If you’d like a hands-on introduction, Google created a helpful Codelab that walks you through the basics.
Give CameraX a whirl and say goodbye to your Android camera-related headaches!