A13 Bionic: this is the 8.5-billion-transistor brain that powers the new iPhones

Every new generation of iPhone raises the same question: is it worth upgrading? Answering it requires looking at each particular case, because it is not a simple yes or no. One of the most common remarks I get on social networks (or arguments for or against upgrading) is that the CPU is «only» 20% faster than the A12, and that this is no big deal. And we come back to the same dichotomy: for some that increase will justify the change, and for others it will not.

The problem is that measuring a CPU by its speed increase is like measuring a car by its top speed. If we do that, we forget that a car is not just something that goes fast: it also has fuel consumption, and perhaps a more efficient engine that «only» gains 10 km/h of top speed turns out to use 20% less fuel. The new model may have more comfortable seats, a better multimedia system with CarPlay, a better passive safety rating, an LTE internet connection… there are many more factors to consider, and the most important one is its ability to perform specific tasks.

Algorithms

Before we dive into the A13 itself, let's bring some order to the rather shallow comparisons being made these days, mainly to lay out enough background to let us judge the change on its merits. The goal is not only to know the A13, but to understand how it differs from previous generations.

The star question is: «If the new A13 is only 20% faster, why can't an XS with the A12 take night mode photos like the iPhone 11?» Simple: algorithms and chip functions. It's not an Apple whim: the A12 simply cannot take night photos the way Apple has implemented this feature. Apple's algorithm would not run on an A12 for reasons of capability, not of speed.

In the case of a night photography mode (to stick with the example), there are different ways to do it. Different algorithms: sets of calculations performed on each and every pixel of the photograph, together with the information the lens gives us, to obtain the final image.

Let's explain in a simple way the differences between algorithms, and how there can be many solutions to the same problem (with different levels of efficiency). Let's say I want to add up (in Swift) all the numbers in a sequence. I could do it like this:
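Assuming array is an [Int], one illustrative version (a sketch, not a literal transcription) is a very manual loop that walks the array by index:

var sum = 0
var index = 0
while index < array.count {
    sum = sum + array[index]
    index = index + 1
}

Or I could let the language iterate for me:

var sum = 0
for number in array {
    sum += number
}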

The latter is a more efficient algorithm. But it is even more so if I use functional programming and do:

let sum = array.reduce(0, +)

All these versions do exactly the same thing. They produce the same result. But some are more optimal than others: some go through more steps, some need more instructions, others fewer.

Neural Cam vs iPhone 11 (A13)

Having seen and understood this, we take the next step: night mode photography with the new iPhones versus the well-known Neural Cam app. Someone will think, «If the Neural Cam app can do it on an iPhone 6, why does Apple only allow it with the A13?» The problem is that we should never assume that a similar end result (and its quality) means the same process behind it.

The Neural Cam app asks us to point at what we want to photograph and tries to focus (even with little light). It locks the focus and asks us to take the shot. It then captures with the shutter open for about two seconds, during which we have to hold the phone still. Here, the optical stabilization of our phone is key. When it finishes, in a process that takes about 10 seconds (longer on older phones), it takes all the information captured during those two seconds and reduces the size of the photograph. Why? To remove any motion blur in it.
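Conceptually (and only as a rough sketch, not Neural Cam's actual code), this kind of brute-force approach boils down to stacking the frames captured during the exposure and averaging them pixel by pixel on the CPU:

// Illustrative only: averages several grayscale frames on the CPU.
// Each frame is assumed to be a flat array of pixel intensities (0.0 to 1.0),
// all with the same size.
func stackFrames(_ frames: [[Float]]) -> [Float] {
    guard let first = frames.first else { return [] }
    var result = [Float](repeating: 0, count: first.count)
    for frame in frames {
        for i in 0..<result.count {
            result[i] += frame[i] / Float(frames.count)
        }
    }
    return result
}

Every pixel of every frame passes through the CPU one by one, which is exactly why the processing takes seconds rather than milliseconds.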

It is a great algorithm, but one that relies entirely on the brute force of the CPU, without using any specialized components. If we pay €3.49 for an app that takes night photos our device cannot take natively, and we have to wait 10 seconds or more for each one, we accept it because it is NOT a native feature. It's a third-party app, and we're more forgiving. If Apple shipped this on its current devices, the torches would be at the gates of Apple Park.

And this is important to understand: it uses machine learning, but it does NOT use the Neural Engine of the latest generations of Apple processors, which is only reachable through Apple's own development frameworks. To run its model it uses the brute force of the CPU, and that's why it takes 10 seconds or more. And that's also why it works on any device running iOS 12. Keep in mind that any algorithm, whatever it is, can always be run on the CPU: it will just consume more power and be slower.
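In Core ML terms, the difference is essentially which compute units a model is allowed to use. A minimal sketch (the model file name is hypothetical, only for illustration):

import Foundation
import CoreML

// Hypothetical compiled model; the path is only for illustration.
let modelURL = URL(fileURLWithPath: "NightModel.mlmodelc")

// Restrict inference to the CPU: runs on any supported device, but slowly.
// This is essentially the Neural Cam situation described above.
let cpuOnly = MLModelConfiguration()
cpuOnly.computeUnits = .cpuOnly
let slowModel = try? MLModel(contentsOf: modelURL, configuration: cpuOnly)

// Let Core ML use the CPU, the GPU and, when available, the Neural Engine:
// the fast path on an A12 or A13.
let anyUnit = MLModelConfiguration()
anyUnit.computeUnits = .all
let fastModel = try? MLModel(contentsOf: modelURL, configuration: anyUnit)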

Night mode on iOS, by contrast, works automatically. Depending on what you want to photograph, the capture takes 2 or 3 seconds on average and the result appears in a second or less. We can also set the exposure manually, up to 30 seconds. And when we take the photo, what the device sees in the preview changes how the shot is actually captured.

This generation of iPhone can analyze and extract results from the feed we see before pressing the shutter, a feed that is, after all, real-time video straight from the camera. Depending on what it sees and how it sees it, whether the subject moves a lot or a little, it changes how the photo is taken. If we point the camera at a static subject and hold it still, the system takes a few photographs with longer exposures. But if it detects that we are moving, or that the objects passing through the preview move faster, it takes more, shorter shots at different exposures and then merges those results.
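There is no public API that exposes this decision, but the idea can be sketched as a simple, entirely hypothetical rule: the more motion detected, the more frames and the shorter each exposure.

// Hypothetical illustration of the trade-off described above.
// motion is a score from 0 (perfectly still) to 1 (lots of movement).
func exposurePlan(motion: Double) -> (frames: Int, secondsPerFrame: Double) {
    if motion < 0.2 {
        // Static scene: a few long exposures gather the most light.
        return (frames: 4, secondsPerFrame: 1.0)
    } else if motion < 0.6 {
        // Some movement: more, shorter frames to limit blur.
        return (frames: 8, secondsPerFrame: 0.4)
    } else {
        // A lot of movement: many very short frames, merged afterwards.
        return (frames: 12, secondsPerFrame: 0.15)
    }
}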

Therefore, the night mode of iOS 13 on the new iPhones uses machine learning to detect the semantics of the photograph: to understand what is in the scene, its shadows, what moves and what doesn't, how we are holding the camera, whether the framing is fixed or not, at what distance, whether the subject is still or in motion… This is called image semantics, a capability that only the A13 has in its Neural Engine: the ability to recognize and analyze content even in preview, something earlier devices could not do.
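iOS 13 exposes part of this semantic understanding to developers through segmentation mattes at capture time. Night mode itself has no public API, so this only sketches the semantic side:

import AVFoundation

// Sketch only: in real code the output must already be attached to a
// configured AVCaptureSession with a supported camera (iOS 13+ API),
// otherwise no matte types are reported as available.
let photoOutput = AVCapturePhotoOutput()
photoOutput.enabledSemanticSegmentationMatteTypes =
    photoOutput.availableSemanticSegmentationMatteTypes

// Request the same mattes (skin, hair, teeth where supported) for a photo.
let settings = AVCapturePhotoSettings()
settings.enabledSemanticSegmentationMatteTypes =
    photoOutput.availableSemanticSegmentationMatteTypes
// photoOutput.capturePhoto(with: settings, delegate: someDelegate)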

A13

The A13 is a huge architectural leap forward. The point is not that it is 20% faster, which is the least of it: it is the number of components and new specialized parts that help the whole system, as we have discussed (provided we use native development).

Apple's chip team has been at work for about 10 years: Apple bought P.A. Semi in April 2008, and in January 2010 the first device with an Apple-designed chip was introduced, the iPad. The A13 is the tenth chip they have released, if we don't count the X variants some generations have had, which only add more CPU or GPU cores.

The A13 Bionic, which improves each and every component and adds some new ones, is not just a CPU. Now that we have had time to analyze it, we can pick out the following components:

High-efficiency sound processor with 24-bit audio support.

A DAC with support for advanced formats such as AAC and HE-AAC, both version 1 and version 2, which can also play MP3, FLAC and many other audio formats. This processor also calculates the spatial alterations of sound waves on the new iPhones, creating computational surround sound in the way the HomePod does, and it can decode Dolby Atmos files, which are also supported at the sound level. And one more function: it can emphasize a particular element of the sound spectrum. If I zoom in on something in a video, the focused element that produces sound can have its volume raised (audio zoom).
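The system handles this during video capture; purely as an illustration of the idea (and not the system implementation), here is a sketch that raises a playback mixer's gain as a zoom factor increases:

import AVFoundation

// Purely illustrative: map a camera zoom factor to a relative playback gain,
// mimicking the idea of "zooming in" on a sound source.
func audioGain(forZoomFactor zoom: Float) -> Float {
    // 1x zoom -> relative gain 0.5, 4x zoom or more -> full gain 1.0.
    let gain = 0.5 + (zoom - 1.0) / 6.0
    return min(max(gain, 0.5), 1.0)
}

let engine = AVAudioEngine()
engine.mainMixerNode.outputVolume = audioGain(forZoomFactor: 2.5)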

Screen and refresh management processor. We should not forget that the display of the new iPhone Pro uses the same technology as the Pro Display XDR monitor. It has an extended HDR mode, a typical brightness of 800 nits with peaks of 1,200, and a 2,000,000:1 contrast ratio. This processor applies dynamic refresh to the display: when certain pixels do not change their content, they are refreshed less often, which improves battery efficiency considerably. The Super Retina XDR screen of the iPhone Pro goes further and can skip sending a refresh signal to the display when its content has not changed, saving battery. The result is up to 15% higher energy efficiency than the XS or XS Max. It works together with an OLED processing unit that manages the pixels and how they light up, using the display engine to refresh and paint them.

High-performance unified memory processor. This is similar to the technology Nvidia uses in its CUDA library and its GPUs. It basically allows any component to access any part of the memory system and, at the same time, migrate data on demand to that component's dedicated memory with high-bandwidth access. This way, in a single operation, a component that reads content from general memory gets that data migrated directly into its own memory region or registers.
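On Apple platforms, the closest developer-visible analogue is Metal's shared storage mode, where the CPU and the GPU address the same buffer without explicit copies. A small sketch (not the A13's internal mechanism, just the same idea at the API level):

import Metal

// Create a buffer that both the CPU and the GPU can address directly.
if let device = MTLCreateSystemDefaultDevice(),
   let buffer = device.makeBuffer(length: 1024 * MemoryLayout<Float>.stride,
                                  options: .storageModeShared) {
    // The CPU writes straight into the shared memory...
    let pointer = buffer.contents().bindMemory(to: Float.self, capacity: 1024)
    for i in 0..<1024 {
        pointer[i] = Float(i)
    }
    // ...and a compute or render pass on the GPU can read those same
    // bytes without any copy or upload step.
}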
