DXOMARK Decodes: An introduction to AI in smartphone cameras

DXOMARK’s Decodes series aims to explain concepts or dispel myths related to technology, particularly in smartphones and other consumer electronics.  In this edition, we address the current buzz around artificial intelligence and briefly look at one way that AI is being used in smartphone cameras. We’ll continue to explore other ways in which AI is used in smartphone cameras and image quality assessment in future articles.


Smartphone photography has always had an element of magic about it. We just point and tap our devices in the hopes of capturing a moment or the scenery, no matter how challenging the situation might be. Smartphone cameras are now very sophisticated in the way they can make almost any image or video come out with correct exposure, good details, and great color, helping to overcome the compact device’s optical limitations.

Recent flagship releases show the importance that smartphone makers are placing on artificial intelligence to improve the user experience, particularly the image-taking experience. Samsung’s Galaxy S24 Ultra, for example, showcased a range of AI photography tools that can guide the image-taking process from “preview to post,” including editing capabilities that allow users to resize or move objects or subjects after capturing the image. The latest Google Pixel phones also use AI technologies that allow users to reimagine or fix their photos with features like “Best Take” and “Magic Eraser,” which blend or change elements such as facial expressions, or erase unwanted elements from a photo.

But while smartphones put a camera in everybody’s hands, most smartphone users are not photographers, and many devices do not even offer options to adjust certain photographic parameters; in many cases, AI handles them instead. As AI makes its way into many aspects of our lives, let’s briefly explore what AI is and how it is being applied to smartphone cameras.

What do we mean by AI?

AI is a fast-developing field of computer science that solves problems by perceiving, learning, and reasoning in order to search intelligently through many possible solutions. AI has given computer systems the ability to make decisions and take action on their own, depending on their environment and the tasks they need to achieve. With AI, computer systems perform tasks that would normally require some degree of human intelligence, from driving a car to taking pictures. It’s no wonder that companies worldwide are using AI to improve their products, services, and user experience.

We often hear the terms artificial intelligence, machine learning, and deep learning used interchangeably. But the three terms have distinct differences in how they process data.

Artificial intelligence is a general term describing the ability of a computer or robot to make decisions autonomously. Within AI is a subfield called machine learning, which covers the algorithms that learn from empirical data. The programmer, after coding the algorithm, executes it on a set of data used for “training.” The algorithm looks for patterns in the data that allow it to make predictions on a given task. Once new data comes in, the algorithm can search for the same patterns and make the same kinds of predictions on the new data. In other words, it is the algorithm itself that learns and adapts to new data.
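To make this train-then-predict loop concrete, here is a minimal sketch using scikit-learn (a library chosen purely for illustration; the article names no specific tools), with invented scene features and labels:

```python
# A minimal sketch of the "train on data, then predict on new data" workflow
# described above. The library (scikit-learn), the features, and the labels
# are all illustrative assumptions, not anything DXOMARK specifies.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [brightness, subject_distance_m] -> scene label
X_train = [[0.9, 50.0], [0.8, 40.0], [0.3, 1.5], [0.2, 1.0]]
y_train = ["landscape", "landscape", "portrait", "portrait"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # the algorithm looks for patterns

# New data: the model applies the learned patterns to make a prediction
print(model.predict([[0.25, 1.2]]))  # -> ['portrait']
```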

A subset of machine learning is called deep learning, which processes an even larger range of complex data in a more sophisticated way, through multiple layers called neural networks to achieve even more precise results and predictions.
Deep learning-based models, for example, are widely used now in image segmentation on X-rays for medical applications, in satellite imaging, and in self-driving cars.

Smartphone photography is also benefiting from deep learning models, as cameras are programmed to learn how to produce a perfect image.

How AI is used in smartphone photography

You might not realize it, but even before you press the shutter button on your smartphone, your personal pocket photographer has already begun working: identifying the scene, in some cases differentiating the objects in it, and setting the parameters to be ready to produce an image that will hopefully be pleasing to you.

Smartphone photography is a good example of AI at work, because the images are the result of computations that rely on AI elements, such as computer-vision algorithms, to capture and process images.

In contrast, a traditional DSLR camera provides a photographer with a wide range of parameters for creative image-taking. The way these parameters are set depends on:

– identifying the scene (portrait, natural scene, food, etc.) to be photographed and its semantic content, meaning what the viewer should focus on in the image;
– the properties of the scene, such as the amount of light, the distance to the subject, etc.

But most smartphone cameras do not even offer the option to adjust these parameters.

Scene detection

The ability of a machine to learn depends on the quality of the data it processes. Using computer-vision algorithms, which are themselves a form of AI, a smartphone camera needs to correctly identify the scene by extracting information and insights from the images and videos, in order to adapt its treatment.

The following examples are simple segmentations, in which the object is separated from the background and categorized.

What allows the computer or device to extract this information is called a neural network. With neural networks, computers can then distinguish and recognize images in the same way that humans do.

There are many different types of neural networks, but the main machine-learning model used for images is the Convolutional Neural Network (CNN), which passes an image through layers of filters that respond to certain features in the photo. This allows the scene and the objects in it to be identified and classified. CNNs are used for semantic segmentation of an image, in which each pixel is categorized into a class or object. Semantic segmentation and image labeling, however, are among the most challenging tasks in computer vision.
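As an illustration of what such a segmentation model looks like in practice, here is a minimal sketch using a pretrained DeepLabV3 CNN from torchvision; the model choice and the input file name are assumptions for the example, not the networks that phone vendors actually ship:

```python
# A sketch of CNN-based semantic segmentation: every pixel gets a class index.
# DeepLabV3/torchvision is a stand-in chosen for illustration only.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("scene.jpg").convert("RGB")           # hypothetical input photo
batch = preprocess(img).unsqueeze(0)                   # add a batch dimension

with torch.no_grad():
    scores = model(batch)["out"][0]                    # per-pixel class scores

labels = scores.argmax(0)                              # one class index per pixel
print(labels.unique())                                 # classes found in the scene
```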

For cameras to be able to learn to “see” scenes and objects like humans do depends on extensive databases of meticulously annotated and labeled images. Image labeling is still a task that requires human input, and many companies create and sell massive databases of labeled photos that are then used to create machine learning models that can be adapted for a wide range of products and specific applications.

The technology has advanced very quickly, and some chipmakers are already incorporating semantic segmentation into their latest chips so that the camera is aware of and “understands” what it is seeing as it takes the photo or video, in order to optimize it. This is known as real-time semantic segmentation or content-aware image processing. Much of this is possible thanks to the improved processing power of the chipsets, which now integrate many of these AI technologies to optimize photo- and video-taking. By separating the regions of an image in real time, certain types of objects in the image can be optimized for qualities such as texture and color. We’ll take a closer look at all the other ways that AI plays a role in image processing in another article.
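The sketch below shows this per-region idea in miniature: given a segmentation mask, only the pixels of one (hypothetical) class are adjusted. The class index, the saturation factor, and the use of OpenCV are all illustrative assumptions:

```python
# Content-aware processing in miniature: boost color saturation only in the
# regions a segmenter labeled as foliage. All constants are hypothetical.
import cv2
import numpy as np

FOLIAGE_CLASS = 8  # hypothetical class index produced by a segmentation model

def enhance_foliage(bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Increase saturation by 30% wherever mask == FOLIAGE_CLASS."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    region = mask == FOLIAGE_CLASS
    hsv[..., 1][region] = np.clip(hsv[..., 1][region] * 1.3, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```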

Now let’s take a look at a real-life example of AI at work in a smartphone camera. The example below reveals how the camera makes decisions and takes action on its own based on what it has identified in the scene. You’ll see how the camera adjusts the image as it goes from identifying the scene (is it a natural scene or a portrait?) to detecting a face and then adjusting the parameters to provide a correct exposure for a portrait: the target (the face).

Photo 1
Photo 2
Photo 3

In Photo 1, on the left, the camera identifies a natural landscape scene and exposes for it. In Photo 2, when the subject turns around, the camera still has not fully identified the face. By Photo 3, the camera has identified a face in the scene and has taken action to focus on it and expose it properly, at the expense of the background exposure. Comparing Photo 3 with Photo 1, in addition to the changed exposure of the background and the face, we also see that the subject’s white T-shirt has lost much of its detail and shading.

While Photo 3 is not ideal in terms of image quality, we can clearly see the camera’s decision-making process to prioritize the portrait for exposure.

Conclusion

As more manufacturers incorporate the “magic” of AI into their devices, particularly in their camera technology to optimize photos and videos, software tuning becomes more important to get the most out of these AI capabilities.

Through machine learning, smartphone cameras are being trained to identify the scenes more quickly and more accurately in order to adapt the image treatment. Through deep learning and its use of neural networks, particularly the image-specific CNN, smartphone cameras are not only taking photos, but they are also making choices about the parameters once reserved for the photographer.

AI is helping to turn the smartphone camera into the photographer.

We hope this gives you a basic understanding of how AI is already at work in your smartphone camera. We will continue to explore how AI affects other areas of the smartphone experience in future articles. Keep checking dxomark.com for more Decodes topics.

Smartphone portrait photography and skin-tone rendering study: Results and trends

In the summer of 2023, DXOMARK experts conducted their largest study of smartphone portrait photography, focusing on everyday life moments. The study:
● focused on portraits (all varieties of pictures featuring individuals);
● captured 405 scenes with 83 regular consumers as models;
● consisted of a user panel of these models, 30 professional photographers, and 10 DXOMARK image quality experts.

Our goal was to measure user preferences for pictures of people and identify emerging trends in smartphone portrait photography. The study included individuals representing a wide range of skin tones, which led to a compelling question: Does the perceived quality of images remain consistent across different skin tones?

Read on to learn about our main findings.

4 Key takeaways

1. Today’s best smartphones fail to meet user expectations for portrait rendering in pictures.

2. There are significant differences between smartphones in terms of portrait rendering, resulting in varying levels of user satisfaction.

3. The perceived quality of images does not remain consistent across all skin tones pictured.

4. Smartphones still have room for improvement in achieving satisfying photo rendering in every light condition.

 

Three devices, three renderings

As part of their methodology, DXOMARK experts included in their study the rendering of three premium flagship devices released in late 2022 and 2023, along with those of a professional photographer using a Digital Single-lens Reflex (DSLR) camera. Participants were then asked to identify which image they would not want to post on social media, as a criterion to highlight their level of satisfaction.

The goal of this study was to identify trends in user preferences based on the renderings participants favored.

The first key finding of the survey, which explained the subsequent results, was the noticeable differences in overall rendering between the three devices. This suggests that each manufacturer has its own distinct visual “signature.”

The manufacturer’s technical choices

We observed significant differences between the photos produced by the devices, even in very basic use cases, which resulted in different Satisfaction Indexes. This was true for both typical outdoor and indoor scenes.

 

“Satisfaction levels vary widely between the photos, hence the importance of studying trends in terms of user preferences. It also underscores the challenge manufacturers face in creating a unique style while ensuring they deliver a rendering that appeals to the majority of users.”
Hervé Macudzinski, Image Science Director, DXOMARK

 

Here, we observed notable differences in overall brightness, skin color, color rendering, and face exposure. Even in less technically demanding scenes, the manufacturer’s signature clearly influenced the results. Each device also scored a unique Satisfaction Index, underlining their distinct characteristics.

What is the Satisfaction Index?
The Satisfaction Index, developed by DXOMARK experts, is a metric that quantifies user preferences and measures the level of satisfaction of respondents. It takes into account several factors, including:
● Just Objectionable Difference (JOD);
● Image Rejection (%);
● Mean Rejection (%).
The Satisfaction Index is scored on a scale of 0 to 100, where:
● 0 indicates that the image was rejected by more than 50% of respondents;
● 100 indicates no rejection at all.

 

The verdict

The best smartphones are failing to meet user expectations for portrait pictures.

A total of 1,620 photos were taken for this study, and each photo was assigned a Satisfaction Index. A score of 70 or more guarantees a high JOD score and a low rejection rate, indicating that the photo is generally satisfactory to panelists. An index below 70/100 indicates that the photo may not meet user expectations.

The overall Satisfaction Index for all the portrait pictures reviewed was 61.

[Chart: Satisfaction Index for smartphones]

Interestingly, users had high expectations for portrait photos in all conditions:

Indoors: Of the indoor shots, respondents preferred the brighter picture, taken with Device A (after the photographer’s rendering). Here, the Satisfaction Index was 60.

At night: Users expect people to stand out. The Satisfaction Index was 57.

Low-light conditions are not ideal for photography. Even for professional photographers, it was a significant challenge to meet the expectations of the panelists.

A big challenge for smartphone cameras: The backlit scene

 

Exposure has the greatest impact on satisfaction

The portraits taken with a professional camera received an impressive overall average score of 77. By comparing the professional camera results to the smartphone results, we were able to better understand the trends in user preferences and identify areas of dissatisfaction.

Respondents often expressed dissatisfaction with the overall color rendering and incorrect exposure of faces.

Here are some other takeaways:
● users have strong expectations on level of brightness, skin color, and overall color rendering (as shown in the SDR scenes);
● users strongly penalized underexposure of the face, which had a significant impact on overall satisfaction (as shown in the indoor scenes);
● the most saturated or brightest image was not necessarily the preferred option.

In low-light situations, users expect the resulting photos to be similar to typical indoor photos, often unaware of the impact of low light on photography. This contributes to a greater disparity in user perception.

However, exposure remains a top priority for users and has a significant impact on their satisfaction.

When shooting at night, users want to maintain the ambiance of the scene while ensuring the subject’s face is properly exposed. This can be a challenge even with a professional camera, especially when the subject has a darker skin tone.

 

“People demonstrate remarkably specific preferences and a keen eye for details. In that context, standard consumer insights (ranking, device comparisons, etc.) are not enough to understand them.”
Hervé Macudzinski, Image Science Director, DXOMARK

 

The perceived quality of images varies across all skin tones pictured

Photos of people are captured in various conditions, from indoors to outdoors, day or night, sunny or backlit. Despite its popularity, this type of photography is technically challenging for smartphone cameras.
Through our rigorous scientific approach, this new survey provides insight into the factors that influence respondents’ choices. One of these factors is the rendering of the model’s skin tones, revealing image quality issues.

Satisfaction varies depending on age

A total of 123 panelists participated in the survey, divided into subgroups based on gender and age. This allowed us to gain initial consumer insights.

Younger consumers (under 40) were more selective when rating portrait pictures than those over 40. They also had a lower overall Satisfaction Index. In particular, there were significant differences in scenes with higher technical complexity, such as low-light, HDR, and backlit scenes.

This suggests that younger people are more sensitive to image quality issues than older people. Face exposure emerged as an area of particular concern for young people.

Given their discerning nature and high demands, satisfying the younger demographic is a key for manufacturers.

 

Satisfaction varies depending on gender

We also noticed a significant discrepancy in the Satisfaction Index between male and female panelists.

In all conditions, women had higher expectations of image quality than men. The more challenging the conditions of a given scene, the more degraded the image quality, and women were more adept at recognizing this degradation.

This difference in expectations between genders proved to be the most substantial compared to other subgroups, such as age, cultural heritage, or skin tone.

Satisfaction of every respondent varies depending on the skin tone of the model

A total of 83 models participated in the study, representing a wide range of skin tones.

As previously discussed in our methodology article, we used the Fitzpatrick scale, a widely used classification system for categorizing different skin tones. However, it is important to note the limitations of this scale, as it may not encompass the full spectrum of skin tones.

Still, the survey results clearly show that the presence of people with darker skin tones in photographs consistently correlates with lower levels of satisfaction. Crucially, this finding is not an issue of representation, as it applies to all respondents, including models, photographers, and DXOMARK experts: across all groups, respondents found these pictures less effective and judged that smartphones deliver less favorable renderings when the skin tone deviates from white.

[Chart: Satisfaction Index per skin tone type]

Hence, the problem lies in the inadequate rendering of darker skin tones. Other factors contributing to lower satisfaction include incorrect white balance and poor overall exposure for the same scene with a darker-skinned model.
The satisfaction scores declined as skin tones darkened, suggesting that the problem is not exclusively related to darker skin tones, but to any skin that is perceived as “not fair.” While light-skinned models were consistently rendered with similar image quality across devices, rendering challenges arose with any non-fair skin tone.

Tuning issues are more visible on deeper skin tones

The two photos on the left were taken with the same device. We can see that with a darker skin tone in the same scene, more small issues were detected, which affected the overall Satisfaction Index. An example of this is the underexposure observed in the second picture.

The device used for the two photos on the right, however, delivered equal satisfaction for both skin tones. The device on the left failed to provide satisfactory results for the darker skin tone, due to lower exposure settings and a lack of adaptation to the skin tone of the person in the scene.

These examples highlight the challenge for smartphones, which must adapt and use different tuning/settings to achieve optimal renderings for all individuals.

A reminder

This survey was not designed to assess device quality, but rather user rendering preferences. The preferred rendering was not always the most “natural” or the one that “accurately rendered” skin tone.

 

Room for improvement

Satisfaction with outdoor portrait photography was high, yet not flawless. As previously explained, it depends on several criteria: lighting conditions, of course, but also the manufacturers’ tuning choices.

To provide a basis for comparison, we included pictures taken and edited by professional photographers using DSLRs in the survey. These images represent what an “ideal target image” might look like and what would be considered perfect from the photographer’s point of view. The average Satisfaction Index for the professional pictures was 74. The lowest index, observed in low-light conditions, was 71.

[Chart: Ultra Premium devices vs. photographer rendering]

Of the smartphones assessed, only one device achieved an overall score of 71, with a high Satisfaction Index in all lighting conditions. The other two devices received significantly lower scores from our respondents.

Is the smartphone camera just a tool for capturing memories? Far from it. With its embedded technology, such as computational imaging capabilities, the smartphone camera plays the role of the photographer by making decisions on behalf of the user.

Through technological advancements over the years, smartphone cameras have made significant progress in bridging the gap with DSLRs in many ways.
The results indicate that there is a significant need for improvement in low-light, night, and backlit portrait photography, as users were highly dissatisfied with the results. For example, when shooting at night, users are unwilling to compromise between capturing the ambience of the nighttime setting and ensuring that the subject’s face is well exposed.

Photographer satisfaction: A guide to tomorrow’s consumer demands?

Thirty photographers and 10 DXOMARK image quality experts participated in this survey. Although one smartphone received high satisfaction scores, these participants were able to distinguish its photos from those of the photographer rendering.

The photographers’ Satisfaction Index was significantly lower because they knew exactly what they were looking for in different situations, leading them to be more demanding and to reject more pictures than consumers did. They also have the ability to detect subtle issues, which is distinct from their expectations in terms of signature and aesthetic goals.

The professionals had high expectations for all types of scenes and lighting conditions. And their top reasons for rejecting photos were exposure and color rendering.

The disparity between smartphones and photographer renderings was even more pronounced when it came to certain lighting conditions, such as low light and night photography. We found that the more challenging the conditions, the greater the preference for professional rendering.

In summary, the general preference for photographer rendering provides valuable insight into the ideal target rendering that manufacturers should strive to achieve. This knowledge can guide their efforts to meet and exceed consumer expectations in the future.

Smartphone portrait photography and skin-tone rendering: How did we measure user preferences?

Portraits are the most valued and popular type of photography, yet capturing great portraits remains technically challenging for smartphone cameras. The specific issue of achieving accurate skin tones, for instance, has received considerable attention from researchers and manufacturers alike.

After taking into account all previous work on this topic, DXOMARK’s image quality experts conducted their own extensive qualitative study aiming to:

● identify trends in users’ preferences regarding portraits (pictures of a single person as well as of a group of people);
● identify elements of satisfaction and frustration;
● explore the technical challenges.

To achieve this, we designed a unique methodology that allowed us to gather detailed insights from the most common use cases, environments, and conditions. This scientific methodology was used first for our European study in Paris, but it could be easily applied to other regions of the world, to other areas of study as well as other electronic products (laptops, for example).

We present it to you here.

The question of perception

Perception is a challenge when it comes to portrait quality evaluation. Indeed, people’s preferences when it comes to photos are often tied to their memories and familiarity with the subject. Hence, we hold our portraits and those of other people to different standards.

This begs the question: which qualities can most people agree a “good portrait” should have?

To find answers, DXOMARK’s image quality experts conducted this new qualitative study aiming to identify the reasons for frustration and the key pain points in smartphone portrait photography.

Understanding and measuring user preferences

The methodology our experts built allowed them to achieve two main objectives.

Understanding user preferences

This requires a comprehensive analysis that encompasses all smartphone camera uses (meaning types of portraits here), and the variety of conditions they take place in. Of course, each usage presents unique technological challenges.
Only by fully understanding each of these uses and the technical difficulties they pose could we simulate highly accurate test conditions.

Measuring user preferences

This is a critical component of this analysis. The scale should represent the perceived quality of each type of portrait according to an individual or a test group.

Conducting this analysis with a large group and creating a test scenario that closely resembles real-world usage was key to the success of this study.

Analyzing portrait preferences in relation to skin-tone rendering

We took on the question of skin tone rendering quality perception in smartphone portrait photography.
This required:

● gathering a panel of people representative of all skin tones and also diverse in cultural background, age group, and gender;
● developing a relevant shooting plan.

A shooting plan includes a set of diverse photos used to identify users’ portrait preferences. We have been designing such shooting plans for many years, and quantitative surveys per region are very useful for understanding the preferred use cases. In the context of this study, a “relevant” plan is one that covers most use cases.

The shooting plan: A key component to anchor users’ insights in real life

The technical framework

The photographer

The shots had to be taken by professional photographers. Why? Because we needed perfectly comparable shots. The challenge lay in accurately capturing the same scenes with different devices.

The devices

Our goal was not to compare devices or evaluate their performance but rather to gain insight into user preferences regarding the top offerings on the market.

Therefore, we used the most advanced smartphones, the flagship devices, available at the time of the study, as well as a professional digital camera that allowed us to look at what the future of smartphone photography may hold.

⚠️ For each scene and type of portrait, four different devices were used: three smartphones and a professional camera.
These allow us to identify the main trends in preferences, each of which can then be studied in more depth.

Scenes and stages

The location

The shooting plan was tailored to the specifics of the geographical area under study. In our case, it was Paris. Our professional photographers curated a plan that embodied the look, feel and essence of a European way of life. Our ultimate goal was to capture images that would resonate with the European panel.

💡This type of study can be replicated anywhere in the world, with local photographers capturing and showcasing their respective regions’ unique customs and traditions.

The stages

A stage refers to a combination of:
● places
● lighting conditions
● framing (scene composition)
● number and position of respondents
A total of 180 stages were shot, with one, two, three and four models in each.

The scenes

A scene is a specific combination of a stage and models placed within it. The model is the variable between different scenes within the same stage.

[Image sets: Skin Tone Set 1, Skin Tone Set 2, and Skin Tone Set 3]

Our shooting plan was comprehensive, covering all types of scenes. They were partitioned in the following way (the number represents the number of scenes shot for each condition):

[Chart: Scene distribution per condition]
 

“We understand that certain test conditions, especially night scenes, can be more challenging than others. That’s why we included a large variety of conditions during testing.
We also enriched the shooting plan with lab scenes, which feature models in front of a white background, under neutral and consistent lighting. This controlled environment allowed us to focus only on the rendering of the portrait and its reception by the panel, without the scene, light or conditions affecting the results.”

Hervé Macudzinski, Image Science Director, DXOMARK

 

The light conditions

We shot a total of 1,620 pictures in HDR, SDR and backlit conditions, partitioned as shown in the following chart:

● backlit (very challenging light conditions)
● SDR or Standard Dynamic Range (limited range of brightness and colors)
● HDR or High Dynamic Range (high range of brightness and colors)

Survey respondents

The respondents

We put together a panel of European people representing all skin tones as well as a variety of cultural backgrounds, genders, and ages. A total of 123 people participated in the survey, with 83 models/respondents photographed in 405 scenes, 30 professional photographers, and 10 DXOMARK image quality experts, making this one of the largest studies of its kind.

Both genders were almost equally represented, with the panel made up of:

● 52% women
● 48% men

Every adult age group was included as well:

● 18 to 30 years old (25%)
● 30 to 40 years old (29%)
● 40 to 50 years old (19%)
● 50 to 60 years old (15%)
● Over 60 years old (12%)

To select and classify the respondents based on their skin tone, we used the Fitzpatrick scale, a tool used to determine how different skin types react to the sun. The scale organizes skin types into 6 distinct categories, all included in our study:

● Type I, “light” (12%)
● Type II, “fair”  (30%)
● Type III, “medium” (23%)
● Type IV, “olive” (2%)
● Type V, “brown” (8%)
● Type VI, “deep” (4%)

About the Fitzpatrick scale
Originally developed for medical purposes, the Fitzpatrick scale is commonly used for classifying skin across various industries. Although it is a robust and widely used tool, numerous scientific publications have pointed out its limitations. For example, it does not take into account the difference between skin type and skin tone. The paper “Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color,” for instance, highlights the need for a more comprehensive measure of skin color. The Fitzpatrick scale also relies on self-reporting, which can bring with it unintentional bias.

[Image: Fitzpatrick scale]
“We adopted the Fitzpatrick scale as it is widely used to classify skin tones. However, it may not provide enough granularity for medium to dark skin tones, resulting in potential classification inaccuracies. Therefore, we are exploring the possibility of using alternative scales such as the Monk scale or the Individual Typology Angle (ITA) in future rounds.”
Benoît Pochon, Image Science Director,  DXOMARK

Unveiling the Satisfaction Index

The DXOMARK Satisfaction Index is a numerical representation of user preferences. It combines two distinct aspects that we measured in this study: one measures preference and the other measures rejection. By combining these two results, we were able not only to gather insights about user preferences but also to quantify them.

Participants took all the tests under controlled viewing conditions and were unaware of the devices that were used to capture the images.

The details of how we created the DXOMARK Satisfaction Index are presented below.

The two-step user survey

Step 1: The best picture

First, participants were presented with only two images, side-by-side, and asked to select the one image that they preferred based on its overall image quality.

Pairwise comparison

In order to quantify the perceived difference in quality our experts used a Just Objectionable Difference (JOD) scale.

Pairwise Comparison and JOD scale
This method allowed our experts to rank the pictures by crossing the results of several comparisons. For example, two images were considered to be 1 JOD apart if 75% of observers found that one had better quality than the other.
Ranking pictures according to a JOD scale requires the use of advanced statistical techniques, in order to ensure enough comparisons are made to converge to a reliable estimate.
Those techniques also allow experts to acquire more information. For instance, a confidence interval for the JOD scores of a given group can be determined using a statistical method known as bootstrapping, which relies on repeated resampling of a set of data in order to accurately estimate a result for a particular group.
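To make the bootstrapping idea concrete, here is a minimal sketch in Python. The JOD conversion shown follows the stated convention that a 75% preference corresponds to 1 JOD (via inverse-normal scaling, a common approach); the data and the exact statistics DXOMARK uses are not published here, so everything below is illustrative:

```python
# A sketch of a bootstrap confidence interval for a JOD estimate.
# The observed choices are invented; the norm.ppf(p)/norm.ppf(0.75) scaling
# is one common way to make p = 0.75 map to exactly 1 JOD.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
choices = rng.random(200) < 0.8            # hypothetical: True = image A preferred

def jod(prefs: np.ndarray) -> float:
    p = np.clip(prefs.mean(), 0.01, 0.99)  # avoid infinities at 0 or 1
    return norm.ppf(p) / norm.ppf(0.75)    # scale so that p = 0.75 -> 1 JOD

# Repeated resampling of the same data set ("bootstrapping")
boot = [jod(rng.choice(choices, size=choices.size, replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"JOD = {jod(choices):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```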

At the end of the survey, for each participant, every image taken with each of our four cameras was given a preference score. We could then aggregate those results to estimate a preference score for groups of participants.

Step 2: Social media-worthy picture

In the second part, participants were presented with four images of the same scene taken with different cameras (one with a professional camera and three with smartphones), and then asked to identify which image they would not want to post on social media, effectively, which image or images they would reject. The goal of this question was to refine our preference analysis.

Relative rejection

 

Why social media?
We wanted to measure acceptability. Our question was: “What do respondents consider to be the minimum acceptable level of quality?” In that regard, social media provides a criterion that speaks to everyone yet remains significant. If we had simply asked people which photo they would keep, they might have chosen a lower-quality option because of their sentimental attachment to it.
“We needed a criterion for evaluating the quality of photos. Social media suitability proved to be the ideal one.”

Hervé Macudzinski, Image Science Director,  DXOMARK

 

Calculating the Satisfaction Index

After conducting this two-step survey, we collected the following information for each scene:

● the overall rejection rate for all respondents
● the rejection rate for the group being studied
● the JOD scale

With the collected data, we used the formula below to calculate the Satisfaction Index score per picture, and we scaled the result so that it would fit within a range of 0 to 100.

[Formula: Satisfaction Index]

Taking into account the confidence interval for each portion of the index, we could also determine a confidence interval for the overall Satisfaction Index.
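DXOMARK’s actual formula appears only as an image above and is not reproduced here. Purely to illustrate how the listed ingredients (a JOD-based preference score and rejection rates) could combine into a 0-to-100 score, here is a hypothetical sketch; the normalization bounds and the penalty term are invented:

```python
# Hypothetical illustration only -- NOT DXOMARK's published formula.
# It simply maps a JOD score into 0..1 and penalizes it by the rejection rate.
import numpy as np

def satisfaction_index(jod_score: float, rejection_rate: float,
                       jod_min: float = -3.0, jod_max: float = 3.0) -> float:
    preference = (jod_score - jod_min) / (jod_max - jod_min)  # normalize to 0..1
    index = 100.0 * preference * (1.0 - rejection_rate)       # penalize rejection
    return float(np.clip(index, 0.0, 100.0))

print(satisfaction_index(jod_score=1.2, rejection_rate=0.15))  # -> 59.5
```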

A Satisfaction Index below 70/100 meant that the photo might not meet user expectations.

Conversely, examining the characteristics of photos with scores above 70 helped us identify the prevailing preferences within a given use case. This understanding allowed us to establish the commonalities between satisfying renderings, as well as their technical characteristics.

Why use a Satisfaction Index?

The Satisfaction Index is a homogeneous and comparable score that can be used to compare participants, groups of participants or scenes.

By examining the Satisfaction Index for each individual, we can gain valuable insight into their ability to identify image quality issues and their preferences compared to other participants or groups, and the trends in their preferences.

Closing thoughts and considerations

This study analyzes the impact of shooting conditions and camera choice on image quality perception, but it also answers other related questions that provide us with consumer insights:

● Are smartphone users currently satisfied with the quality of their portraits?
● Do all high-end smartphones provide the same level of satisfaction in this respect?
● Do professional photographers produce more satisfying images overall compared to smartphones?
● If so, which gaps in quality can non-photographer users perceive?
● Does age influence quality perception?
● Does gender affect quality perception?
● Does the perceived quality of pictures remain the same regardless of the model’s skin tone?
● What other dimensions influence the respondents’ choices and their perception of image quality?

The complete study by DXOMARK Insights highlights the technical parameters that are key to ensuring high-quality portraits and user satisfaction. To manufacturers, that is vital information.

Stay tuned for the first results, coming soon!

Speakerphones: See which ones performed best in our tests

When DXOMARK introduced its Laptop testing protocol in June 2023, the main focus was to assess the laptop’s performance in two specific use cases: videoconferencing and multimedia playback. During our laptop tests, we identified several pain points in users’ audio experience. We also recognized that many consumers were using speakerphones both at work and at home to enhance their laptop’s audio capabilities, whether to facilitate meetings with multiple people or to just listen to music or watch movies. So as a complement to our laptop testing, we decided to run audio evaluations on several speakerphones to see how well they performed in videoconferencing and multimedia playback situations.

Testing methodology

Our methodology for testing speakerphones was the same as the one we use for testing laptop audio performance. We combined objective measurements and perceptual evaluations of all audio attributes (timbre, spatial, dynamics, volume, artifacts), performed in our anechoic laboratory as well as in simulated and real-life use cases and environments. Because the testing protocol was the same, the speakerphone scores are directly comparable with the laptop audio scores. Read more about the details of our laptop testing protocol.

All speakerphones were tested using the same laptop, a Lenovo Thinkpad X1 (Gen 10), which runs on Windows. The selection of speakerphones was based largely on availability and popularity.

Summary of the results

We evaluated nine speakerphones using our laptop audio protocol, and the results are in!

[Chart: Speakerphone ranking]

Two speakerphones came out on top: the Jabra Speak2 75 and the EPOS Expand 40.

The Jabra Speak2 75 earned the top spot in the ranking, with improvements in all audio aspects over the Jabra Speak 750. The Jabra Speak2 75 had the best performance in multimedia playback and videocall capture, making it an excellent choice not only for office or personal video calls but also for listening to some music in between meetings.

Just behind the Jabra Speak2 75 was the EPOS Expand 40. EPOS, which was previously part of Sennheiser Communications, managed an excellent tuning of the capture performance, especially in meetings with multiple people taking part.

Both the Microsoft Modern USB-C Speaker (3rd) and the Poly Sync20 (4th) deserve an honorable mention: they are among the most affordable speakerphones tested, yet both performed admirably, especially on the capture side for the Microsoft device and the playback side for the Poly device.

Detailed results

Jabra Speak2 75

The Speak2 75 performed very well in multimedia playback, proving useful for music and movies thanks to a warm tonal balance and good clarity. Alongside its playback performance, its microphones produced a very pleasant sonority in general. Voices recorded in our test had nice timbre and sounded natural; the only downside was the monophonic nature of the recordings, which made localizability a bit trickier. The device efficiently reduced background noise, leading to a satisfying signal-to-noise ratio (SNR), although the digital signal processing (DSP) was less efficient when dealing with reverberant acoustics. An all-round good performance for this speakerphone.

Pros

  • Microphone has an excellent sonority
  • Excellent multimedia playback performance
  • Very efficient background noise reduction

Cons

  • Monophonic recording makes it hard to identify and localize voices
  • SNR not as efficient in reverberating acoustical environments

 


EPOS Expand 40

The Expand 40 had a nice, if somewhat dark, sonority during playback. Although not necessarily the best choice for multimedia consumption, voices sounded natural and warm. Capture performance was a bit less satisfying, due to voices sounding muffled and recordings being monophonic. However, the speakerphone functioned particularly well in duplex speech situations, and its handling of artifacts was satisfactory in both playback and capture.

Pros

  • Great duplex capabilities during video call and meetings
  • Good multimedia playback performance
  • Very few artifacts

Cons

  • Recordings sound muffled
  • Monophonic recording makes it hard to localize voices

 


Microsoft Modern USB-C Speaker

The Microsoft speakerphone provided a good experience overall, especially in capture, where it had a pleasant recording timbre, excellent directivity in the meeting use case, and a satisfying performance in duplex speech situations. The sonority in playback is warm and voices sound good, although they can lack a bit of brilliance and tend to be impaired by inconsistent noise reduction and/or envelope rendition. The device was also affected by several artifacts in playback and capture alike, but its overall performance was nonetheless satisfactory.

Pros

  • Good recording timbre
  • Excellent directivity in meeting use case
  • Great overall performance in duplex speech situations

Cons

  • Inconsistent envelope rendition and/or noise reduction during capture
  • Artifacts impact the quality of playback and recording

Poly Sync20

The Poly Sync20 performed very well across the board. Its playback capabilities made for pleasant and intelligible voice rendition and a warm tonal balance, enhanced by a strong presence of low end, which made it especially good for multimedia use. Timbre rendition through its microphones was not as good, as voices tended to sound a bit aggressive, but it had very effective background noise reduction and a directivity well suited for meetings.

The device’s microphones did not handle duplex speech particularly well, with quieter voices easily affected by gating.

Pros

  • Very good performance in video call and multimedia playback
  • Microphone provides excellent directivity for meetings
  • SNR is excellent in all capture use cases

Cons

  • Captured voices tend to sound aggressive
  • Duplex speech is affected by strong gating

Beyerdynamic Space

The Beyerdynamic Space has strong playback capabilities, thanks notably to its pleasant and intelligible voice rendition. The speakerphone is also well suited for listening to music, delivering a warm tonal balance and snappy dynamics, especially at loud volumes. You can also use it to watch movies, if you don’t mind the monophonic rendition or the low midrange sounding a bit muddy at times. But all in all, the playback experience is great, and devoid of artifacts.

As for capture, the device seems promising but leaves room for improvement: audio processing is very efficient at reducing background noise, resulting in great SNR; but although the dynamic envelope is still realistic in most use cases, gating can occur on quieter voices due to background noise reduction going a bit overboard. This becomes especially problematic during duplex speech, as volume drops and other artifacts greatly impair intelligibility. Furthermore, the tonal balance delivered by the microphones lacks both bass and treble to some extent.

Pros

  • Very good performance in multimedia playback
  • Great SNR in capture

Cons

  • Captured voices sound thin (poor timbre rendition)
  • Strong gating in duplex speech situations

Logitech Speakerphone P710e

The Logitech speakerphone underperformed in our tests, especially in capture, where unpleasant timbre rendered voices muddy and unclear. SNR was great, but the DSP was not efficient enough when it came to reverberant acoustics and duplex speech. As for the playback experience, it provided relatively good sonority for video calls and meetings, but not enough for a good multimedia experience.

Pros

  • Great all-round SNR
  • Few to no artifacts

Cons

  • Poor recording timbre
  • Many artifacts during duplex speech

Yamaha YVC-200

The YVC-200 offers good vocal clarity through its microphone as well as great envelope rendition and intelligibility. Its timbre performance in playback was equally good in video call and meeting use cases, and capture directivity was suitable for both scenarios.

However, background noise was very intrusive in all use cases, to the point where video calls and meetings were less pleasant on the receiving end. The device did not handle duplex speech very well, as both voices were barely intelligible. Finally, music and movies did not sound good on this speakerphone.

Pros

  • Very good intelligibility (voices clear in capture)
  • Good performance in meeting playback

Cons

  • Intrusive background noise in all capture use cases
  • Unintelligible duplex speech
  • Unsuitable for multimedia purposes

Jabra Speak 510

The Jabra 510 does fairly well with video calls, but less so with meetings. While its rendition of speech through its microphone is pleasant and intelligible (thanks to satisfying dynamics), it does not properly capture all voices equally around it, as voices on the sides and to the rear of the speakerphone often sound quieter and more distant than they should. Conversely, this property enhances the experience in one-to-one video calls, as background noise reduction is quite effective, resulting in very good SNR. Duplex speech is nearly impossible, however, as both voices are unintelligible when speaking at the same time. Furthermore, its playback timbre is unsuitable for multimedia content, and distortion is perceptible when listening to music.

Pros

  • Great overall SNR in capture
  • Decent capture dynamics

Cons

  • Not suited for multimedia purposes
  • Very strong gating in duplex use cases

Jabra Speak 750

The Jabra 750 did not perform very well in any of our use cases. Although its microphone directivity was well suited for meetings, its timbre and dynamics performance during capture left much to be desired, with muddy, unclear, and compressed sound that was prone to distortion. The same capture issues were present in video calls, and additionally, microphone directivity was less well adapted. However, background noise reduction was quite effective, and the device handled duplex speech fairly well.

Playback performance was not much better, whether for video calls, meetings, or multimedia usage.

Pros

  • Great SNR across all use cases
  • Excellent directivity in meeting use case

Cons

  • Subpar timbre performance across all capture and playback use cases
  • Not suited for multimedia use

 

DXOMARK Decodes: A brief look at smartphone charging and compatibility

So you just unpacked your new smartphone from its box, and as is common these days, it didn’t come with a charger, but it came with a USB-C cable, or maybe not even the cable. You begin to wonder whether you can safely use a charger from your other smartphones, or whether you should buy one from the smartphone brand or just buy an off-brand high-watt charger that advertises super-fast charging times.

Sound familiar? Although you might have lots of cables and chargers that are compatible, you’ll find that smartphone charging compatibility is far more complex than just plugging in any charger and cable that fits. In this article, we’ll try to shed some light on this topic.

Charging compatibility made headlines recently because of a new European Union law, going into effect in the fall of 2024, that requires electronic devices sold in the EU to adopt the USB-C charging cable and port. But the law goes beyond that.

Manufacturers will also have to provide relevant information about charging performance, for example, power requirements and fast-charging support. This information will make it easy to work out whether an existing charger will work with your new device and will help you select a compatible new charger if required. The law aims to limit the need to buy new chargers and to allow for the reuse of existing ones, thus cutting down on waste.

How chargers and smartphones interact

If every smartphone uses the same connector, will charging be the same for every smartphone? Not quite: even though the connector is the same, the way a device charges varies widely because of the many different charging protocols that exist.

What is a charging protocol? A charging protocol is a set of rules and specifications, chosen either by OEMs or by industry organizations like the USB Implementers Forum (USB-IF), that manages the energy delivery from the power source to the rechargeable device. The charging protocol normally specifies the voltage and the current to be adopted during the charging process, as well as the safety features and the communication between devices. Charging protocols are often standardized by industry organizations to ensure compatibility between devices and chargers.

USB Power Delivery (USB-PD) is a universal charging protocol standardized by the USB-IF. There are four versions of the USB-PD standard; the latest, version 3.1 (adopted in 2021), offers fast-charging capability all the way up to 240W (currently only for laptops). The same charging protocol can work over different connector types, for example, USB Type-C, Apple Lightning, and others. The advantage is that standard protocols offer broader compatibility.

However, some manufacturers have implemented their own proprietary charging protocols, which allow them to reach high charging power with their own devices, but not with devices from other brands. The EU ruling will require that manufacturers using proprietary charging protocols also support the universal USB Power Delivery protocol for better cross-compatibility.
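As a simplified, hypothetical model of the fallback behavior just described, the sketch below picks the highest-power protocol that both a charger and a phone support; the protocol names and wattages are invented for illustration and are not real product specifications:

```python
# A toy model of charger/phone protocol negotiation. When the two sides share
# no proprietary protocol, the common USB-PD entry wins by default.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    protocols: dict[str, int]  # protocol name -> max supported power (W)

def negotiate(charger: Device, phone: Device) -> tuple[str, int]:
    common = set(charger.protocols) & set(phone.protocols)
    # Choose the shared protocol that allows the highest power on both sides.
    best = max(common, key=lambda p: min(charger.protocols[p], phone.protocols[p]))
    return best, min(charger.protocols[best], phone.protocols[best])

charger = Device("BrandX 150W charger", {"BrandX-Super": 150, "USB-PD": 45})
phone = Device("BrandY phone", {"BrandY-Flash": 120, "USB-PD": 45})
print(negotiate(charger, phone))  # -> ('USB-PD', 45): the shared fallback
```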

Complexities of smartphone charging

Smartphone battery charging is not a linear process in which the charging power remains at a constant level from 0% to 100%.

The following graph illustrates how the charging power evolves during the charging process, along with the battery percentage displayed on the screen. The graph also marks 80% of full charge capacity, the moment 100% is shown on the display, and the actual full charge. The dark line shows the varying charging power, which peaks just under 42 W in the first few minutes. Charging at peak power heats the battery quickly, so it is reasonable that the peak lasts only a few minutes. The key point is that the battery keeps charging, but at progressively reduced power; in the graph below we still see a few peaks, but they stay between the 30 W and 40 W levels.

Each manufacturer decides at which point to display a battery charged at 100%, which typically happens when the battery is nearing, but has not yet reached, a full charge.

The following chart shows how two superchargers behave. The 150 W supercharger nearly reached its maximum, and 240 W was achieved only once during the charge, at the very beginning. This shows that even fast chargers usually peak at their advertised power for only a moment before dropping to lower levels to protect the device and battery.
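Because the power level varies so much over a charge, a single wattage number says little on its own; what curves like these are good for is deriving the total energy delivered. As a minimal sketch (with made-up sample points, not measurements from the graphs above), the curve can be numerically integrated with the trapezoidal rule:

```python
# Hypothetical (time in minutes, power in watts) samples along a charging curve
samples = [(0, 5), (2, 42), (5, 38), (10, 33), (20, 25), (35, 15), (50, 8), (65, 3)]

# Trapezoidal integration of power over time: W * min -> Wh
energy_wh = sum(
    (t2 - t1) * (p1 + p2) / 2.0
    for (t1, p1), (t2, p2) in zip(samples, samples[1:])
) / 60.0

print(f"Energy delivered over the charge: {energy_wh:.1f} Wh")
```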

 

DXOMARK provides this detailed graph in every smartphone battery test result.

Charger compatibility

Earlier this year we tested the cross-compatibility of chargers between various brands and published our results in an article. In summary, our findings showed that a proprietary 240 W fast charger could achieve that level of charging with the smartphone it was specifically made for (if only for a brief moment, as seen earlier). But when used with another brand's phone, the charging power might reach only about 45 W over the USB-PD protocol, as both devices fall back to the best protocol they have in common.

Testing the iPhone 15 Pro Max

Since Apple recently introduced the USB Type-C port with the iPhone 15 series, we were eager to test non-Apple chargers and cables with the iPhone 15 and iPhone 15 Pro Max. We ran tests to verify the compatibility of the new iPhone 15 series with multiple chargers and cables from other phone brands and third parties, alongside original Apple ones.

Our results showed that the latest iPhone 15 series was compatible with most third-party and other-brand chargers, with no significant difference in charging power. The iPhone 15 Pro Max drew a maximum of around 28 W to 30 W during charging, while the iPhone 15 drew around 22 W.

For example, in one test, we charged the iPhone 15 Pro Max with a 30W iPad adapter, using USB-C cables from other phone brands. Our results showed that the charging power was constant at 27.6W.

We saw some slight variation when using the same Apple cable with chargers from different brands or third parties, as seen in the following graph. What stood out was that the iPhone 15 Pro Max reached a peak charging power of 29.4 W with a 45 W third-party charger, a bit higher than the 27.6 W reached with the Apple cable-and-charger combination.

It's also interesting to note that a superfast 160 W charger did not yield higher readings than the 45 W charger. We also noticed that the iPhone 15 series achieved a slightly higher peak charging power with certain Android chargers supporting USB-PD 3.0 than with an original iPhone charger.

The iPhone 15 Pro Max’s charging performance with Apple brand as well as off-brand chargers.
The iPhone 15’s charging performance with Apple brand as well as off-brand chargers.

We also tested the iPhone 15 Pro Max's charging compatibility with third-party cables paired with chargers from the same brand. The iPhone 15 Pro Max was able to charge with most brands.

This illustrates the complexity of the overall charging process. All the components involved in charging (the adapter, the cable, and the phone) have to recognize one another and work together for the charge to reach its highest possible power.

Conclusion

As you can see, there's much to consider when choosing the right charger and cable for your smartphone. In the case of the iPhone 15 Pro Max, the device did not surpass 30 W with any charger, even one capable of supercharging. The safest bet is to stick with the smartphone manufacturer's charger and cable, but that doesn't mean third-party chargers should be dismissed entirely. As we also saw with the iPhone 15 Pro Max, some third-party chargers delivered slightly more charging power to the device than Apple's own.

The move to standardize on USB-C and USB Power Delivery is a big step in the right direction, even if not every charger and cable will supply the same charging power to every smartphone. As our tests showed, even proprietary super-fast chargers peak at their highest power only briefly, even with their own devices; this is to be expected. Nevertheless, the requirement to support a common, universal protocol will ensure that regardless of which phone you have or which cables and chargers you buy, you will be able to charge your phone safely.

We hope that this article has given you a better understanding of the complexities involved in smartphone battery charging. Watch the video on cross charging:

Be sure to check out more content in our Decodes series, where we try to explain concepts or dispel myths related to technology, particularly in smartphones and other consumer electronics.

DXOMARK Decodes: How a large sensor in a smartphone influences image quality
https://www.dxomark.com/dxomark-decodes-how-a-large-sensor-in-a-smartphone-affects-image-quality/ (Wed, 29 Nov 2023)
Earlier this year, DXOMARK introduced its Decodes series, which aims to explain concepts or dispel myths related to technology, particularly in smartphones and other consumer electronics.

As an experienced scientific tester of smartphones, DXOMARK is in a unique position to measure how well a device’s cameras, audio, display, and battery perform through its rigorous protocols that assess the user’s experience. What we see every day is that tuning is the critical step in finding the right balance between software and hardware interaction so that the user can benefit from all of the device’s features. Striking that optimization balance is often an art that involves strategic choices on the part of the phone manufacturer.

Earlier this year we focused on the importance of software tuning to the display experience. In this latest installment of our series, we’ll try to decode how a large main camera sensor affects the image quality of the device’s photos.

Some smartphone makers, such as Honor, Oppo, Vivo, and Xiaomi, have recently touted their use of big sensors, the so-called “1-inch sensor,” in their smartphone cameras. But does a big sensor alone help to improve image quality?

In this article, we’ll touch on some of the advantages as well as the disadvantages of having a large sensor in your smartphone. We’ll also take a look at the shooting scenarios that benefit from a large sensor, which could explain the trend toward larger sensors in smartphone cameras.

First, what we call a 1-inch sensor does not actually measure one inch. The term dates back to the days when video was shot through camera tubes that measured one inch in outside diameter and had an image diagonal of about 16 mm. A "1-inch" sensor should therefore be read as the equivalent of a 1-inch video camera tube, and manufacturers' phone specifications often express sensor sizes as a fraction of that tube size.

Some manufacturers, however, do not specify the smartphone camera's sensor size, but if the number of pixels and the pixel size are known, the sensor area can be estimated as pixel count × pixel pitch². For example, for a 48 MP camera with a pixel size of 1.22 µm: 48,000,000 × 1.22 µm × 1.22 µm ≈ 71.44 mm² (sensor area).
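As a quick sketch, the same arithmetic can be generalized: given the pixel count and pixel pitch, you can estimate the sensor's area, its dimensions for an assumed aspect ratio, its diagonal, and from that the optical "type" designation (using the convention above that a 1-inch type corresponds to a roughly 16 mm image diagonal). The 4:3 aspect ratio is an assumption typical of smartphone main sensors:

```python
import math

def sensor_stats(megapixels: float, pixel_pitch_um: float, aspect=(4, 3)):
    """Estimate sensor area (mm^2), diagonal (mm), and optical 'type' from pixel specs."""
    area_mm2 = megapixels * 1e6 * (pixel_pitch_um * 1e-3) ** 2
    w_ratio, h_ratio = aspect
    height_mm = math.sqrt(area_mm2 * h_ratio / w_ratio)  # w * h = area, w / h = ratio
    width_mm = area_mm2 / height_mm
    diagonal_mm = math.hypot(width_mm, height_mm)
    return area_mm2, diagonal_mm, diagonal_mm / 16.0  # 1-inch type ~ 16 mm diagonal

area, diag, fmt = sensor_stats(48, 1.22)
print(f"area {area:.1f} mm^2, diagonal {diag:.1f} mm, ~1/{1 / fmt:.1f}-inch type")
# -> area 71.4 mm^2, diagonal 12.2 mm, ~1/1.3-inch type (assuming a 4:3 sensor)
```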

A comparison of camera sensor sizes in smartphones. While the iris is not the part of the eye that is sensitive to light, we include the average area of the human iris in this illustration only as a general reference for comparison.

Another thing to keep in mind is that a large sensor does not necessarily mean a higher pixel count or better resolution. While a large sensor has the space to accommodate more pixels, a small sensor could contain the same number of pixels as a large one. What affects image quality is the size of the pixels on the sensor, and that is a company's strategic choice between desired resolution and image quality: the smaller the pixel, the lower the signal-to-noise ratio, and therefore the lower the resulting quality.

The bigger sensor on the right has bigger pixels and is capable of capturing more light, even though both sensors contain the same number of pixels.

Over the past few years, smartphone manufacturers have been increasing sensor sizes to improve sensitivity to light.

For example, Apple doubled the light-sensitive surface between the iPhone 12 Pro Max and the iPhone 15 Pro Max, which corresponds to a gain of one stop. Among the flagship smartphone cameras released in 2023, the Oppo Find X6 Pro has double the sensor area of the iPhone 15 Pro Max, which again corresponds to one stop.

Some smartphone makers have been moving toward bigger sensors.

Main benefits of a large sensor

Many smartphone users often express their frustration with their low-light pictures and videos. A key area for improvement for all smartphone cameras continues to be low-light performance.

In the days of film photography, the only option photographers had to combat low light was to extend the capture time, at the risk of motion blur if the subject moved or the camera was handheld. In digital photography, the answer lies in reaching a good balance between texture (detail) and noise. Small sensors often produce a lower texture/noise ratio in the final images; big sensors are a great tool for improving this ratio.

Low-light photography and videography are where we generally see the biggest gap in quality between small sensors and big sensors. Big sensors have a larger surface exposed to light, allowing them to capture more photons. More photons mean more signal, even in low light. Hence, all things being equal, increasing the size of the sensor enables better low-light performance. Night photography is a situation where having a large sensor can help with image quality.
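A back-of-the-envelope model makes the "more photons, more signal" point concrete. Photon arrival is governed by Poisson statistics, so a pixel's signal-to-noise ratio grows with the square root of the photons it collects; doubling the collecting area (one stop) therefore buys about 3 dB of SNR. The photon counts below are illustrative only, not measurements:

```python
import math

def shot_noise_snr_db(photons: float) -> float:
    """SNR of a Poisson-limited pixel: signal N over noise sqrt(N) = sqrt(N)."""
    return 20 * math.log10(math.sqrt(photons))

small_sensor_photons = 400   # hypothetical photons per pixel in low light
large_sensor_photons = 800   # twice the area -> twice the photons (one stop)

print(f"small sensor: {shot_noise_snr_db(small_sensor_photons):.1f} dB")  # 26.0 dB
print(f"large sensor: {shot_noise_snr_db(large_sensor_photons):.1f} dB")  # 29.0 dB
```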

The following examples are a photographer’s renderings of night scenes.


Potential limitations

On the other hand, while a big sensor paired with a lens with a large aperture could be ideal for portraits and low-light images, it might not be ideal for photos where a large depth of field is needed, such as landscape photography or group portraits.

Modern smartphones use bright lenses, with apertures often below f/2.0. Combined with big sensors, this produces a shallow depth of field, which is the range of distance in which a person or an object is in focus. The deeper the depth of field, the easier it is for the phone to find the right focus plane; a shallow depth of field can therefore affect autofocus stability.

An image taken with a smartphone that has a 1-inch sensor.
The depth of field is shallow in this photo because the person in the background is out of focus.

Beyond the specs

There are clever ways to optimize a device to get better image results. Tuning is the art of making the best of all the image-processing algorithms available, which are found in the Image Signal Processing (ISP) chip or in the camera app code. It is very much like cooking: it takes a lot of trial and error to find the perfect recipe, and there is one recipe for each type of scene.

There are several levers that camera makers can use to optimize image results. Among the top ones:

  • Increase the sensor size: This directly increases the photon flow and thus the signal-to-noise ratio. The sensor sizes among the flagships released in 2023 are within a range of one stop. As we saw, the drawback of larger sensors is a smaller depth of field.
  • Increase lens aperture.  This is characterized by a low f-number. It directly increases the photon flow and thus the signal-to-noise ratio. As of today, all smartphones already have a very large aperture (small f-number). A large aperture also comes with the drawback of a smaller depth of field.
  • Use optical image stabilization (OIS): OIS compensates for hand motion, which allows for a longer exposure time and therefore a higher signal-to-noise ratio. OIS offers a potential gain of between 0 and 2 stops (in other words, up to quadrupling the exposure time without motion blur from hand shake). On the downside, the longer exposures it enables can introduce motion blur when motion is present in the scene.
  • Integrate computational photography solutions, such as image fusion: This approach uses information from several frames, which increases the signal-to-noise ratio, just like a longer exposure time. For example, to limit motion blur, the camera can reconstruct an image with sharp detail and low noise by shooting several frames at a short exposure time instead of taking one single long-exposure frame. The potential drawback is an artifact called "ghosting," which can become visible if the frames are not merged properly, especially when there is motion in the scene. The best smartphones today use up to roughly a dozen frames, which corresponds to a potential gain of more than 3 stops (see the sketch after this list)!
  • Improve denoising algorithms: This solution consists of designing complex image processing algorithms to digitally clean the signal from the noise. Over the past 20 years, huge improvements have been made in this area, making this solution the largest provider of virtual gain in sensitivity. Of course, all this depends on the tuning to control the potential side effects. Indeed, strong denoising can come at the cost of reducing the level of details in the image or introducing unpleasant artifacts.
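As promised above, here is a toy sketch of the image-fusion idea, assuming a flat gray test patch and purely Gaussian per-frame noise: averaging N aligned short exposures is roughly equivalent to an N-times-longer exposure (log2 N stops of virtual gain) and shrinks the noise amplitude by sqrt(N). Real pipelines add frame alignment and ghost rejection, which this example omits.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
scene = np.full((100, 100), 50.0)   # flat gray patch: the "true" signal
noise_sigma = 10.0                  # per-frame noise level (arbitrary units)
n_frames = 12

frames = [scene + rng.normal(0.0, noise_sigma, scene.shape) for _ in range(n_frames)]
fused = np.mean(frames, axis=0)     # naive fusion: plain temporal average

print(f"single-frame noise std:   {frames[0].std():.2f}")          # ~10.0
print(f"fused noise std:          {fused.std():.2f}")              # ~10 / sqrt(12) = 2.9
print(f"exposure-equivalent gain: {np.log2(n_frames):.1f} stops")  # log2(12) = 3.6
```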

Can tuning make the difference?

Having great hardware helps with the tuning because it provides better information to the camera app. With a big sensor, the signal is better in terms of noise and texture, but the shallow depth of field makes the autofocus harder to tune. Tuning teams therefore face contradictory objectives: will they put more effort into focus performance or into the texture/noise ratio?

For example, attempting to improve both texture and noise in low-light conditions is particularly challenging because they usually work against each other.

Let’s look at the effect of tuning and sensor size on low-light performance. To do so, we selected ultra-premium (>800€) smartphones launched since 2022. The key image quality attributes are texture and noise. Here, we narrowed the analysis to the scores in low-light conditions.

We observed that small sensors generally hit a ceiling in the texture/noise compromise. On the graph below, which includes the scores of some of the best devices from 2022 to 2023, small-sensor devices reach at most about 104 points for the noise score and 97 points for the texture score.

Big sensors, on the other hand, can beat these numbers for both texture and noise. To date, two 1-inch-sensor smartphones are the best devices in low light: they took the two top spots in both the low-light texture and noise categories under our DXOMARK Camera v5 test protocol. However, using a big sensor does not remove the need to invest in algorithms and tuning; some other devices with large sensors score low on both texture and noise, with scores under 80.

DXOMARK Low-light texture and noise scores
We see a limit to the texture-noise compromise achievable by small sensors, while large sensors can reach higher levels of performance.

Conclusion

Big sensors in smartphone cameras can provide significant advantages in some situations, such as low-light photography. But they are not the only answer: smart, properly tuned software can sometimes compensate for a smaller sensor, and all things being equal, a big sensor brings challenges of its own, such as depth-of-field management. Proper tuning is more essential than ever if a device is to benefit fully from powerful hardware. So when it comes to image quality, a bigger sensor is not always better!

Be sure to follow all the latest camera test results on dxomark.com.

Skin Tone: The next challenge in smartphone portrait photography
https://www.dxomark.com/skin-tone-smartphone/ (Wed, 08 Nov 2023)
The quality of photos and videos has always been a key differentiating factor for smartphone manufacturers. As more and more people become avid photographers and use their phones exclusively, not in addition to a DSLR, the challenge is meeting their rising expectations.

According to a global DXOMARK survey conducted by YouGov in January 2022, portraits of friends and family are the most popular type of pictures people take. Getting the exposure right and effectively capturing skin tone is critical to a good portrait. So, what are the technical challenges to capturing skin tone rendering in smartphone photography?

Our image quality experts have immersed themselves in this still largely unexplored subject with the methodology and scientific rigor that characterize DXOMARK, in our ongoing quest to understand users and what today's technology enables for them.

How did smartphones replace classic digital cameras?

Cameras first appeared on cell phones 20 years ago, and a lot has changed since then. The smartphone has now become the primary camera in almost every household. In fact, smartphones are closing the image quality gap with dedicated digital cameras, and their image quality is a major selling point for manufacturers.

Focus on the power of selfies
The introduction of the front camera made it easy for users to take pictures of themselves. And the trend has only grown: in 2023, selfie use is at an all-time high, with 93 million selfies taken every day.[1] Selfies and their ubiquity have highlighted the importance of quality and rendering. Users are used to seeing photos of themselves, whether taken with a rear camera or a front camera.

Smartphone users’ favorite subject: friends and family

DXOMARK conducted a survey to identify users’ behavior and attitudes regarding additional features on their smartphones:

    • family, friends, and pets make up more than 50% of the subjects photographed;
    • on average, only 17% of respondents’ photos are travel-related, and life/food subjects make up an average of 9%.

Kodak's original motto in 1888 was "You press the button, we do the rest". The automatic camera mode of today's smartphones delivers on this promise. This feature helps cameras produce high-quality results by recognizing both the scene or subject to be photographed and the shooting conditions.

What are the key challenges of portrait photography?

Portrait photography is an art form that hinges on several key elements, with target exposure, contrast, and skin color being at the forefront!

Perception and memory colors

Memory colors are the colors we associate with familiar objects. When it comes to photography, preferences are often closely tied to memory: if people know the subject well, they have high expectations, which limits the number of acceptable renderings.

Skin tone is one of the biggest challenges in capturing color in portrait photography. Everyone has a clear idea of how people they know personally should “look” in a photo, in all kinds of lighting conditions.

Photographers are the first to judge whether a shot is satisfactory. In a non-professional setting, the person or people being photographed may also weigh in on how the photo should look; in the case of friends and family, the people in the scene matter a great deal.

Usually, the camera user is responsible for setting the preferences. It’s important to remember that preferences may differ between those who know the subject and those who don’t. The same applies to the scene in the frame – people’s perceptions can vary.

Capturing the moment

Exposure, color, and contrast are all important in accurately capturing skin tone, with exposure being the key player:

    • cameras need to capture motion, which means as much information (light) as possible in a very short time;
    • the great strength of a full-frame hybrid camera is its ability to capture the maximum amount of light in a matter of milliseconds.

As shown in these examples using three of today's leading devices, the renderings that cameras propose, and by extension user preferences, can vary widely.

 

The technical challenge of skin tones

Different skin tones bring different technical challenges in each scene. Each tone can make an image more or less contrasty and more complicated to expose correctly; consider, for instance, a group shot in HDR conditions.

In these pictures, we have several subjects in the same outdoor scene, taken with the Samsung Galaxy S23. It is interesting to note that the exposure strategy differs from photo to photo. Skin tones and clothing colors affect the statistics that feed the camera's algorithms and thus the camera's ability to deliver optimal images. In these examples, capturing skin tones may be more challenging because the signal is lower.

Skin tones have to be considered in the context of their environment – it’s a highly complex set of parameters, with very subtle variations.

How is it even more of a challenge for smartphones?

The sensor of a top-of-the-line smartphone is about 10 times smaller than that of a reflex camera, so the challenge is completely different. Smartphones have to do everything automatically for the user. They also struggle in low light because of their small sensors, must freeze the moment, avoid clipping on faces, and still offer wide dynamic range.

Balancing light and image processing

For smartphones, the goal is to capture more light spatially. One way manufacturers have done this is by designing ever-brighter lenses (up to f/1.4) while keeping the camera module compact.

Image processing is another tool used to improve the performance of smartphone cameras. It allows them to capture light temporally. How? By systematically capturing several images and merging them into one image!

Smartphone photography: a potential unlocked?

One of the features that makes the photographer’s job easier is the AI used on smartphones. How? By taking over some or all of the settings. AI analyzes the scene to be photographed and applies the appropriate settings.

However, there is a trade-off between building the perfect camera and the time and money available to the engineers developing it. This can lead to biased AI training datasets, and those biases can have a direct impact on the final images.

To correct this problem, the AI needs to be trained on a comprehensive dataset. This should include:

    • a variety of use cases: settings, shooting conditions, skin tones;
    • a variety of annotations: feedback aggregation method and point of view;
    • several examples of expected renderings: according to quality criteria and user preferences.

In short

The smartphone has to become the professional camera and the professional photographer all in one.

The issue of user satisfaction: how to address it?

Another important consideration is how satisfied users are when shooting in different conditions. The YouGov-DXOMARK survey reveals that satisfaction varies with different lighting conditions, whether outdoors, indoors, in low light, or at night.

In fact, while 78% of users are satisfied with their outdoor photos taken in full light, as shown in the bar chart above, that still means roughly 1 in 5 photos is rated unsatisfactory. Smartphone devices and camera technology have a long way to go to meet user expectations.

With all the challenges discussed above, this dissatisfaction could increase when it comes to portraits of family and friends.

Smartphone portrait photography: closing thoughts and considerations

All in all, portraits are the most popular and valued type of photography. But along with sports and wildlife photography, they’re also one of the most technically challenging for our smartphone cameras.

Skin tone in portrait photography has been the focus of several manufacturers and the subject of numerous scientific publications. At DXOMARK, we decided to pursue this topic because it is key to understanding users, which is at the heart of everything we do. It all starts with them and their usage. We’re helping manufacturers better understand this subject and user preferences through in-depth qualitative research.

Other questions we’ve been looking at include: How can we measure preferences accurately and understand what drives them? Could cultural differences have a potential impact? And what about the influence of age and environment? DXOMARK’s Image Quality experts have addressed these issues in a new extensive qualitative study, focused on identifying the key pain points in smartphone portrait photography. It involved:

● a shooting schedule representative of the real world: numerous common scenes and scenarios;
● a panel representing every skin tone;
● a data-based approach using JOD (Just Objectionable Difference).[2]
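As a rough illustration of the JOD convention described in footnote 2 (and not DXOMARK's exact scaling procedure), pairwise preference fractions can be mapped onto a JOD scale through the inverse normal distribution, normalized so that a 75% preference corresponds to 1 JOD:

```python
from statistics import NormalDist

def jod_distance(preference_fraction: float) -> float:
    """Map a pairwise preference fraction (0.5-1.0) to an approximate JOD distance."""
    z = NormalDist().inv_cdf(preference_fraction)  # Thurstonian-style scaling
    return z / NormalDist().inv_cdf(0.75)          # normalize: 75% -> 1 JOD

for p in (0.50, 0.75, 0.90):
    print(f"{p:.0%} of observers prefer A -> A is {jod_distance(p):.2f} JOD better")
# 50% -> 0.00 JOD, 75% -> 1.00 JOD, 90% -> 1.90 JOD
```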


[1] Source: Photutorial, "How many pictures are there (2023): Statistics, trends, and forecasts" https://photutorial.com/photos-statistics/

[2] JOD is a way to determine the level of perceptual differences between two images before an objection is raised. For example, two images are said to be 1 JOD apart if 75% of observers find that one image is better in terms of image quality than the other.


A closer look at DXOMARK's laptop protocol
https://www.dxomark.com/dxomark-laptop-test-protocol/ (Wed, 06 Sep 2023)
The early 2020s brought significant changes to our lives, including an increased reliance on laptops for work, education, and entertainment. For example, video calls have become a vital part of many people’s daily routines. However, not all laptops provide the same audio and video quality during these calls.

As another example, laptops have morphed into personal entertainment centers, allowing us to enjoy listening to music and watching movies. But here, too, discrepancies arise. Some laptops offer high-quality audio, while others leave us wanting more. Display color accuracy and brightness also vary, impacting enjoyment.

In this closer look, we will explore DXOMARK’s new laptop test protocol that is designed to thoroughly assess laptop video call and music & video performance. Join us as we delve into some of the intricacies of laptop performance in audio, camera, and display. Our aim is to provide you with valuable insights that will help you find a laptop that meets your requirements.

What we test

This protocol applies to any product intended to be used as a laptop. Multiple form factors are compatible with our tests and ranking, such as clamshell (most laptops), 360°, and 2-in-1 devices.

Testing philosophy

As with our other protocols, our testing philosophy for laptop evaluation is centered on how people use their laptops and the features that are most important to them. We research this information through our own formal surveys and focus groups, in-depth studies of customer preferences conducted by manufacturers, and interviews with imaging and sound professionals.

By knowing consumers' preferences and needs, we can identify the camera, audio, and display attributes that affect the user experience. This allows us to build a protocol that tests those use cases and attributes using scientific objective measurements and perceptual evaluations.

Use cases

For this initial version of our laptop audiovisual score, we selected two use cases that are representative of both professional and personal laptop use — Video Call and Music & Video.

  • Video Call focuses on the ability of the device to show faces clearly and with stability, and to capture and render voices in a manner that is pleasant and readily intelligible.
  • Music & Video focuses on the quality of the display for videos in all conditions and the quality of the audio playback for both music and videos.

Video Call
  • Test scenario: Using the laptop with its integrated webcam, speakers, microphone(s), and screen for one-to-one and group calls over a video conferencing app.
  • Usual place used: Office, home
  • Typical applications: Zoom, Microsoft Teams, Google Meet, Tencent VooV, FaceTime
  • Consumer pain points: Poor visibility of faces; low dynamic range (webcam); low intelligibility of voices; poor screen readability in backlit situations

Music & Video
  • Test scenario: Using the laptop with its integrated screen and speakers to watch videos or movies or to listen to music.
  • Usual place used: Home
  • Typical applications: YouTube, Netflix, Youku, Spotify, iTunes
  • Consumer pain points: Poor color fidelity; poor contrast; low audio immersiveness; poor high-volume audio performance; low display readability in lit environments

Test conditions

AUDIO

  • Volume: Low (50 dBA @ 1 m), Medium (60 dBA @ 1 m), High (70 dBA @ 1 m)
  • Content: Custom music tracks, custom voice tracks, selected movies
  • Apps: Capture: built-in camera app; Playback: built-in music/video player app; Duplex: Zoom

CAMERA

  • Lighting conditions: 5 to 1000 lux; D65, TL83, TL84, LED
  • Distances: 30 cm to 1.20 m
  • Charts: DXOMARK test charts
  • App: Built-in camera app

DISPLAY

  • Lighting conditions: Dark room
  • Screen brightness: Minimum, 50%, Maximum
  • Contents: Custom SDR video patterns, custom HDR10 video patterns
  • Apps: Built-in video player app

As we always evaluate objective measurements and perceptual evaluations in the context of an attribute, here are DXOMARK’s definitions of the attributes for the three laptop components that we test — audio, display, and camera.

Audio

For our video call use case, we look at audio capture, the handling of full-duplex situations, and audio playback. Quality audio capture provides good voice intelligibility, a good signal-to-noise ratio (SNR), satisfactory directivity, and good management of the sounds produced when the user interacts with the laptop (such as typing during a call).

Good laptop audio processing can also handle duplex situations (when more than one person is talking at the same time) without echoes or gating, in which necessary sounds are cut off and lost. For playback, we assess how faithfully sound sources are replicated, how intelligible voices are, how immersive the spatial reproduction is, how well artifacts are controlled, and whether directivity is satisfactory.

We evaluate the following audio attributes (also part of our smartphone Audio protocol):

List of Audio Sub-scores

Timbre

Timbre describes a device’s ability to render the correct frequency response according to the use case and users’ expectations, taking into account bass, midrange, and treble frequencies, as well as the balance among them. Good tonal balance typically consists of an even distribution of these frequencies according to the reference audio track or original material. We evaluate tonal balance at different volumes depending on the use case. In addition, we look for unwanted resonances and notches in each of the frequency regions as well as for extensions at low- and high-end frequencies.

Dynamics

Dynamics covers a device’s ability to render loudness variations and to convey punch as well as clear attack and bass precision. Sharp musical notes and voice plosives sound blurry and imprecise with loose dynamics rendering, which can hinder the listening experience and voice intelligibility. This is also the case with movies and games, where action segments can easily feel sloppy with improper dynamics rendering. As dynamics information is mostly carried by the envelope of the signal, not only does the attack for a given sound need to be clearly defined for notes to be distinct from each other, but sustain also needs to be rendered accurately to convey the original musical feeling.

In addition, we also assess the signal-to-noise ratio (SNR) in capture evaluation, as it is of the highest importance for good voice intelligibility.

Spatial

Spatial describes a device’s ability to render a virtual sound scene as realistically as possible. It includes perceived wideness and depth of the sound scene, left/right balance, and localizability of individual sources in a virtual sound field and their perceived distance. Good spatial conveys the feeling of immersion and makes for a better experience whether listening to music or watching movies.

We also evaluate capture directivity to assess the device’s ability to adapt the capture pattern to the test situation.

Volume

The volume attribute covers the loudness of both capture and playback (measured objectively), as well as the ability to render both quiet and loud sonic material without defects (evaluated both objectively and perceptually).

Artifacts

An artifact is any accidental or unwanted sound resulting from a device’s design or its tuning, although an artifact can also be caused by user interaction with the device, such as changing the volume level, play/pausing, typing on the keyboard, or simply handling it. Artifacts can also result from a device struggling to handle environmental constraints, such as wind noise during recording use cases.

We group artifacts into two main categories: temporal (e.g., pumping, clicks) and spectral (e.g., distortion, continuous noise, phasing).

Display

A laptop needs to provide users with good readability, no matter the lighting conditions. Its color rendition should be faithful in the SDR color space (and in the HDR color space for HDR-capable devices).

We evaluate the following display attributes (also part of our smartphone Display protocol):

List of Display Sub-scores

Color

From the end-user’s point of view, color rendering refers to how the device manages the hues of each particular color, either by exactly reproducing what’s coded in the file or by tweaking the results to achieve a given signature. For videos, we expect that devices will reproduce the artistic intent of the filmmaker as provided in the metadata. We evaluate the color performance for both SDR (Rec 709 color space) and HDR (BT-2020 color space) video content.

Brightness & Contrast

We evaluate minimum and maximum brightness to help us ascertain if a laptop can be used in low light and in bright, backlit environments.

We also evaluate a device’s brightness range, which gives crucial information about its readability under various kinds and levels of ambient lighting. A high maximum brightness allows a user to use the laptop in bright environments (outdoors, for example), and a low minimum brightness will ensure that user can set the brightness according to their preference in a dark environment.

We evaluate maximum contrast using a checkerboard pattern, which also lets us see how blooming impacts display performance.
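As a sketch of the underlying arithmetic (with hypothetical luminance readings, not measurements from our labs), the contrast ratio is simply the luminance of the white squares divided by that of the black squares:

```python
white_nits = 480.0   # hypothetical measured luminance of the white checkerboard squares
black_nits = 0.32    # hypothetical measured luminance of the black squares

contrast = white_nits / black_nits
print(f"Contrast ratio: {contrast:,.0f}:1")  # -> 1,500:1
```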

Tone mapping

We evaluate the electro-optical transfer function (EOTF), which represents the rendering of details in dark tones, midtones, and highlights. It should be as close as possible to that of the target reference screen but should adapt to bright lighting conditions to ensure that the content is still enjoyable.
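For readers who want to see what such target curves look like, here is a small sketch of two reference transfer functions: an idealized SDR gamma curve (gamma 2.4, as in BT.1886, simplified here with a zero black level and an assumed 100-nit reference peak) and the SMPTE ST 2084 "PQ" EOTF used by HDR10 content. A display's measured response can be compared against curves like these; the exact references and tolerances DXOMARK uses are not shown here.

```python
def sdr_eotf(signal: float, peak_nits: float = 100.0, gamma: float = 2.4) -> float:
    """Idealized SDR EOTF: normalized code value in [0, 1] -> luminance in nits."""
    return peak_nits * signal ** gamma

def pq_eotf(signal: float) -> float:
    """SMPTE ST 2084 (PQ) EOTF: normalized code value in [0, 1] -> nits (0-10000)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = signal ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

print(f"SDR, 50% code value -> {sdr_eotf(0.5):6.1f} nits")  # ~18.9 nits
print(f"PQ,  50% code value -> {pq_eotf(0.5):6.1f} nits")   # ~92 nits
```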

Uniformity

We evaluate the uniformity of the laptop display both at maximum and minimum brightness to assess any uniformity defects that would be noticeable to the end-user.

Reflectance & Angular

We use a spectrophotometer to evaluate spectral reflectance level on laptop displays when turned off. Additionally, we use a glossmeter to measure the reflectance profile — for example, how diffuse reflectance is. These two measurements are important indicators of laptop readability in bright lighting environments.


Camera

To provide a good end-user experience, a laptop’s built-in camera has to provide a stable image throughout the call, and keep faces in focus and well exposed even in challenging lighting conditions. Viewers should be able to follow facial expressions and mouth movements that are perfectly synchronized with the audio.

We evaluate the following camera attributes (also part of our DSLR Sensor and smartphone Camera and Selfie protocols):

List of Camera Sub-scores

Exposure

Exposure measures how well the camera adjusts to and captures the brightness of the subject and the background. It relates as much to the correct lighting level of the picture as to the resulting contrast. For this attribute, we also pay special attention to high dynamic range conditions, in which we check the ability of the camera to capture detail from the brightest to the darkest portions of a scene.

Color

The color attribute is a measure of how faithfully the camera reproduces color under a variety of lighting conditions and how pleasing its color rendering is to viewers. As with exposure, good color is important to nearly everyone. Pictures of people benefit greatly from natural and pleasant skin-tone representation.

Texture

The texture attribute focuses on how well the camera can preserve small details. This has become especially important because camera vendors have introduced noise reduction techniques that sometimes lower the amount of detail or add motion blur. For some applications, such as videoconferencing in low-bandwidth network conditions, the preservation of tiny details is not essential. But users using their webcam in high-end videoconferencing applications with decent bandwidth will appreciate a good texture performance score.

Noise

Texture and noise are two sides of the same coin: improving one often leads to degrading the other. The noise attribute indicates the amount of noise in the overall camera experience. Noise comes from the light of the scene itself, but also from the sensor and the electronics of the camera. In low light, the amount of noise in an image increases rapidly. Some cameras increase the integration time, but poor stability or post-processing can produce images with blurred rendering or loss of texture. Image overprocessing for noise reduction also tends to decrease detail and smooth out the texture of the image.

Artifacts

The artifacts attribute quantifies image defects not covered by the other attributes, caused by the camera's lens, sensor, or in-camera processing. These can range from straight lines that look curved to strange multi-colored areas that indicate failed demosaicing. In addition, lenses tend to be sharper at the center and softer at the edges, which we also measure as part of this sub-score. Other artifacts, such as ghosts or halo effects, can be a consequence of computational photography.

Focus

The focus attribute evaluates how well the camera keeps the subject in focus in varying light conditions and at multiple distances. Most laptops use a fixed-focus lens, though we expect to see autofocus cameras in laptops in the future. Our testing methodology applies to both fixed-focus and autofocus in all tested situations. When several people are at different distances from the camera, a lens design with a shallow depth of field implies that not all people will be in focus. We evaluate the camera’s ability to keep all faces sharp in such situations.

Test environments

Test environments are divided into two parts — lab scenes and natural or real scenes.

  • Location: Lab scenes in the Audio, Camera & Display labs; real scenes in meeting rooms, a living room, etc.
  • Main goal: Lab scenes provide repeatable conditions; real scenes cover real-life situations.
  • Evaluations: Objective and perceptual for lab scenes; perceptual for real scenes.
  • Scenarios: Single call and dual call.

Lab setups: repeatable procedures and controlled environments

Video call lab

This setup tests the quality of a video-call capture from a single-user perspective for audio and video in multiple lighting conditions.

Items measured and/or evaluated:

  • Camera
    • Color (color checker)
    • Face exposure (realistic mannequin)
    • Face details (realistic mannequin)
    • Exposure time (Timing box)
  • Audio
    • Voice capture
    • Background noise handling

 

Test conditions

  • Distance: 80 cm
  • Light conditions: D65, LED, TL83, TL84, Tungsten, Mixed; 1 to 1000 lux

 

Equipment Used

  • Image
    • Realistic mannequin
    • Color checker
    • Timing box
    • Automated Lighting system
  • Audio
    • Genelec 8010
    • Genelec 8030 (x2)

 

HDR Portrait setup (camera and audio)

This setup tests the quality of video call capture with two users in front of the computer for audio and video in multiple backlit conditions.

Items measured and/or evaluated:

  • Camera
    • Face exposure
    • Face details
    • Highlight recovery (entropy)
    • White balance
    • Skin tones
    • Noise & Texture
    • Artifacts
  • Audio
    • Voice capture (including spatial)
    • Background noise handling

 

Test conditions

  • Distance: By framing (FoV dependent)
  • Light conditions: 20 lux A, 100 lux TL84, 1000 lux D65

 

Equipment Used

  • Image
    • Realistic mannequin (x2)
    • HDR Chart
    • Automated Lighting system
  • Audio
    • Genelec 8010 (x2)
    • Genelec 8030 (x2)

 

Depth of field (camera only)

This tests the laptop camera's ability to keep multiple users in front of the camera in focus during a video call at various distances. It involves moving one of the mannequins to the foreground or background to see whether the face remains in focus.

Attributes evaluated

  • Camera
    •  Focus / Depth of Field
    • Noise
    • Artifacts

Test conditions

  • Distance: By framing (FoV-dependent)
  • Light conditions: D65 1000 lux & 20 lux SME A

Equipment used

  • Image
    • Realistic mannequin (x2)
    • Automated lighting system

 

DXOMARK Camera charts

DXOMARK Chart

 

DXOMARK chart

Attributes evaluated and/or measured:
Camera
  • Texture
  • Color
  • Noise
  • Artifacts
Test conditions
  • Distance: By framing (FoV-dependent)
  •  Lighting: D65 1000 lux, TL84 5-1000 lux, LED 1-500 lux
Equipment used
  • DXOMARK chart
  • Automated lighting system
Dead Leaves

Dead leaves

Attributes measured
Camera
  • Texture
  • Noise
Test conditions
  • Distance: By framing (FoV-dependent)
  • Lighting: D65 1000 lux, TL84 5-1000 lux, LED 1-500 lux
Equipment used
  • Dead Leaves chart
  • Automated lighting system

Read more about this measurement in this scientific paper: https://corp.dxomark.com/wp-content/uploads/2017/11/Dead_Leaves_Model_EI2010.pdf
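To give an idea of what this target contains, here is a toy generator of a dead-leaves-style pattern: occluding disks whose radii follow a power-law distribution, producing a texture with detail at all scales and edges at all orientations. This is a simplified sketch of the model described in the paper above, not the calibrated chart DXOMARK actually uses.

```python
import numpy as np

def dead_leaves(size=256, n_disks=2000, r_min=2.0, r_max=60.0, seed=0):
    """Render a toy dead-leaves pattern: occluding disks, radius density ~ r^-3."""
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    # Inverse-transform sampling of f(r) proportional to r^-3 on [r_min, r_max]
    u = rng.random(n_disks)
    radii = 1.0 / np.sqrt((1 - u) / r_min**2 + u / r_max**2)
    for r in radii:
        cx, cy = rng.random(2) * size       # random disk center
        gray = rng.random()                 # each "leaf" gets a random gray level
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = gray
    return img

chart = dead_leaves()
print(chart.shape, float(chart.min()), float(chart.max()))
```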

Focus range

Focus range chart

 

Attributes measured
Camera
  • Focus / Depth of field
Test conditions
  •  Distance: By framing (FoV-dependent)
  • Lighting: D65 1000 Lux, TL84 5-1000 Lux, LED 1-500 Lux
Equipment used
  • Focus chart
  • Automated lighting system
Visual noise

Visual Noise Chart

Attributes measured
Camera
  • Noise
Test conditions
  • Distance: By framing (FoV-dependent)
  •  Lighting: D65 1000 Lux, TL84 5-1000 Lux, LED 1-500 Lux
Equipment used:
  • Dots chart
  • Automated lighting system
Dots

DOTS chart

Attributes measured:
Camera
  • Resolution
  • Distortion
Test conditions
  •  Distance: By framing (FoV-dependent)
  •  Lighting: 1000 lux D65
Equipment used:
  • Dots chart
  • Automated lighting system

 

 

Presentation of Display-testing equipment

Laptop display tests are conducted in low-light conditions only; we test color and EOTF for both SDR and HDR video. The reflectance and gloss measurements taken in low light are sufficient to indicate the display's performance in brightly lit environments.

Display color analyzer

  • Attributes measured
    •  Display — Readability
      • Brightness
    • Display — SDR and HDR
      •  Color gamut and rendering
      • EOTF
  • Equipment used
    • Konica Minolta CA410

 

Spectrophotometer

  • Attributes measured
    • Display — Readability
      • Reflectance
  • Equipment used
    • Konica Minolta CM-25d

 

Glossmeter

  • Attributes measured
    •  Display – Readability
      • Gloss & Haze
      • Reflectance profile
  • Equipment used
    •  Rhopoint Glossmeter

 

Presentation of recording lab setups

Video-call audio lab

This setup aims to test the quality of the voices and sounds captured during a video call when multiple people are in the same room. The audio is captured using a popular videoconferencing application.

  • Attributes evaluated
    • Audio
      • Duplex
  • Equipment used
    • Head Acoustics – 3PASS
    • Yamaha HS7 (background noise)
    • Genelec 8010 (voices)

 

Semi-anechoic room

This semi-anechoic room setup allows for sound to be captured and measured in optimal audio conditions, free of any reverberations and echoes.

  • Attributes measured
    • Audio
      • Frequency response
      • THD+N (distortion)
      • Directivity
      • Volume
  • Equipment used
    • Genelec 8361
    • Earthworks M23R
    • Rotating table

 

Laptop scoring architecture

To better understand consumer laptop preferences and usage, we recently conducted a survey with YouGov that showed laptops are used mostly for web browsing (76%), office work (59%), streaming video (44%), and listening to music (35%).

Our laptop overall score combines the equally weighted scores of both the Video Call and Music & Video use cases, which in turn are based on the use case and feature scores for camera, audio, and display.

Use case scores

Camera performance has the highest weight in our calculation of the Video Call score, as it is a major pain point for users right now. (We expect this feature to improve a lot in the coming years, as laptop makers are putting considerable effort into bringing the quality of built-in cameras closer to that of smartphones and external webcams.) Audio comes next, as a video call cannot happen without it! We evaluate voice playback and capture, but also "duplex" situations, in which more than one person is speaking at a time, which can cause significant intelligibility problems. Finally, we assess the display's readability, as many laptops still do not handle bright situations correctly.

The Music & Video score comprises Display and Audio subscores. Display testing focuses on the correct reproduction of colors and tones for both SDR and HDR video and movie content. Although SDR accounts for most content viewed on a laptop, video streaming platforms are providing more and more HDR content. We evaluate laptops both with and without HDR panels using HDR content, and we apply a penalty to the HDR score of any laptop whose panel is not HDR-capable, as that can limit certain usages. We also evaluate audio as part of this use case.

Feature scores

Besides the use-case scores, we calculate general feature scores for audio, camera, and display. These scores represent the overall performance of the laptop for each individual audiovisual feature, independent of the use case. In practice, the camera feature score is the same as the Video Call camera score; for the display feature, we reuse the Music & Video display score; and for audio, we combine the scores from the Video Call and Music & Video use cases.

Score structure

laptop score structure
We use geometric means to combine all scores according to the weights given in the table above.

We scale the camera scores to have the same impact as audio and display scores in order to keep each feature score relevant for a direct evaluation of perceived quality in our use cases.
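As a minimal sketch of the combination step (with placeholder scores and weights, not DXOMARK's published ones), a weighted geometric mean can be computed as follows. Compared with an arithmetic mean, a geometric mean punishes a single very weak sub-score more heavily, which suits "weakest link" experiences like a video call:

```python
import math

def weighted_geometric_mean(scores: dict, weights: dict) -> float:
    """Combine sub-scores with a weighted geometric mean."""
    total_weight = sum(weights.values())
    weighted_log_sum = sum(weights[k] * math.log(scores[k]) for k in scores)
    return math.exp(weighted_log_sum / total_weight)

video_call_scores = {"camera": 70.0, "audio": 85.0, "display": 90.0}  # hypothetical
video_call_weights = {"camera": 0.5, "audio": 0.3, "display": 0.2}    # hypothetical

score = weighted_geometric_mean(video_call_scores, video_call_weights)
print(f"Video Call score: {score:.1f}")  # -> 78.0
```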

Conclusion

Testing a laptop takes about one workweek in different laboratories with up to 20 lab setups.

We hope this article has given you a more detailed idea about some of the scientific equipment and methods we use to test the most important characteristics of your laptop’s video call and music & video performance.

Introducing the DXOMARK Laptop protocol
https://www.dxomark.com/introducing-the-dxomark-laptop-test-protocol/ (Wed, 28 Jun 2023)
DXOMARK has more than two decades of experience testing image quality with its groundbreaking and user-centric camera image quality suite for smartphones, which was followed more recently by extensive audio, display and battery tests.

The application of scientific rigor and in-the-field evaluation is what makes DXOMARK's scores and rankings such reliable indicators of the user experience. So now, in a natural extension of what it already does with smartphones (as well as cameras, lenses, and speakers), DXOMARK engineers have turned their expertise to laptops. They have created a testing suite specifically designed to evaluate the user experience when making video calls, listening to music, and watching videos on laptops. Why laptops? Because these devices, whether used at work or at home, have successfully maintained their status as the critical bridge between work and play.

A changed landscape

One of the factors that led to the development of this protocol was the outbreak in early 2020 of the Covid-19 pandemic, which drastically changed lives and the way people work virtually overnight. All at once, videoconferencing wasn’t just an optional and occasional activity: Office workers around the world found themselves attending online meetings on a daily basis. The use of collaborative software like Zoom, Microsoft Teams, and Slack has nearly doubled from 2018 to 2022.1

With lockdowns and bans on traveling, families and friends had to resort to video calling to be able to virtually celebrate special events or to simply keep in touch. At the time, laptops weren’t known for having high-quality built-in cameras, mainly because they were so rarely used. With demand for external webcams outstripping supply, people were left using their laptop’s built-in camera, and the image quality varied greatly among the devices.

Even though quarantines and restrictions have eased, the business landscape today remains changed. Going into the office is now optional or occasional for many workers, meaning that videoconferencing is still a principal way for professionals to collaborate, discuss, and share.

The pandemic showed just how important the laptop has become, not only as a necessary work-from-home tool, but also a personal device for watching videos or listening to music while working. As a result, consumers have started paying more attention to the quality of their laptops’ performance in these areas.

Elements of the Laptop protocol

In light of this new work environment, DXOMARK engineers from the Camera, Audio and Display teams got together to create a protocol that could help manufacturers and end-users alike make good choices when developing or purchasing a laptop.

To better understand consumer laptop preferences and usage, we recently conducted a survey with YouGov that showed laptops are used mostly for web browsing (76%), office work (59%), streaming video (44%), and listening to music (35%).2

Laptops can do a myriad of things, but since web browsing is highly reliant on internet connectivity and office work does not require high-end specifications for camera, display or audio, we chose to focus this laptop protocol on two use cases: video calling, and music and video playback.

Video calls rely on the camera, audio and display systems of the laptop to work together seamlessly, so that a correctly exposed and focused image is properly displayed, while the speaker’s voice is intelligible and coherent. For example, everyone wants to be able to easily watch the content on their laptop screens, and most screens are perfectly legible in indoor conditions. But how easily can you see screen content in bright conditions, especially if your screen is highly reflective? How well can you discern someone’s facial features when they are backlit?

Music & Video use case assesses the display and audio playback functionalities of a laptop when watching videos and movies, or when listening to music.

The protocol also evaluates the elements of these use cases separately. In Camera, the video call quality evaluation assesses attributes such as face exposure in different light conditions, skin-tone rendering, and motion blur in situations ranging from a formal one-person office meeting to an informal group call from the sofa.

In Audio, we evaluate capture, playback, and duplex quality during video calls, as well as playback for listening to music and watching videos. For video calls, these evaluations cover the main audio quality attributes (timbre, dynamics, spatial, volume, and artifacts) along with video-call specifics such as duplex audio. The same attributes also apply to audio playback. People want to clearly hear words emanating from the speakers or be able to pick out specific instruments or riffs in a piece of music. This is pretty easy to do if you're in a quiet room by yourself. But how easily can you understand what someone is saying if there's a lot of background noise and crosstalk in your location, their location, or both?

In Display, we evaluate the quality of the video rendering during a video call or while watching a movie. Readability is central to our test, with two elements: how well the brightness range suits the lighting environment, and a qualitative and quantitative evaluation of reflections. Can you enjoy watching a movie on your laptop in a dimly lit bedroom or outside on a terrace? This readability test is relevant to both the video call and multimedia use cases. We also test color rendering, brightness, and gamma when playing both SDR and HDR content.
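
The interplay between panel brightness and reflections can be made concrete with a standard back-of-the-envelope model (not DXOMARK’s actual measurement): ambient light reflected by the screen raises the effective black level and flattens contrast. Assuming a diffuse (Lambertian) surface, the reflected luminance is E × R / π for illuminance E in lux and reflectance R:

```python
import math

def ambient_contrast(l_white: float, l_black: float,
                     ambient_lux: float, reflectance: float) -> float:
    """Effective contrast ratio once reflected ambient light is added in.

    Assumes a diffuse screen surface, so the reflected luminance
    (in cd/m^2) is ambient_lux * reflectance / pi.
    """
    l_reflected = ambient_lux * reflectance / math.pi
    return (l_white + l_reflected) / (l_black + l_reflected)

# A hypothetical 500-nit panel with 0.5-nit blacks and 2% reflectance:
print(ambient_contrast(500, 0.5, ambient_lux=100, reflectance=0.02))     # dim room
print(ambient_contrast(500, 0.5, ambient_lux=20_000, reflectance=0.02))  # sunny terrace
```

In the dim room this panel keeps a contrast ratio in the hundreds, while on the terrace the very same panel drops to single digits, which is why readability has to be tested across lighting environments.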

With the building blocks of the protocol set, the testing was ready to begin.

Our testing philosophy

First, our laptop testing applies the typical DXOMARK “recipe,” which combines measurements in our labs with real-life tests. Second, we test under rigorously scientific conditions both inside and outside the lab, using industry-leading equipment and following internationally recognized procedures for our objective and perceptual measurements.

The following examples show how we test laptops. For a more in-depth look at the whole procedure, read the Closer Look article.

An example of a lab setup used to test a laptop’s handling of color and detail, as well as voice capture. Each measurement takes the temporal aspect into account.
An example of video call testing, focusing on the laptop’s handling of face representation, movement reproduction, and image stability, as well as audio capture intelligibility.

How is the laptop score built?

The diagram below illustrates the features we test for each use case and shows the overlap between the two, notably the display attributes of reflectance and color fidelity, and audio playback quality. As you can see, the video calling use case evaluates many of the same attributes that DXOMARK tests in its smartphone camera protocol, and it adds audio capture attributes. By contrast, the Music & Video use case includes neither camera testing nor audio capture evaluation; instead, it relies on the attributes we use to score smartphone displays.

Both use cases are accorded nearly equal weight when determining an overall laptop quality score. But as with our other protocols, understanding how well a given laptop performs in each use case is important. If you rarely use your laptop for video calls but watch a lot of movies on it, you will want to pay more attention to your device’s multimedia capabilities. If you’re spending many hours each week in calls with colleagues or clients, then your device’s handling of HDR10 video content may not matter so much to you.
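
To make the weighting concrete, here is a minimal sketch of how an overall score could be aggregated from per-use-case scores. The weights and scores below are hypothetical placeholders; this is not DXOMARK’s published formula:

```python
# Hypothetical near-equal weights; not DXOMARK's published formula.
USE_CASE_WEIGHTS = {"video_call": 0.5, "music_video": 0.5}

def overall_score(sub_scores: dict[str, float]) -> float:
    """Weighted average of per-use-case scores."""
    return sum(USE_CASE_WEIGHTS[uc] * s for uc, s in sub_scores.items())

print(overall_score({"video_call": 132.0, "music_video": 128.0}))  # 130.0
```

Because the two weights are nearly equal, a device cannot reach the top of the overall ranking on the strength of a single use case, which is consistent with the advice above: look first at the use case that matches your own habits.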

What types of laptop products do we cover?

We are debuting the Laptop protocol with a selection of 14 laptops from 11 different brands that are currently available on the market. We focused our first round of tests on laptops at the higher end of the market because these devices tend to include the latest technologies and have the potential to offer a best-in-class experience in video calling as well as in music and video streaming.

We plan to vary the kinds of laptops we test, and our choices will consider various operating systems as well as the evolution of the latest technology.

Ranking results and insights

We observed far greater differences in performance among laptops than we are accustomed to seeing among smartphones.

For example, a laptop is equipped with a built-in front-facing camera, just like a smartphone. In most cases, however, the laptop’s single camera performed worse than a smartphone’s front camera, particularly in face exposure, color, clarity, tone, sharpness, and detail, all parameters that matter when videoconferencing.

Laptop displays also bring far more diversity than smartphone displays, notably in the type of panel offered. For example, while a matte or glossy panel does not directly affect the score, its implementation can affect readability, which is a key element in our protocol.

When it comes to the video experience, HDR panels remain a niche feature, even on many of today’s top products. Even among devices equipped with HDR panels, we observed disparities because they were not always tuned to exploit the panels’ full potential. This was particularly true of Windows laptops, which showed strong clipping in the highlights of HDR videos at maximum brightness, limiting the user experience. Conversely, some laptops without HDR panels were still able to decode basic HDR formats thanks to tone-mapping adaptation.
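
Tone-mapping adaptation can be illustrated with the extended Reinhard operator, one classic textbook way to compress an HDR luminance range into what an SDR panel can show while rolling off highlights instead of clipping them. This is a generic example, not the algorithm any particular laptop uses:

```python
import numpy as np

def reinhard_extended(l_in: np.ndarray, l_max: float) -> np.ndarray:
    """Extended Reinhard curve: maps [0, l_max] into [0, 1], compressing
    highlights smoothly so they are not hard-clipped."""
    return l_in * (1.0 + l_in / (l_max ** 2)) / (1.0 + l_in)

# Scene luminance normalized so 1.0 = diffuse white; 10.0 is a bright highlight.
hdr = np.array([0.1, 0.5, 1.0, 4.0, 10.0])
sdr = reinhard_extended(hdr, l_max=hdr.max())
print(np.round(sdr, 3))  # all values now fall within [0, 1]
```

A well-tuned rolloff of this kind preserves highlight detail, whereas the hard clipping described above simply discards it.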

In the world of laptop performance, it is rare to find a device that stands out solely for its audio performance, since most manufacturers face significant tradeoffs in speaker and microphone placement, driven by form-factor and weight constraints. That said, our results revealed fewer disparities in video call quality among the 14 devices we tested: video call scores were relatively high across the board, and every device passed.

A more detailed look at the results shows that Apple laptops led the ranking, delivering excellent performance across both the Video call and Music & Video use cases. In Video call, the two tested MacBooks showed accurate exposure even in difficult conditions, as well as great intelligibility in both capture and playback. Their playback and display performance also put them at the top of the Music & Video rankings.

Video call ranking
Music & Video ranking

Among the distinctive Windows-based performers, the Lenovo ThinkPad X1 delivered a great overall experience in both use cases, reaching top-three positions in Camera and Display, while showing only average performance in the Audio tests despite strong hardware capabilities.

The Microsoft Surface Pro 9 and Surface Pro 9 5G managed very good performances, particularly in video call, where they scored best among all the Windows laptops we tested. Both devices’ overall scores, however, were held back by their Display performance.

The Asus Zenbook 14X OLED performed well in Audio, even managing to beat the MacBook, but it showed an average performance in Display, where it lacked readability, and in Camera, where it showed insufficient face exposure in challenging situations such as backlit scenes.

The remaining laptops provided a good experience in simple conditions but showed difficulties in more challenging ones.

What can you find on dxomark.com?

You’ll find all the laptop results and rankings on dxomark.com, presented in the same manner as the other devices we test. A Product Review gives you a snapshot of the laptop, with its specifications, scores, and ranking. For readers who want to dive deeper into the measurements and comparisons, the Product Review contains a link to the test results, which provide a curated selection of evaluations, measurements, and comparisons from the full report.

Ranking page
Product review
Test results

These are just the key highlights from our report. Full performance evaluations are available upon request; please contact us to find out how to obtain a full report.

For an inside look at the intricate laboratories where we test and all that goes into evaluating laptops for video calls as well as music and video, click on the How We Test tab on dxomark.com.


1    Source(s): “Collaboration software market revenues from 2015 to 2026 (MUSD)”, Apps Run The World & Statista, September 23, 2022.
2    YouGov RealTime survey conducted on behalf of DXOMARK in Q2 2023, among a minimum sample of 1,000 people per country, representative of the national population aged 18 and over (France, Germany, China, USA) using the quota method.
