Cambridge Institute of Technology, KR Puram, Karnataka, India
* Corresponding author
Cambridge Institute of Technology, KR Puram, Karnataka, India
Cambridge Institute of Technology, KR Puram, Karnataka, India
REVA University, Karnataka, India

Abstract

This research paper explores the development and evaluation of non-reference image quality metrics specifically tailored for AI-generated images created from task-specific prompts. Given the unique challenges posed by such images, traditional metrics often fall short in assessing their perceptual quality and alignment with the provided prompts. This study introduces a novel approach that integrates multi-granularity similarity measurements and task-specific prompts to evaluate both perceptual and alignment quality. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed metrics, offering a new standard for assessing AI-generated images.

Introduction

AI-generated images (AIGIs) are becoming increasingly popular, with applications ranging from art creation to practical uses like data augmentation. However, the quality of these images can vary significantly, necessitating robust evaluation metrics. Traditional image quality assessment (IQA) methods often rely on reference images, which are not available for AIGIs. Non-reference image quality metrics (NR-IQMs) are crucial for assessing these images effectively.

The emergence of artificial intelligence (AI) in image generation has revolutionized the way we interpret and evaluate visual content. As AI technology continues to evolve, the necessity for effective evaluation metrics becomes increasingly critical. Traditional reference-based image quality metrics, while useful, often fall short in situations where ground truth images are unavailable or impractical for comparison.

This limitation has prompted researchers to explore non-reference metrics, aiming to assess the quality of AI-generated images more effectively. By delving into the nuances of these novel approaches, we can better understand their potential to provide insights into the fidelity, realism, and overall aesthetic value of generated content. This study rigorously evaluates various non-reference image quality metrics, establishing a framework to analyze their effectiveness in gauging the quality of AI-generated images in a rapidly evolving digital landscape. Current NR-IQMs are not fully equipped to handle the unique challenges posed by AIGIs, particularly regarding perceptual quality and alignment with task-specific prompts, so a novel approach is needed to address these limitations.

Overview of AI-Generated Images and the Importance of Image Quality Metrics

In the rapidly evolving landscape of artificial intelligence, AI-generated images have emerged as a transformative force across various sectors, influencing art, entertainment, and even urban planning. The quality of these images is paramount for ensuring they meet user expectations and application standards. As the reliance on AI-generated visuals increases, so does the necessity for robust image quality metrics.

In the realm of image quality assessment, non-reference metrics play a crucial role by enabling the evaluation of image quality without the need for a pristine reference image. This is particularly significant in applications such as video surveillance, object tracking, and situations where high-quality reference images are not available or practical, as highlighted in the ongoing development of algorithms for reconstructing background images from cluttered scenes [1].

Unlike traditional metrics that rely on comparison with a reference, non-reference metrics utilize inherent characteristics of the image itself, making them essential for real-time applications and AI-generated content. As artificial intelligence increasingly permeates image generation, the importance of robust non-reference metrics becomes even more pronounced; they must effectively capture the nuances of quality that might otherwise remain unmeasured. This advancement not only enhances the evaluation process but also supports the drive toward more objective assessment in an increasingly automated visual landscape [2].

Understanding Non-Reference Image Quality Metrics

Assessing non-reference image quality metrics requires a nuanced understanding of how these metrics operate in the absence of standard benchmarks. Unlike traditional methods that rely on reference images for comparison, non-reference metrics evaluate the quality of images based solely on the images themselves, necessitating a more sophisticated approach [3]. This is particularly crucial for AI-generated images, which may not conform to established quality norms. Existing evaluation methods often fail to provide a comprehensive view of system performance, obscuring critical shortcomings [4], [5]. As a result, effective analysis must focus on the specific components contributing to perceived quality, thereby illuminating paths for refinement.

Furthermore, the growing need for accountability in AI technologies underscores the importance of a detailed examination of these metrics. Solutions such as those proposed by Pandora demonstrate the potential for hybrid approaches that integrate human insights with algorithmic evaluation, allowing for a more thorough diagnostic process of image quality [6].

These metrics serve as essential tools for evaluating the aesthetic and functional aspects of images produced by algorithms. For instance, the assessment of green spaces in urban areas, as highlighted in the study on Mumbai, underscores the importance of visual quality in geographic evaluations; such assessments often utilize AI-generated imagery to analyze changes in urban landscapes over time [7].

Furthermore, advancements in Point Cloud Quality Assessment illustrate that without effective quality metrics, the utility of AI in complex applications could be significantly diminished [8]. Ultimately, establishing comprehensive metrics is crucial for enhancing the reliability and performance of AI-generated imagery.

Literature Survey

In recent years, the advent of artificial intelligence (AI) has revolutionized many creative fields, leading to the emergence of AI-generated images (AIGIs). With this transformation comes the crucial need for effective quality assessment methods tailored to these new forms of digital content. Non-Reference Image Quality Metrics (NR-IQMs) have shown promise in quality evaluation, but significant gaps remain, particularly in their application to AIGIs. This section surveys existing NR-IQM techniques such as BRISQUE, NIQE, and ILNIQE, analyzes their limitations in assessing AIGIs, and discusses current research gaps concerning quality metrics tailored to these images.

Non-reference image quality metrics are designed to assess image quality without requiring a pristine reference image. They serve as essential tools for evaluating a multitude of image processing scenarios, especially in cases where the original image is unavailable. Among the various NR-IQM techniques, BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator), NIQE (Natural Image Quality Evaluator), and ILNIQE (Integrated Local Natural Image Quality Evaluator) stand out.

1. BRISQUE operates on the assumption that natural images exhibit predictable natural scene statistics. By modeling locally normalized luminance and contrast coefficients and measuring deviation from these statistical norms, BRISQUE can effectively evaluate the perceptual quality of images. It generates scores that correlate well with human visual perception, making it a popular choice in numerous applications.

2. NIQE takes a different approach by focusing on the distribution of features extracted from natural scenes. Rather than relying solely on statistical information, NIQE evaluates image quality by comparing the feature distributions of the test image against those of a corpus of natural images. This method is not dependent on the presence of a reference image, and its design allows it to perform consistently across various image types.

3. ILNIQE improves upon NIQE by incorporating local features into the quality assessment process. It segments the image and evaluates the quality of each local region independently before averaging these assessments. The local assessment offers more granularity, allowing ILNIQE to capture subtleties in image quality that other models may overlook.
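For orientation, the sketch below scores an image with these three metrics using the open-source pyiqa toolbox; the package choice and the random stand-in tensor are illustrative assumptions, not part of the original study, and any NR-IQA library exposing these metrics would serve equally well.

```python
# Minimal sketch: scoring an image with the classical NR-IQMs above.
# Assumes the third-party `pyiqa` toolbox (pip install pyiqa).
import torch
import pyiqa

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# For all three metrics, lower scores indicate better predicted quality.
metrics = {name: pyiqa.create_metric(name, device=device)
           for name in ("brisque", "niqe", "ilniqe")}

# Stand-in for a real AIGI: an (N, C, H, W) tensor with values in [0, 1].
img = torch.rand(1, 3, 512, 512, device=device)

for name, metric in metrics.items():
    print(f"{name}: {metric(img).item():.2f}")
```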

Despite their efficacy in evaluating traditional images, BRISQUE, NIQE, and ILNIQE face notable limitations when applied to AIGIs. Their primary limitation is a reliance on feature extraction schemes designed for natural images: AIGIs, generated through machine learning algorithms, often display unique artifacts and characteristics that deviate from conventional statistical distributions.

Consequently, the performance of these metrics can be compromised when assessing AIGIs. For instance, BRISQUE’s reliance on local contrast features may not adequately account for the smooth textures and uniform regions typical in certain AI-generated content. Artifacts commonly found in AIGIs may lead to misinterpretations regarding quality, as these techniques are trained on datasets comprising traditional photographs rather than synthetic images.

Similarly, while NIQE’s strength lies in its ability to assess the naturalness of an image, it often misjudges the quality of images that are deliberately stylized or contain unusual compositions—common traits among AIGIs. The distribution of features utilized by NIQE may not accurately represent the intended aesthetics of AI-generated content, resulting in potentially misleading quality scores.

Finally, ILNIQE’s local assessment approach, which provides detailed analyses of small image sections, can struggle with the global coherence required to evaluate the overall quality of an AIGI. The metric may become overly sensitive to minor deviations from the “norm,” penalizing stylistic choices made during the image generation process.

AI-Generated Images and Quality Assessment

AI-generated images have emerged from advanced algorithms capable of creating visually appealing content. These images find applications across domains, including art, marketing, gaming, and more. Given their rising prevalence, the need for effective quality assessment mechanisms becomes paramount. Accurate evaluation is necessary not simply to measure aesthetic appeal but also to ensure that these images meet the standards required for various professional outputs.

The importance of quality assessment in the context of AIGIs cannot be overstated. For businesses looking to utilize these images, understanding the nuances of AI generation—and the subsequent quality of the output—is crucial for maintaining brand integrity and achieving desired communication goals. Current research in this field has focused largely on establishing metrics for evaluating AIGI quality, though many metrics have borrowed principles from traditional image assessment frameworks like the aforementioned NR-IQMs.

Existing studies have noted the importance of prompt alignment—assessing how closely the AI-generated image corresponds to the given input prompt. However, many of the current metrics fail to incorporate this aspect effectively. As a result, most existing quality metrics tend to evaluate images on aesthetic grounds alone, without consideration for the fidelity to the initiating prompts.

The Traditional Approach: Reference-Based Metrics

To appreciate the necessity of NR-IQM, one must first survey traditional, reference-based IQA methods. Metrics like Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Squared Error (MSE) serve as standards for many image processing tasks. These methods utilize a ground truth reference against which generated images are evaluated. While effective in controlled environments, these metrics often fall short in real-world applications involving AI-generated content, where references may not exist, or the notion of “quality” is subjective.

For instance, the use of PSNR is critiqued for its inability to account for perceptual differences; an image with a high PSNR can still appear distorted to human observers. SSIM, while improving over PSNR by incorporating luminance, contrast, and structural fidelity, does not encompass the artistic intent or uniqueness inherent in many AI-generated images. Such limitations underscore the pressing need for innovative methodologies that can facilitate quality assessments without reference images.
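For concreteness, the following sketch computes the three reference-based scores just discussed using scikit-image; the file names are hypothetical placeholders.

```python
# Reference-based metrics (MSE, PSNR, SSIM) via scikit-image.
# `reference.png` and `generated.png` are hypothetical file names.
from skimage import io
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

ref = io.imread("reference.png")  # pristine ground truth (rarely available for AIGIs)
gen = io.imread("generated.png")  # image under test; must match ref's shape

print("MSE :", mean_squared_error(ref, gen))                       # lower is better
print("PSNR:", peak_signal_noise_ratio(ref, gen))                  # higher is better
print("SSIM:", structural_similarity(ref, gen, channel_axis=-1))   # 1.0 = identical
```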

Identifying Challenges in Non-Reference Image Quality Assessment

Non-reference image quality metrics introduce a new set of challenges. These include:

1. Subjectivity of Perception: Quality perception varies considerably among individuals depending upon cultural background, personal taste, and experience with art. Developing metrics that universally encapsulate this variance remains complex.

2. Diversity of Generated Outputs: The unpredictable nature of AI-generated images results in a variety of styles, textures, and creative expressions, making it difficult to define a standard quality measure applicable across different types.

3. Computational Efficiency: As image sizes and resolutions increase, the computational resources required for real-time assessment become a significant consideration. Effective NR-IQMs should provide quick evaluations without compromising accuracy.

Exploring Novel Approaches to NR-IQM

Recognizing these challenges, researchers have begun exploring innovative non-reference metrics designed to evaluate AI-generated images. Some notable approaches include:

1. Feature-Based Metrics: By extracting visual features from AI-generated images, researchers can develop metrics that reflect quality attributes such as colorfulness, contrast, and sharpness. For example, using deep learning-based feature extractors, such as convolutional neural networks (CNNs), allows for a more nuanced understanding of image quality. Deep networks trained on large datasets can learn to identify and evaluate stylistic elements that contribute to perceived quality.

2. Perceptual Models: Integrating perceptual attributes into NR-IQM aligns closely with human visual perception. Metrics like the Visual Information Fidelity (VIF) and Natural Image Quality Evaluator (NIQE) consider factors such as local contrast and spatial frequency distributions. By modifying these models to embrace the unique characteristics of AI-generated images, researchers can capture essential quality indicators that resonate with human observers.

3. Machine Learning Approaches: Recent advances in machine learning offer exciting avenues for NR-IQM. Models can be trained using datasets comprising pairs of AI-generated and human-generated images, incorporating user ratings as a proxy for perceived quality. Techniques such as Support Vector Machines (SVMs) and Neural Networks can be employed to predict quality scores directly from features extracted from images.
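The sketch below combines recipes 1 and 3: a pretrained CNN supplies per-image features and a support-vector regressor maps them to quality scores. The backbone choice, file paths, and opinion scores are illustrative assumptions, not the study's actual pipeline.

```python
# Hedged sketch: CNN features + SVR quality regression.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.svm import SVR

# Pretrained CNN as a fixed feature extractor (classifier head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(path: str) -> np.ndarray:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0).numpy()

# Hypothetical training data: AIGI file paths with mean opinion scores.
paths = ["aigi_001.png", "aigi_002.png", "aigi_003.png"]
mos = [3.8, 2.1, 4.5]

X = np.stack([extract_features(p) for p in paths])
model = SVR(kernel="rbf").fit(X, mos)

print("Predicted quality:", model.predict(extract_features("aigi_new.png")[None, :]))
```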

Gaps in Current Research

Significant gaps exist in current research addressing the limitations of NR-IQMs for AIGIs. First and foremost, there is a clear lack of tailored NR-IQMs specifically designed for evaluating AIGIs. Most existing metrics originated from the paradigms of traditional photography and image processing, creating a mismatch in their applicability to AI-generated content. There is a pressing need for the development of new metrics that account for the unique attributes of AIGIs while incorporating both technical quality and perceptual aspects.

Moreover, the inadequate handling of prompt alignment in existing metrics represents a significant oversight. Effective evaluation of AIGIs requires an understanding of how well an image reflects the input prompt. Metrics need to evolve to measure not only traditional quality aspects but also the degree to which the AI’s output meets the intended design or narrative objectives embedded in the prompt. Innovations in this area could pave the way for more meaningful assessments that align closely with user expectations and the unique capabilities of AI technologies.

Comparative Analysis of Existing Non-Reference Metrics

The effectiveness of non-reference metrics in evaluating image quality has become a focal point in recent research, particularly as AI-generated images continue to proliferate. A comparative analysis of existing metrics reveals significant disparities in their evaluation criteria and underlying assumptions. Some metrics, for instance, focus largely on pixel-level analysis, failing to account for perceptual nuances that affect human visual perception.

In contrast, more advanced techniques incorporate human intuition and context, recognizing that albedo and shading variations in images, as discussed in [9], can greatly influence subjective quality assessments. Moreover, the interplay between data-driven methods and traditional techniques complicates these comparisons; while some metrics benefit from extensive datasets and machine learning algorithms, others rely on rigid models that can lead to overfitting, as illustrated in the NLG advancements referenced in [3]. Thus, establishing a comprehensive understanding of these metrics' strengths and weaknesses is essential for refining evaluation approaches suited to AI-generated imagery.

Evaluation of Current Non-Reference Metrics and Their Effectiveness in Assessing AI-Generated Images

The quest for effective non-reference metrics in evaluating AI-generated images is increasingly vital as the complexity of generative models advances. Current metrics often struggle to capture subjective qualities like aesthetics and creativity, focusing primarily on statistical consistency and technical attributes. This limitation stems from the inherent challenge of establishing a universal standard for image quality, particularly in non-reference scenarios where no ground truth exists.

As highlighted in recent research [3], the evaluation landscape is evolving, with a growing emphasis on integrative approaches that combine traditional evaluation methods and new algorithms tailored for AI outputs. Furthermore, researchers are advocating for methodologies that transcend quantitative measures, thereby encompassing qualitative assessments to better reflect human perceptions of image quality, as noted in [6]. Such comprehensive frameworks are essential for navigating the intricacies of assessing AI creativity and effectiveness in image generation, pushing the field toward more nuanced and reliable evaluation standards.

Methodology

Data Collection

The foundation of any rigorous research methodology lies in the selection and management of datasets. This study draws upon several pertinent datasets, notably AGIQA-1K and AGIQA-3K, along with other relevant data sources tailored to enhance the depth and breadth of analysis. AGIQA-1K comprises roughly 1,000 annotated images designed specifically for the assessment of AI-generated image (AIGI) quality. This dataset provides a robust basis for understanding quality in varied contexts, such as visual perception tasks and alignment evaluations, which are critical in assessing the performance of generative systems. AGIQA-3K expands this context, offering roughly 3,000 examples that give a more comprehensive overview of AIGI quality across different generative models and domains. These datasets are pivotal as they encapsulate a myriad of scenarios, enabling nuanced analyses of generative capabilities.

In the realm of data preparation, preprocessing techniques play an essential role in ensuring the integrity and usability of data. Initial steps include data cleaning, which involves removing erroneous entries, duplicate records, and irrelevant information. Standardization techniques follow, wherein data are converted into a consistent format conducive to analysis. For instance, image normalization may be employed to ensure uniformity in pixel values, facilitating better model training outcomes. Additionally, augmentation techniques such as rotation, scaling, and flipping can be applied to enhance dataset diversity and combat overfitting when training machine learning models. This structured approach to data preprocessing not only improves the reliability of input data but also contributes significantly to the overall robustness of the analytical results derived from subsequent modeling efforts.
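A minimal sketch of such a preprocessing and augmentation pipeline, assuming torchvision as the tooling (the specific parameter values are illustrative):

```python
# Preprocessing/augmentation pipeline: normalization plus the rotation,
# scaling, and flipping augmentations described above, via torchvision.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize(256),                               # standardize input scale
    transforms.RandomRotation(degrees=10),                # rotation augmentation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # scaling augmentation
    transforms.RandomHorizontalFlip(p=0.5),               # flipping augmentation
    transforms.ToTensor(),                                # pixel values -> [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],      # image normalization
                         std=[0.229, 0.224, 0.225]),
])
```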

Proposed Metrics

The evaluation of AI-generated images requires the formulation of comprehensive metrics to gauge both perceptual quality and alignment quality. The design of task-specific prompts represents a critical aspect of this evaluation process. By crafting well-defined prompts tailored to specific tasks, researchers can systematically assess a generative model's ability to interpret and render complex descriptions accurately. These prompts are designed to elucidate two aspects: how well the model comprehends a given task and the fidelity of its output in reflecting that understanding.
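As an illustration only, such a prompt set might pair each prompt (here echoing the prompts later used in Table I) with the alignment aspects it is meant to probe; the per-prompt checks are assumed examples, not the study's actual protocol.

```python
# Illustrative task-specific prompt set; prompts echo Table I, while the
# alignment checks are assumed examples for demonstration purposes.
PROMPT_SET = [
    {"prompt": "A parrot eating an apple.",
     "checks": ["object: parrot", "object: apple", "relation: eating"]},
    {"prompt": "A parachute that looks like a broccoli.",
     "checks": ["object: parachute", "attribute: broccoli-like appearance"]},
    {"prompt": "A group of elephants grazing in the forest",
     "checks": ["objects: elephants (plural)", "action: grazing", "scene: forest"]},
]
```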

Analytical Methods

To underpin the analytical phase of this study, a variety of machine learning models are utilized to forecast quality outcomes. Among these, Convolutional Neural Networks (CNNs) and Random Forests represent key methodologies. CNNs are particularly well-suited for image recognition tasks due to their ability to capture spatial hierarchies and patterns within visual data. This capability is instrumental in evaluating the perceptual attributes of generated images, contributing to a more profound understanding of how quality manifests in the data at hand.

Moreover, Random Forests provide a versatile framework for classification and regression tasks, relying on ensemble learning techniques. This method aggregates numerous decision trees to improve accuracy and mitigate overfitting risks. In the context of predicting AIGI quality, Random Forests facilitate the identification of intricate relationships within the data, aiding in the discernment of features that significantly correlate with high quality ratings.
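As a sketch of this idea, a Random Forest regressor can be fit on per-image feature vectors; the synthetic features and ratings below are placeholders standing in for real extracted features and human scores.

```python
# Random Forest quality regressor on per-image feature vectors.
# Features and ratings are synthetic placeholders for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 16))   # stand-in feature vectors (handcrafted or CNN-derived)
y = rng.random(200) * 5.0   # stand-in human quality ratings on a 0-5 scale

forest = RandomForestRegressor(n_estimators=300, random_state=0)
print("5-fold CV R^2:", cross_val_score(forest, X, y, cv=5).mean())

# Feature importances hint at which attributes correlate with quality.
forest.fit(X, y)
print("Most informative features:", np.argsort(forest.feature_importances_)[-3:])
```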

Steps to Develop NR-IQMs for AIGIs

1. Define Quality Aspects: measure the overall visual appeal and naturalness of the image (Perception Quality), and ensure the image aligns with the task-specific prompt or input description (Alignment Quality).

2. Data Collection and Annotation: Collect a diverse set of AIGIs from various sources and tasks. Use expert annotators to label images with quality scores based on perception and alignment.

3. Extract features that capture visual attributes such as texture, color, sharpness, and noise (Perception Features). Use techniques like image captioning and object detection to measure how well the image content aligns with the prompt (Alignment Features).

4. Develop a Machine Learning Model to predict quality scores based on extracted features using supervised learning algorithms (e.g., Random Forest, SVM) and implement convolutional neural networks (CNNs) for more complex feature extraction and quality prediction.

5. Multi-Granularity Similarity (see the sketch after this list):

Coarse-Grained Similarity: Measure overall similarity between the entire image and the prompt.

Fine-Grained Similarity: Focus on specific regions or objects within the image to assess alignment more accurately.

6. Evaluate on Benchmark Datasets: validate the proposed NR-IQMs on datasets such as AGIQA-1K and AGIQA-3K. Employ metrics such as Mean Squared Error (MSE), the Structural Similarity Index (SSIM), and human judgment to validate model performance.

7. Maintain a Feedback Loop: continuously improve the metrics by incorporating feedback from end-users and updating the models with new data. Apply cross-validation techniques to ensure the robustness and generalizability of the metrics.
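As flagged in step 5, the following sketch operationalizes coarse- and fine-grained prompt similarity with CLIP via the Hugging Face transformers library. The model choice, the fixed 2×2 crop grid, and the file name are assumptions; a real system would use detected regions (step 3) rather than fixed crops.

```python
# Hedged sketch of multi-granularity prompt-image similarity with CLIP.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T).item())

image = Image.open("aigi.png").convert("RGB")   # hypothetical AI-generated image
prompt = "a parrot eating an apple"

coarse = clip_score(image, prompt)              # whole image vs. prompt

w, h = image.size                               # fine-grained: 2x2 grid of crops
crops = [image.crop((x, y, x + w // 2, y + h // 2))
         for x in (0, w // 2) for y in (0, h // 2)]
fine = max(clip_score(c, prompt) for c in crops)

print(f"coarse={coarse:.3f}  fine={fine:.3f}")
```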

Results and Discussion

The use of task-specific prompts introduces multi-granularity similarity metrics as an innovative approach to measuring the relationship between AI-generated images (AIGIs) and the prompts that produced them. This methodology encompasses both coarse-grained and fine-grained similarity assessments. Coarse-grained similarity focuses on overarching themes or general content, providing insight into whether the generated image captures the fundamental concepts conveyed by the prompt. In contrast, fine-grained similarity delves deeper into the nuances of the output, evaluating specific regions and objects against the expectations set by the prompt. By employing both similarity types, researchers can derive a layered understanding of a generator's performance, facilitating targeted improvements in alignment and interaction quality, as described in Table I.

| Prompt | Size | Resolution | Contrast | Noise Level | Brightness | Sharpness | Balance | Color Harmony | Technical Quality | Aesthetic Quality | AI Generated (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Plant with flower | 75701 | 896 × 1152 | 48.66 | 0.33 | Dark | 19.84 | Center-Heavy | None | 57.38 | 50.76 | 99.62 |
| Group of people enjoying at the beach | 62145 | 896 × 1152 | 47.22 | 0.14 | Bright | 95.79 | Center-Heavy | None | 60.1 | 51.66 | 99.79 |
| A parachute that looks like a broccoli | 128416 | 1152 × 896 | 61.75 | 0.48 | Dark | 387.98 | Top-Heavy | Analogous | 49.07 | 64.6 | 93.2 |
| A group of elephants grazing in the forest | 97789 | 896 × 1152 | 46.44 | 0.43 | Dark | 77.38 | Top-Heavy | Analogous | 53.37 | 60.11 | 99.99 |
| A parrot eating an apple | 84394 | 896 × 1152 | 41.44 | 0.39 | Dark | 30.03 | Left-Heavy, Top-Heavy | None | 58.25 | 50.8 | 99.9 |
| A rack of clothes | 5945382 | 1920 × 1280 | 60.71 | 0.86 | Dark | 184.48 | Center-Heavy | None | 45.44 | 49.14 | 0.74 |

Table I. NRIQA analysis for prompt-generated AI images (the original "Detected image" column, showing each generated image, is omitted here).

The collective application of these analytical methods and metrics forms a robust methodological framework, ultimately supporting the research objectives of understanding AIGI quality and performance. By employing machine learning techniques complemented by well-defined evaluation criteria, this methodology strives to yield actionable insights into improving the perceptual quality of generation systems and aligning their outputs more closely with human judgment, as shown in Fig. 1.

Fig. 1. Aesthetic and technical quality of an AI-generated image.

While these advancements highlight the effectiveness of non-reference metrics, challenges remain in standardizing evaluation criteria across diverse image generation contexts, particularly in ensuring consistent quality across various AI models.

The descriptive statistics for the relevant metrics indicate that images with high sharpness and contrast, along with low noise levels, are likely to be AI-generated. AI-generated images tend to have higher sharpness values, with a mean of approximately 110.50 and a maximum of 387.98. The mean contrast value is around 89.60, indicating that AI-generated images may have more pronounced contrast. The average noise level is very low (approximately 0.35), suggesting that AI-generated images are cleaner, with less noise. Two of the recorded metrics show a mean of 0, indicating that they may not be significant indicators of AI generation in this dataset, as shown in Fig. 2.

Fig. 2. Sharpness values of AI-generated images.
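Because the exact formulas behind the Table I statistics are not specified in the text, the sketch below shows one plausible set of definitions (sharpness as variance of the Laplacian, contrast as RMS contrast, noise as the residual after Gaussian smoothing); treat it as an assumption-laden reconstruction rather than the study's actual implementation.

```python
# Plausible (assumed) implementations of the low-level Table I statistics.
import cv2
import numpy as np

img = cv2.imread("aigi.png")                       # hypothetical image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)

sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher = sharper edges
contrast = gray.std()                              # RMS contrast
noise = np.abs(gray - cv2.GaussianBlur(gray, (5, 5), 0)).mean()
brightness = "Bright" if gray.mean() > 127 else "Dark"

print(f"Sharpness={sharpness:.2f} Contrast={contrast:.2f} "
      f"Noise={noise:.2f} Brightness={brightness}")
```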

Conclusion

In summarizing the findings of this study, it is evident that the evaluation of image quality metrics, particularly for AI-generated images, necessitates a multi-faceted approach. The research highlights the efficacy of non-reference metrics, which align with the growing need for robust methodologies in assessing visual content without relying on ground truth images. This reflects a shift towards data-driven techniques in various fields, including Natural Language Generation, where the importance of evaluation standards is underscored [3]. Additionally, the analysis reveals that proper evaluation can significantly enhance the performance of AI image generation systems. Similar to object detection studies that identify the trade-offs of various proposal methods in improving overall accuracy [4], our findings suggest that a nuanced understanding of different non-reference metrics will yield better insights into image quality. Therefore, embracing these methodologies will advance both academic research and practical applications in the rapidly evolving landscape of artificial intelligence.

Summary of Findings and Implications for Future Research in Image Quality Assessment

The analysis of various non-reference image quality assessment metrics has uncovered significant insights regarding their effectiveness in evaluating AI-generated images. Through comparative studies, it has become evident that metrics such as NIQE and BRISQUE offer robust performance in assessing perceptual quality, capturing subtle distortions that may be overlooked by traditional methods. These findings underscore the necessity for continued exploration into automatic assessment techniques, particularly as AI image generation technology progresses. Future research should prioritize the refinement of these metrics, potentially integrating deeper learning approaches to enhance their adaptability and accuracy. Furthermore, the investigation could benefit from a more diverse dataset that includes a wider array of styles, subjects, and resolutions, which could help reveal performance gaps and lead to the development of standardized benchmarks. By addressing these areas, subsequent studies can substantially elevate the reliability of image quality assessments in dynamic AI landscapes, fostering advancements in both the field of machine learning and visual media evaluation.

References

  1. Karam L, Shrotre A. Full reference objective quality assessment for reconstructed background images. 2018. Available from: http://arxiv.org/abs/1803.04103.
  2. Duan J. Improving radiotherapy workflow: evaluation and implementation of deep learning auto-segmentation in a multi-user environment, and development of automatic contour quality assurance system. UKnowledge; 2023. Available from: https://core.ac.uk/download/572729610.pdf.
  3. Gatt A, Krahmer E. Survey of the state of the art in natural language generation: core tasks, applications and evaluation. 2017. Available from: https://core.ac.uk/download/93183864.pdf.
  4. Hosang J, Benenson R, Dollár P, Schiele B. What makes for effective detection proposals? IEEE Trans Pattern Anal Mach Intell. 2015;38(4):814–30. Available from: http://arxiv.org/abs/1502.05082.
  5. Horvitz E, Kamar E, Nushi B. Towards accountable AI: hybrid human-machine analyses for characterizing system failure. 2018. Available from: http://arxiv.org/abs/1809.07424.
  6. Fotio Tiotsop L. Optimizing perceptual quality prediction models for multimedia processing systems. Italy; 2022. Available from: https://core.ac.uk/download/539314275.pdf.
  7. Bardhan R, Ramsankaran RAAJ, Sathyakumar V. Geospatial approach for assessing spatiotemporal dynamics of urban green space distribution among neighbourhoods: a demonstration in Mumbai. Urban For Urban Green. 2020;49:126630. Available from: https://core.ac.uk/download/286187914.pdf.
  8. Mirkhan A. Enhancing point cloud quality assessment with grouped convolutions: a streamlined approach inspired by COPP-Net. 2024. Available from: https://core.ac.uk/download/604158163.pdf.
  9. Bonneel N, Garces E, Lalonde J-F, Meka A. Unsupervised deep single-image intrinsic decomposition using illumination-varying image sequences. 2018. Available from: http://arxiv.org/abs/1803.00805.
  10. Miyata T. Interpretable image quality assessment via CLIP with multiple antonym-prompt pairs. arXiv:2308.13094; 2023.
  11. Gao H, Zhang K, Sun W, Zhai G, et al. PrefIQA: human preference learning for AI-generated image quality assessment. Presented at IEEE ISCAS 2024; preprint available via OpenReview.
  12. Ge Y, Liu X, Dai Q, Wang K. Automatic no-reference image quality rating metrics in DL-reconstructed image quality assessment and protocol optimization (VSS score). ISMRM 2022, Abstract #3165.