Towards End-to-End Semi-Supervised Learning for One-Stage Object Detection in Precision Weed Management

Abstract

Precision Weed Management (PWM), enhanced by advancements in machine vision and deep learning (DL), significantly improves agricultural product quality, optimizes crop yields, and offers a sustainable alternative to traditional herbicide use. However, current DL-based weed detection algorithms predominantly rely on supervised learning methods. These methods necessitate extensive datasets with manual annotations, which are often time-consuming and labor-intensive. Label-efficient learning methods, particularly semi-supervised learning, have emerged as a promising solution in computer vision. They leverage a small amount of labeled data alongside a large volume of unlabeled data to develop high-performance models comparable to supervised models trained on fully labeled datasets. This study evaluates the effectiveness of a semi-supervised learning framework for multi-class weed detection, utilizing two prominent object detection frameworks: FCOS (Fully Convolutional One-Stage Object Detection) and Faster R-CNN (Faster Region-based Convolutional Neural Network). We specifically assess a generalized student-teacher framework with an enhanced pseudo-label generation module to create reliable pseudo-labels for unlabeled data. To improve generalization, an ensemble student network is used during training. Experimental results demonstrate that our proposed approach achieves approximately 76% and 96% of the detection accuracy of supervised methods using only 10% of labeled data on the CottonWeedDet3 and CottonWeedDet12 datasets, respectively. We provide open access to our source code (https://github.com/JiajiaLi04/SemiWeeds), contributing a valuable resource for ongoing research in semi-supervised learning for weed detection and related fields.

Keywords: precision weed management, precision agriculture, label-efficient learning, computer vision, deep learning, one-stage object detection, semi-supervised learning

1. Introduction

Weeds represent a major threat to global crop production, causing estimated losses of up to 43% worldwide (Oerke, 2006). In cotton farming, ineffective weed management can lead to yield reductions as high as 90% (Manalil et al., 2017). Traditional weed control methods, including machinery, manual weeding, and herbicide application, are often labor-intensive and costly. Manual and mechanical weeding are particularly demanding in labor, a challenge exacerbated by global labor shortages due to public health crises like the COVID-19 pandemic and geopolitical conflicts such as the Russia-Ukraine War (Laborde et al., 2020; Ben Hassen and El Bilali, 2022). Furthermore, herbicide use poses significant environmental risks, potential health hazards, and contributes to the development of herbicide-resistant weed species (Norsworthy et al., 2012; Chen et al., 2022b).

Precision Weed Management (PWM), which integrates sensors, computer systems, and robotics into agricultural practices, has emerged as a sustainable and efficient solution for weed control (Young et al., 2013). PWM allows for targeted treatments based on site-specific conditions and weed species, significantly reducing herbicide and resource usage (Gerhards and Christensen, 2003). Accurate weed identification, localization, and monitoring are crucial for effective PWM, necessitating robust machine vision algorithms for weed recognition (Chen et al., 2022b). Traditional image processing techniques, such as edge detection, color analysis, and texture feature extraction, followed by thresholding or supervised modeling, are commonly used in weed classification and detection (Meyer and Neto, 2008; Wang et al., 2019). For instance, Bawden et al. (2017) developed a weed classification algorithm based on texture features, and Ahmad et al. (2018) used local shape and edge orientation features to distinguish between monocot and dicot weeds. However, these conventional methods often require manual feature engineering, demanding extensive domain knowledge, which can be error-prone and time-consuming. Moreover, they may struggle with complex visual tasks and are sensitive to variations in lighting and occlusion (O’Mahony et al., 2020).

Recent advancements in Deep Learning (DL)-based computer vision have shown great promise for sustainable weed management (Farooq et al., 2019; Yu et al., 2019; Parra et al., 2020; Chen et al., 2022b; Coleman et al., 2023; Rahman et al., 2023; Rai et al., 2023; Sportelli et al., 2023). For example, Sportelli et al. (2023) evaluated four YOLO (You Only Look Once) object detectors for weed detection in turfgrass, and Chen et al. (2022b) benchmarked 35 state-of-the-art Deep Neural Networks (DNNs) for multi-class weed classification in cotton production, achieving high classification accuracy (F1 scores > 95%). Despite their effectiveness, DL-based approaches are data-intensive, relying on large, accurately labeled image datasets (Lu and Young, 2020; Rai et al., 2023). Manual labeling of these datasets is error-prone, tedious, expensive, and time-consuming (Li et al., 2023).

To overcome these challenges, label-efficient learning algorithms (Li et al., 2023) have emerged as viable solutions, reducing labeling costs by utilizing unlabeled samples. dos Santos Ferreira et al. (2019) assessed unsupervised learning algorithms such as JULE (Yang et al., 2016) and DeepCluster (Caron et al., 2018) for weed recognition. Semi-supervised learning for weed classification has also been explored (Liu et al., 2023, 2024; Benchallal et al., 2024). Nong et al. (2022) introduced SemiWeedNet, a semi-supervised method for weed and crop segmentation in complex environments. Hu et al. (2021) combined cut-and-paste image synthesis with semi-supervised learning for weed detection, achieving 46.0 mAP on a four-category dataset with Faster R-CNN (Ren et al., 2015). While promising, that approach was tested only on a two-stage detector and a limited dataset. Our research therefore further investigates the potential of semi-supervised learning in weed detection, comparing multiple object detectors across multi-class weed datasets. The key contributions of this study are:

  • Rigorous evaluation of a semi-supervised learning framework using two open-source cotton weed datasets (CottonWeedDet3 and CottonWeedDet12), which contain 3 and 12 weed classes, respectively, common in U.S. cotton production.
  • Comparative analysis of one-stage and two-stage object detectors within the semi-supervised learning framework to assess their performance in weed detection.
  • Public release of all training and evaluation code¹ to promote reproducibility and further research in semi-supervised learning for weed detection.

The rest of this paper is structured as follows: Section 2 details the datasets and technical methods. Section 3 presents and analyzes experimental results. Section 4 discusses findings and limitations. Section 5 concludes and suggests future research directions.

2. Materials and Methods

This section describes the datasets used, provides an overview of the Faster R-CNN (two-stage) and FCOS (one-stage) object detectors, details our semi-supervised framework, and outlines evaluation metrics and experimental setups.

2.1. Weed Datasets

We evaluated our semi-supervised framework on two publicly available weed datasets for U.S. cotton production: CottonWeedDet3 (Rahman et al., 2023) and CottonWeedDet12 (Dang et al., 2023).

CottonWeedDet3² (Rahman et al., 2023) consists of 848 high-resolution images (4442 × 4335 pixels) with 1532 bounding box annotations. It features three weed classes common in southern U.S. cotton fields, specifically in North Carolina and Mississippi: carpetweed (Mollugo verticillata), morning glory (Ipomoea spp.), and Palmer amaranth (Amaranthus palmeri). Annotations are available in both YOLO and COCO formats. Approximately 99% of images have fewer than 10 bounding boxes, with a few (9 out of 848) containing up to 93. Carpetweed is the most frequently annotated weed, while Palmer amaranth is the least. Visual examples are shown in Figure 1.

Figure 1. Weed Samples from CottonWeedDet3 Dataset

Weed samples in the CottonWeedDet3 dataset (Rahman et al., 2023). Each column shows image samples for a specific weed class.

The CottonWeedDet12 dataset³ (Dang et al., 2023) is larger, containing 5648 images of 12 weed classes with 9370 bounding boxes (in YOLO and COCO formats). Images exceed 10 megapixels in resolution and were captured under natural lighting across various weed growth stages in cotton fields. Each weed class has over 140 bounding boxes. Waterhemp and morning glory have the most bounding boxes, while goosegrass and cutleaf groundcherry have the fewest. CottonWeedDet12 is more than ten times larger than CottonWeedDet3 and is currently the most extensive public dataset for weed detection in cotton production. Figure 2 displays sample annotated images, each primarily featuring a single weed class, although images with multiple classes exist in the dataset.

Figure 2. Weed Samples from CottonWeedDet12 Dataset

Weed samples in the CottonWeedDet12 dataset (Dang et al., 2023).

2.2. DL-Based Object Detectors

DL-based object detectors typically consist of a backbone for feature extraction and a detection head for object classification and localization (Bochkovskiy et al., 2020). Backbones are usually pre-trained on ImageNet (Deng et al., 2009). Detectors are categorized as anchor-based (Ren et al., 2015; Cai et al., 2016; Lin et al., 2017) or anchor-free (Law and Deng, 2018; Tian et al., 2022; Zhou et al., 2019). Anchor-based detectors use predefined anchor boxes that are matched to ground-truth boxes based on Intersection over Union (IoU) scores. Anchor-free detectors eliminate the need for predefined anchor boxes in the detection head. A reference IoU computation is sketched below.
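Since anchor matching hinges on IoU, the following minimal reference implementation may help; the (x1, y1, x2, y2) box format is an assumption for illustration.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0
```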

2.2.1. Anchor-Based Object Detectors: Faster R-CNN

Anchor-based object detectors, such as Faster R-CNN (Ren et al., 2015), are a prevalent approach in object detection. Faster R-CNN, an evolution of Fast R-CNN (Girshick, 2015), uses Convolutional Neural Networks (CNNs) to generate region proposals via a Region Proposal Network (RPN), replacing the selective search used by Fast R-CNN. Features from the shared convolutional layers serve both region proposal and region classification. We used Faster R-CNN as the representative two-stage detector in our semi-supervised framework.

2.2.2. Anchor-Free Object Detectors: FCOS

Anchor-free detectors address limitations of anchor-based methods, such as the need to tune anchor hyperparameters for new datasets (Jiao et al., 2019) and the computational cost on mobile and edge devices (Zhang et al., 2020). They directly predict class probabilities and bounding box offsets from images with a single feed-forward CNN, eliminating region proposals and simplifying computation (Liu et al., 2020). YOLO (Redmon et al., 2016) is a well-known one-stage detector that formulates object detection as a regression problem, achieving real-time performance. FCOS (Tian et al., 2022) is another anchor-free, proposal-free one-stage detector that avoids the complexities of anchor boxes, making it computationally efficient. We chose FCOS as our primary one-stage detector for evaluating end-to-end semi-supervised learning, owing to its accessibility and widespread use (Zhang et al., 2020; Li et al., 2021).

2.3. Semi-Supervised Learning Framework

Semi-supervised learning is a label-efficient approach that utilizes unlabeled data to improve model training (Van Engelen and Hoos, 2020; Li et al., 2023). Common semi-supervised methods include consistency regularization and self-training (Tarvainen and Valpola, 2017; Berthelot et al., 2019; Xie et al., 2020; Sohn et al., 2020a; Xu et al., 2021).

The teacher-student framework is a popular self-training method for semi-supervised object detection (Sohn et al., 2020a; Xu et al., 2021; Liu et al., 2021b; Li et al., 2022; Chen et al., 2022a), as illustrated in Figure 3. A teacher model is initially trained on labeled data using supervised learning. This teacher model is then duplicated to create a student model and used to generate pseudo-labels for unlabeled data. A combination of confidently pseudo-labeled samples and original labeled samples is used to train the student model. The teacher model is updated using the student model parameters via Exponential Moving Average (EMA) (Tarvainen and Valpola, 2017) as per Equation 1:

Figure 3. Semi-Supervised Weed Detection Framework Pipeline

Pipeline of the proposed semi-supervised weed detection framework.

$$\theta_{\text{teacher}} = \alpha \cdot \theta_{\text{teacher}} + (1 - \alpha) \cdot \theta_{\text{student}}, \tag{1}$$

Here, $\theta_{\text{teacher}}$ and $\theta_{\text{student}}$ are the parameters of the teacher and student models, and $\alpha$ is the update factor; cross-validation gave an optimal $\alpha$ of 0.99. EMA reduces the variance of the teacher model (Tarvainen and Valpola, 2017). Following (Xie et al., 2020; Xu et al., 2021), we applied weak augmentations (horizontal flip, multi-scale training over [400, 1200], scale jittering) to the images the teacher uses for pseudo-label generation, and strong augmentations (random grayscale, Gaussian blur, cutout patches (DeVries and Taylor, 2017)) to the images used to train the student, which improves training performance.
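As a concrete illustration, here is a minimal PyTorch-style sketch of the EMA step in Equation 1; the function name and the assumption that teacher and student share identical architectures are ours, not part of the released codebase.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Update teacher parameters as an exponential moving average of the
    student's (Equation 1); alpha=0.99 is the cross-validated value above."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        # theta_teacher <- alpha * theta_teacher + (1 - alpha) * theta_student
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)
```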

This iterative process repeats until satisfactory model performance is achieved. After training, only the teacher model is used for inference; the student model is discarded. Self-training is versatile and can be integrated with both one-stage and two-stage object detectors; we used this self-training based semi-supervised framework to evaluate Faster R-CNN (Ren et al., 2015) and FCOS (Tian et al., 2022).
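Putting the pieces together, one iteration of the self-training loop might look like the sketch below. This is illustrative pseudocode under our own naming: weak_augment, strong_augment, supervised_loss, unsupervised_loss, the pseudo-label dictionary format, and the confidence threshold tau are placeholders rather than functions from Detectron2 or the released code; ema_update is the sketch above.

```python
import torch

def self_training_step(teacher, student, optimizer,
                       labeled_batch, unlabeled_batch,
                       weak_augment, strong_augment,
                       supervised_loss, unsupervised_loss,
                       tau=0.7, alpha=0.99):
    """One iteration of teacher-student self-training (Figure 3)."""
    # 1. Teacher pseudo-labels weakly augmented unlabeled images.
    with torch.no_grad():
        pseudo_labels = teacher(weak_augment(unlabeled_batch))
    pseudo_labels = [p for p in pseudo_labels if p["score"] >= tau]  # keep confident boxes

    # 2. Student learns from labeled data plus confident pseudo-labels
    #    on strongly augmented views of the unlabeled images.
    loss = (supervised_loss(student, labeled_batch)
            + unsupervised_loss(student, strong_augment(unlabeled_batch), pseudo_labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # 3. Teacher slowly tracks the student via EMA (Equation 1).
    ema_update(teacher, student, alpha=alpha)
```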

2.3.1. Pseudo-Labeling for Object Detectors

Generating confident and accurate pseudo-labels is critical in semi-supervised learning. Previous research (Sohn et al., 2020b; Zhou et al., 2021a; Liu et al., 2021b) has utilized pseudo-labeling for semi-supervised object detection, mainly focusing on anchor-based detectors. Our approach generalizes pseudo-labeling to both anchor-free and anchor-based detectors, inspired by (Liu et al., 2021b, 2022).

We illustrate semi-supervised object detection using FCOS (Tian et al., 2022) as an example. FCOS has three prediction branches: classifier, centerness, and regressor, with the centerness score dominating bounding box scoring. However, the reliability of the centerness score for distinguishing foreground, especially with limited labels, is questionable, because no supervision suppresses centerness scores on the background (Li et al., 2020; Liu et al., 2022). While centerness improves supervised FCOS, it is ineffective or even counterproductive in semi-supervised scenarios (Li et al., 2020; Liu et al., 2022). To address this, we prioritize pseudo-boxes based on classification scores alone (Liu et al., 2022). The classifier is trained with hard labels (one-hot vectors) and box-localization weighting. Instead of center sampling, we use standard label assignment, treating all locations inside a bounding box as foreground and all locations outside as background.
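A minimal sketch of this score-based filtering follows; the threshold value of 0.7 is an illustrative assumption, not a value from the paper.

```python
import torch

def select_pseudo_boxes(boxes, cls_scores, threshold=0.7):
    """Rank and filter pseudo-boxes by classification score alone,
    ignoring the centerness branch, which is unreliable with few labels."""
    keep = cls_scores >= threshold          # boolean mask over candidate boxes
    return boxes[keep], cls_scores[keep]
```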

2.3.2. Unsupervised Regression Loss

Confidence thresholding is effective (Tarvainen and Valpola, 2017; Sohn et al., 2020b; Liu et al., 2021b), but relying solely on box confidence is insufficient for eliminating misleading instances in box regression. The teacher may still provide incorrect regression directions (Chen et al., 2017; Saputra et al., 2019). We categorize pseudo-labels into beneficial and misleading instances and use relative prediction information between student and teacher to identify and filter misleading instances during regression branch training. We define unsupervised regression loss by selecting beneficial instances where the teacher shows lower localization uncertainty than the student by a margin σ, as shown in Equation 2:

$$L_{\text{reg}}^{\text{unsup}} = \begin{cases} \sum_i \left\lVert \tilde{d}_t^{\,i} - \tilde{d}_s^{\,i} \right\rVert, & \text{if } \delta_t^{i} + \sigma \le \delta_s^{i} \\ 0, & \text{otherwise,} \end{cases} \tag{2}$$

The parameter $\sigma \ge 0$ is the margin between the teacher's and student's localization uncertainties, where uncertainty is loosely related to the deviation from ground-truth labels. $\delta_t^{i}$ and $\delta_s^{i}$ are the teacher's and student's localization uncertainties for instance $i$, and $\tilde{d}_t^{\,i}$ and $\tilde{d}_s^{\,i}$ are the corresponding regression predictions. For more details, refer to Liu et al. (2022).
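As a hedged illustration of Equation 2, the sketch below applies the beneficial-instance mask in vectorized form; the margin value sigma=0.1 and the tensor shapes (one uncertainty scalar and one regression vector per instance) are our assumptions, not values from the paper.

```python
import torch

def unsup_regression_loss(d_teacher, d_student, delta_teacher, delta_student, sigma=0.1):
    """Unsupervised regression loss (Equation 2): only 'beneficial' instances,
    where the teacher's localization uncertainty undercuts the student's by at
    least the margin sigma, contribute to the loss."""
    beneficial = (delta_teacher + sigma) <= delta_student        # per-instance mask
    per_instance = torch.norm(d_teacher - d_student, dim=-1)     # ||d_t^i - d_s^i||
    return (per_instance * beneficial.float()).sum()
```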

2.4. Performance Evaluation Metrics

We used Average Precision (AP) as the primary evaluation metric, derived from precision (P) and recall (R). AP summarizes the P(R) curve into a single scalar value. We used mean Average Precision (mAP) (Liu et al., 2020) to evaluate performance across all weed categories, calculated as the average of AP scores for each category, using Equations 3 and 4:

$$AP = \int_0^1 P(R)\,dR, \tag{3}$$
$$mAP = \frac{1}{n}\sum_{i=1}^{n} AP_i, \tag{4}$$

where n is the number of weed classes. mAP@[0.5:0.95], the mean AP across IoU thresholds from 0.5 to 0.95, was also used for comprehensive evaluation across varying detection thresholds. Higher area under the PR curve indicates better object detection accuracy.
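For illustration only, the sketch below integrates a precision-recall curve numerically (Equation 3) and averages per-class APs (Equation 4). The mAP@[0.5:0.95] figures reported here are typically computed with the COCO evaluation protocol (e.g., via pycocotools inside Detectron2), so this simplified version is not a drop-in replacement.

```python
import numpy as np

def average_precision(precision, recall):
    """Approximate AP as the area under the P(R) curve (Equation 3).
    Inputs are NumPy arrays of matched precision/recall points."""
    order = np.argsort(recall)                        # integrate over increasing recall
    return np.trapz(precision[order], recall[order])

def mean_average_precision(ap_per_class):
    """mAP as the unweighted mean of per-class AP scores (Equation 4)."""
    return float(np.mean(ap_per_class))
```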

2.5. Experimental Setups

Datasets were randomly split into training (65%), validation (20%), and testing (15%) sets. For CottonWeedDet3, this resulted in 550, 170, and 128 images, respectively. For CottonWeedDet12, the splits were 3670, 1130, and 848 images. The validation set was used for model selection, and the test set for performance evaluation.
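A split of this kind can be reproduced with a few lines; the seed and the use of Python's random module are illustrative choices, not taken from the released code.

```python
import random

def split_dataset(image_ids, seed=0):
    """Random 65/20/15 train/validation/test split, as used in Section 2.5."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.65 * n), int(0.20 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```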

We used transfer learning (Zhuang et al., 2020) for all detector backbones, fine-tuning from ImageNet (Deng et al., 2009) pre-trained weights. Models were implemented in Detectron2 (Wu et al., 2019) and trained for 80k iterations with the Stochastic Gradient Descent (SGD) optimizer (momentum 0.9, learning rate 0.01, and a batch size of 4 labeled plus 4 unlabeled images). Weak augmentations (horizontal flip, multi-scale training over [400, 1200], scale jittering) were applied to the teacher's inputs, and strong augmentations (random grayscale, Gaussian blur, cutout patches (DeVries and Taylor, 2017), color jittering) to the student's. Experiments were conducted on a server running Ubuntu 20.04 with two GeForce RTX 2080 Ti GPUs (11 GB memory each).
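For reference, the solver settings above map approximately onto stock Detectron2 configuration keys as sketched below. The actual semi-supervised codebase likely uses custom keys to handle separate labeled and unlabeled batches, and the multi-scale step size of 32 is our assumption.

```python
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.SOLVER.BASE_LR = 0.01       # SGD learning rate (Section 2.5)
cfg.SOLVER.MOMENTUM = 0.9       # SGD momentum
cfg.SOLVER.MAX_ITER = 80000     # 80k training iterations
cfg.SOLVER.IMS_PER_BATCH = 8    # approximates 4 labeled + 4 unlabeled images per step
cfg.INPUT.MIN_SIZE_TRAIN = tuple(range(400, 1201, 32))  # multi-scale training in [400, 1200]
```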

3. Results

This section presents a comparison of object detector performance within the semi-supervised learning framework and a detailed analysis of class-specific performance.

3.1. Semi-Supervised Object Detector Comparison

Figure 4 shows training curves for FCOS and Faster R-CNN with different proportions of labeled samples on the CottonWeedDet3 and CottonWeedDet12 datasets, comparing supervised and semi-supervised learning. For example, "Faster RCNN-sup-5%" denotes Faster R-CNN trained with supervised learning on 5% of the labeled samples, and "Faster RCNN-semi-5%" denotes semi-supervised training with 5% labeled and 95% unlabeled samples.

Figure 4. Training Curves for FCOS and Faster RCNN on Cotton Weed Datasets

Training curves for FCOS and Faster RCNN with different proportions of labeled samples for two cotton weed datasets: CottonWeedDet3 and CottonWeedDet12. (A) Training curves for CottonWeedDet3 dataset (B) Training curves for CottonWeedDet12 dataset.

Semi-supervised learning consistently outperforms supervised learning on both datasets, leveraging unlabeled data to enhance training; for example, Faster RCNN-semi-5% performs better than Faster RCNN-sup-5%. Notably, FCOS-semi-50% achieves performance comparable to FCOS-100% on CottonWeedDet3 and even surpasses it on CottonWeedDet12, indicating that semi-supervised learning can match full supervision with half the labeling effort while improving robustness (Liu et al., 2021a). Performance on CottonWeedDet12 is higher than on CottonWeedDet3, owing to the larger dataset and less complex scenes.

Tables 1 and 2 summarize test performance (mAP@[0.5:0.95]) for supervised and semi-supervised Faster R-CNN and FCOS on CottonWeedDet3 and CottonWeedDet12, respectively. FCOS consistently outperforms Faster R-CNN in both learning settings, in line with the training curves in Figure 4. Semi-supervised learning improves test performance across all labeled-sample proportions. On CottonWeedDet3, semi-supervised Faster R-CNN reaches 86.70% and 93.73% of the fully supervised performance with only 20% and 50% of the labeled samples, respectively. On CottonWeedDet12, FCOS-semi-50% even outperforms fully supervised FCOS-100%, demonstrating the effectiveness of semi-supervised learning in exploiting the distribution of unlabeled data.

Table 1. Test Performance on CottonWeedDet3 Dataset

Testing performance (mAP@[0.5:0.95]) comparison between supervised and semi-supervised Faster-RCNN and FCOS models on the CottonWeedDet3 dataset.

Table 2. Test Performance on CottonWeedDet12 Dataset

Testing performance (mAP@[0.5:0.95]) comparison between supervised and semi-supervised Faster R-CNN and FCOS models on the CottonWeedDet12 dataset.

Figures 5 and 6 show predictions from supervised and semi-supervised FCOS on CottonWeedDet3 and CottonWeedDet12 with 5% and 10% labeled samples. Semi-supervised FCOS produces visually compelling predictions, particularly against complex, cluttered backgrounds and with dense weed instances, demonstrating superior performance over supervised learning. For example, in Figure 5, semi-supervised FCOS trained with 5% labeled samples outperforms its supervised counterpart, highlighting the benefit of unlabeled data.

Figure 5. Predicted Labels on CottonWeedDet3 Dataset

Examples of images annotated with ground truth labels (A) and labels predicted by semi-supervised FCOS (B) for CottonWeedDet3.

Figure 6. Comparing Supervised and Semi-Supervised Methods on CottonWeedDet12

Comparison of results on CottonWeedDet12: (A, C) supervised baseline; (B, D) semi-supervised FCOS.

3.2. Class-Specific Performance

Tables 3 and 4 show the class-specific performance of FCOS on CottonWeedDet3 and CottonWeedDet12, respectively. The instance count reflects the number of bounding boxes per weed category in the test images. CottonWeedDet12 exhibits clear class imbalance.

Table 3. Class-Specific Test Performance on CottonWeedDet3

Test performance (mAP@[0.5:0.95]) on specific weed categories in CottonWeedDet3.

Table 4. Class-Specific Test Performance on CottonWeedDet12

Test performance (mAP@[0.5:0.95]) on specific weed categories in CottonWeedDet12.

On CottonWeedDet3, semi-supervised learning shows promising performance, with the 50%-labeled model outperforming the fully supervised model, especially for Palmer amaranth. Carpetweed detection accuracy remains lower due to the plant's small size. Similar trends appear in Table 4 for CottonWeedDet12.

On CottonWeedDet12, semi-supervised FCOS with 50% and 20% labeled samples outperforms the fully supervised model on 8 of 12 and 6 of 12 weed classes, respectively. For minority weed classes (cutleaf groundcherry, goosegrass, sicklepod), semi-supervised FCOS performs better even at half the labeling cost, highlighting its potential to address class imbalance and improve performance with fewer labels.

3.3. Semi-Supervised Learning vs. Ground Truth Inaccuracies

Semi-supervised learning improves performance even with limited labeled data, surpassing supervised learning. Figure 7 shows CottonWeedDet12 samples with ground truth annotations and predictions from semi-supervised FCOS-10%. The ground truth annotations contain inaccuracies and mislabels, underscoring the challenges of manual labeling. Semi-supervised learning mitigates these issues, in some cases producing predictions more accurate than the ground truth.

Figure 7. Ground Truth vs. Semi-Supervised Predictions on CottonWeedDet12

Image samples from CottonWeedDet12 with ground truth annotations (A) and predicted results with semi-supervised FCOS-10% (B).

4. Discussion

4.1. Key Contributions

Multi-class weed detection and localization remain underexplored (Dang et al., 2023; Rai et al., 2023). Next-generation machine vision weeding systems require higher precision and weed-specific controls, making differentiation and identification of weed species critical. While DL-based weed detection has advanced (dos Santos Ferreira et al., 2017; Wang et al., 2019; Wu et al., 2021; Dang et al., 2022, 2023), it relies on large, manually labeled datasets, which are costly and error-prone. Our review (Li et al., 2023) highlighted label-efficient learning in agriculture, but these technologies are still largely unexplored in multi-class weed detection. This study uniquely contributes to weed detection research by applying semi-supervised learning to reduce labeling costs. Our evaluation of one-stage and two-stage detectors on open-source datasets demonstrates that semi-supervised learning can significantly reduce labeling costs without compromising performance, and even enhance it.

This research has positive implications for phytosanitary product use and precision agriculture. By improving weed detection efficiency and accuracy, our approach can lead to more targeted and effective use of phytosanitary products, enhancing agricultural productivity and sustainability.

4.2. Limitations

This study has several limitations. We did not evaluate all DL-based object detectors; potentially high-performing models such as SSD (Liu et al., 2016), RetinaNet (Lin et al., 2017), EfficientDet (Tan et al., 2020), the YOLO series (Dang et al., 2023; Terven and Cordova-Esparza, 2023), DINO (Zhang et al., 2022), CenterNetv2 (Zhou et al., 2021b), and RTMDet (Lyu et al., 2022) were not included. Future work will incorporate these models into our benchmark.

We assumed that unlabeled samples come from the same distribution as labeled samples. In practice, unlabeled data may contain unknown classes, posing an open-set challenge (Chen et al., 2020) that could degrade label-efficient learning. Future research will address out-of-distribution (OOD) issues using advanced sample-specific selection strategies to identify and down-weight OOD samples (Guo et al., 2020), enhancing generalization and robustness on datasets with unseen classes.

5. Conclusion

This study extensively evaluated semi-supervised learning for multi-class weed detection, using both one-stage and two-stage object detectors and CottonWeedDet3 and CottonWeedDet12 datasets relevant to U.S. cotton production. Semi-supervised learning significantly reduced labeling costs with minimal impact on detection performance and improved model robustness and accuracy by leveraging abundant unlabeled data and mitigating ground truth noise. These results highlight semi-supervised learning as a cost-effective and efficient approach for agricultural applications requiring extensive data annotations.

Future work will refine our semi-supervised learning framework by testing and incorporating more high-performing object detectors. We will also address the open-set challenge and out-of-distribution issues using advanced sample selection strategies to improve the robustness and generalization of our approach in real-world agricultural settings.

Funding Statement

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Footnotes

¹ https://github.com/JiajiaLi04/SemiWeeds

² CottonWeedDet3 dataset: https://www.kaggle.com/datasets/yuzhenlu/cottonweeddet3

³ CottonWeedDet12 dataset: https://zenodo.org/record/7535814

Data availability statement

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

Author contributions

JL: Writing – review & editing, Writing – original draft, Visualization, Validation, Software, Methodology, Investigation, Formal analysis, Conceptualization. DC: Writing – original draft, Formal analysis, Conceptualization. XY: Writing – review & editing, Investigation. ZL: Writing – review & editing, Supervision, Resources.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References
