Federal University of São Carlos
The evaluation of vision-language models (VLMs) has mainly relied on English-language benchmarks, leaving significant gaps in both multilingual and multicultural coverage. While multilingual benchmarks have expanded in both size and language coverage, many rely on translations of English datasets, failing to capture cultural nuances. In this work, we propose Kaleidoscope, the most comprehensive exam benchmark to date for the multilingual evaluation of vision-language models. Kaleidoscope is a large-scale, in-language multimodal benchmark designed to evaluate VLMs across diverse languages and visual inputs. Kaleidoscope covers 18 languages and 14 different subjects, amounting to a total of 20,911 multiple-choice questions. Built through an open science collaboration with a diverse group of researchers worldwide, Kaleidoscope ensures linguistic and cultural authenticity. We evaluate top-performing multilingual vision-language models and find that they perform poorly on low-resource languages and in complex multimodal scenarios. Our results highlight the need for progress on culturally inclusive multimodal evaluation frameworks.
Conformal prediction methods create prediction bands with distribution-free guarantees but do not explicitly capture epistemic uncertainty, which can lead to overconfident predictions in data-sparse regions. Although recent conformal scores have been developed to address this limitation, they are typically designed for specific tasks, such as regression or quantile regression. Moreover, they rely on particular modeling choices for epistemic uncertainty, restricting their applicability. We introduce \texttt{EPICSCORE}, a model-agnostic approach that enhances any conformal score by explicitly integrating epistemic uncertainty. Leveraging Bayesian techniques such as Gaussian Processes, Monte Carlo Dropout, or Bayesian Additive Regression Trees, \texttt{EPICSCORE} adaptively expands predictive intervals in regions with limited data while maintaining compact intervals where data is abundant. As with any conformal method, it preserves finite-sample marginal coverage. Additionally, it achieves asymptotic conditional coverage. Experiments demonstrate its good performance compared to existing methods. Designed for compatibility with any Bayesian model, but equipped with distribution-free guarantees, \texttt{EPICSCORE} provides a general-purpose framework for uncertainty quantification in prediction problems.
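A minimal sketch of the general idea, using split conformal prediction with a score normalized by a Gaussian Process posterior standard deviation; this is an assumption-laden illustration, not the paper's exact \texttt{EPICSCORE} definition. Intervals widen where the Bayesian model is uncertain and stay tight where data are abundant.

```python
# Illustrative only: split-conformal intervals whose score is |y - mu| / sigma with
# mu, sigma from a Gaussian Process, so bands expand in data-sparse regions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=300)

# Split into proper training and calibration sets.
X_tr, y_tr = X[:200], y[:200]
X_cal, y_cal = X[200:], y[200:]

gp = GaussianProcessRegressor().fit(X_tr, y_tr)

# Epistemic-aware conformal score: absolute residual scaled by the posterior std.
mu_cal, sd_cal = gp.predict(X_cal, return_std=True)
scores = np.abs(y_cal - mu_cal) / sd_cal

# Conformal quantile with the usual finite-sample correction (90% coverage).
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * 0.9) / n)

# Prediction band at a new point: wide where the GP is uncertain, tight where it is not.
x_new = np.array([[0.5]])
mu_new, sd_new = gp.predict(x_new, return_std=True)
print(mu_new - q * sd_new, mu_new + q * sd_new)
```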
Segment Anything Model (SAM), a new AI model from Meta AI released in April 2023, is an ambitious tool designed to identify and separate individual objects within a given image through semantic interpretation. The advanced capabilities of SAM are the result of its training with millions of images and masks, and a few days after its release, several researchers began testing the model on medical images to evaluate its performance in this domain. With this perspective in focus -- i.e., optimizing work in the healthcare field -- this work proposes the use of this new technology to evaluate and study chest X-ray images. The approach adopted in this work, with the aim of improving the model's performance for lung segmentation, involved a transfer learning process, specifically the fine-tuning technique. After applying this adjustment, a substantial improvement was observed in the evaluation metrics used to assess SAM's performance against the masks provided by the datasets. The results obtained by the model after the adjustments were satisfactory and comparable to those of cutting-edge neural networks, such as U-Net.
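As a hedged illustration of the kind of overlap metrics commonly used for such mask comparisons (the paper's exact metric set is not restated here), the following computes Dice and IoU between a predicted mask and a dataset mask:

```python
# Dice and IoU between binary masks; arrays are assumed to have the same shape.
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

# Toy masks standing in for a SAM prediction and a dataset ground-truth lung mask.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:4] = True
print(dice_and_iou(pred, gt))
```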
Prediction algorithms, such as deep neural networks (DNNs), are used in many domain sciences to directly estimate internal parameters of interest in simulator-based models, especially in settings where the observations include images or complex high-dimensional data. In parallel, modern neural density estimators, such as normalizing flows, are becoming increasingly popular for uncertainty quantification, especially when both parameters and observations are high-dimensional. However, parameter inference is an inverse problem and not a prediction task; thus, an open challenge is to construct conditionally valid and precise confidence regions, with a guaranteed probability of covering the true parameters of the data-generating process, no matter what the (unknown) parameter values are, and without relying on large-sample theory. Many simulator-based inference (SBI) methods are indeed known to produce biased or overly confident parameter regions, yielding misleading uncertainty estimates. This paper presents WALDO, a novel method to construct confidence regions with finite-sample conditional validity by leveraging prediction algorithms or posterior estimators that are currently widely adopted in SBI. WALDO reframes the well-known Wald test statistic, and uses a computationally efficient regression-based machinery for classical Neyman inversion of hypothesis tests. We apply our method to a recent high-energy physics problem, where prediction with DNNs has previously led to estimates with prediction bias. We also illustrate how our approach can correct overly confident posterior regions computed with normalizing flows.
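For intuition, the construction replaces the maximum-likelihood estimate and its variance in the classical Wald statistic with a conditional mean and variance obtained from a prediction algorithm or posterior estimator; schematically (our paraphrase, with $\mathcal{D}$ denoting the observed data and $\theta_0$ the null value),

$$\tau^{\mathrm{WALDO}}(\mathcal{D};\theta_0) \;=\; \big(\mathbb{E}[\theta\mid\mathcal{D}] - \theta_0\big)^{\top}\,\mathbb{V}[\theta\mid\mathcal{D}]^{-1}\,\big(\mathbb{E}[\theta\mid\mathcal{D}] - \theta_0\big),$$

with critical values for the Neyman inversion estimated by regression rather than by exhaustive simulation at every parameter value.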
We investigate the escape dynamics in an open circular billiard under the influence of a uniform gravitational field. The system properties are investigated as a function of the particle total energy and the size of two symmetrically placed holes in the boundary. Using a suite of quantitative tools including escape basins, basin entropy ($S_b$), mean escape time ($\bar{\tau}$), and survival probability ($P(n)$), we characterize a system that transitions from a fully chaotic, hyperbolic regime at low energies to a non-hyperbolic, mixed phase space at higher energies. Our results demonstrate that this transition is marked by the emergence of Kolmogorov-Arnold-Moser (KAM) islands. We show that both the basin entropy and the mean escape time are sensitive to this transition, with the former peaking and the latter increasing sharply as the sticky KAM islands appear. The survival probability analysis confirms this dynamical picture, shifting from a pure exponential decay in the hyperbolic regime to a power-law-like decay with a saturation plateau in the mixed regime, which directly quantifies the measure of trapped orbits. In the high-energy limit, the system dynamics approaches an integrable case, leading to a corresponding decrease in complexity as measured by both $S_b$ and $\bar{\tau}$.
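To make the basin-entropy measurement concrete, here is an illustrative computation of $S_b$ on a synthetic two-exit escape basin: the grid is split into small boxes and the Gibbs entropy of the exit labels inside each box is averaged (the billiard simulation itself is not reproduced).

```python
# Basin entropy on a stand-in basin grid whose cells are labeled by exit (0 or 1).
import numpy as np

rng = np.random.default_rng(1)
basin = rng.integers(0, 2, size=(200, 200))  # synthetic exit labels

def basin_entropy(basin: np.ndarray, box: int = 5) -> float:
    n_rows, n_cols = basin.shape
    entropies = []
    for i in range(0, n_rows - box + 1, box):
        for j in range(0, n_cols - box + 1, box):
            block = basin[i:i + box, j:j + box].ravel()
            _, counts = np.unique(block, return_counts=True)
            p = counts / counts.sum()
            entropies.append(-(p * np.log(p)).sum())
    # Average Gibbs entropy over all boxes.
    return float(np.mean(entropies))

print(basin_entropy(basin))
```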
A novel neural network architecture from Brazilian physics researchers combines mixed-function neurons with second-order neurons to solve differential equations, reducing parameter count by up to 4 orders of magnitude compared to standard Physics-Informed Neural Networks while maintaining accuracy and enabling extraction of interpretable analytical expressions.
In this work, we present web scraping techniques to extract information from patent tables, clean and structure them for future use in predictive machine learning models to develop new glasses. We extracted compositions and three properties relevant to the development of new glasses and structured them into a database to be used together with information from other available datasets. We also analyzed the consistency of the information obtained and what it adds to the existing databases. The extracted liquidus temperatures comprise 5,696 compositions; the second subset includes 4,298 refractive indexes and, finally, 1,771 compositions with Abbe numbers. The extraction performed here increases the available information by approximately 10.4% for liquidus temperature, 6.6% for refractive index, and 4.9% for Abbe number. The impact extends beyond quantity: the newly extracted data introduce compositions with property values that are more diverse than those in existing databases, thereby expanding the accessible compositional and property space for glass modeling applications. We emphasize that the compositions of the new database contain relatively more titanium, magnesium, zirconium, niobium, iron, tin, and yttrium oxides than those of the existing bases.
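A hedged sketch of the table-extraction step using pandas; the HTML snippet and column names below are placeholders, not the patent sources or schema used in the work.

```python
# pandas.read_html parses HTML tables into DataFrames, which can then be cleaned and
# merged into a composition-property database.
import io
import pandas as pd

# Hypothetical snippet standing in for a patent page's HTML (the real sources differ).
html = """
<table>
  <tr><th>SiO2 (mol%)</th><th>TiO2 (mol%)</th><th>Refractive index</th></tr>
  <tr><td>70</td><td>10</td><td>1.52</td></tr>
  <tr><td>65</td><td>15</td><td>1.55</td></tr>
</table>
"""
tables = pd.read_html(io.StringIO(html))  # one DataFrame per HTML table found
glass_table = tables[0].rename(columns={"Refractive index": "nd"})
print(glass_table)
```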
We introduce orbifolds from the classical point of view, using charts, and present orbifold versions of elementary objects from Algebraic Topology, such as the fundamental group, coverings and Euler characteristic; Differential Topology/Geometry, including orbibundles, differential forms, integration and (equivariant) De Rham cohomology; and Riemannian Geometry, surveying generalizations of classical theorems to this setting.
Quantum computing with qudits, an extension of qubits to multiple levels, is a research field less mature than qubit-based quantum computing. However, qudits can offer some advantages over qubits, by representing information with fewer separate components. In this article, we present QuForge, a Python-based library designed to simulate quantum circuits with qudits. This library provides the necessary quantum gates for implementing quantum algorithms, tailored to any chosen qudit dimension. Built on top of differentiable frameworks, QuForge supports execution on accelerating devices such as GPUs and TPUs, significantly speeding up simulations. It also supports sparse operations, leading to a reduction in memory consumption compared to other libraries. Additionally, by constructing quantum circuits as differentiable graphs, QuForge facilitates the implementation of quantum machine learning algorithms, enhancing the capabilities and flexibility of quantum computing research.
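Because QuForge's own API is not reproduced here, the following generic NumPy sketch only illustrates the underlying qudit idea: a generalized shift (Pauli-X-like) gate for dimension d acting on a single-qudit state vector.

```python
# Generalized X ("shift") gate for a d-level system: |k> -> |k+1 mod d>.
import numpy as np

d = 3  # qutrit
X_d = np.roll(np.eye(d), shift=1, axis=0)

state = np.zeros(d, dtype=complex)
state[0] = 1.0             # start in |0>
state = X_d @ state        # now |1>
print(np.abs(state) ** 2)  # measurement probabilities: [0, 1, 0]
```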
Cross-platform development solutions can help make software available on different devices and platforms. However, these solutions are normally restricted to preconfigured platforms and assume that each individual solution is equal or similar to the others. As a result, developers have to resort to native development and build individual solutions, one for each device/platform, that cooperate to deliver the desired global functionality. This article presents an approach that takes advantage of existing solutions and has support for extending and including new platforms and distributing functionality across devices. The approach is based on a general-purpose language that raises the abstraction level in order to keep the software free from platform details. Automatic transformations produce executable code that can be properly divided and deployed separately onto different platforms. The proposed approach was evaluated in four ways. In the first evaluation, an existing cross-platform system was recreated using the approach. The second and third evaluations were conducted with expert and novice developers, who tested the approach in practice. The fourth evaluation introduced support for cross-platform testing. The results provide evidence supporting the following main contributions: the use of a single environment, the ability to reuse similar concepts between platforms, and the potential to reduce costs.
Heuristic evaluation is a widely used method in Human-Computer Interaction (HCI) to inspect interfaces and identify issues based on heuristics. Recently, Large Language Models (LLMs), such as GPT-4o, have been applied in HCI to assist in persona creation, the ideation process, and the analysis of semi-structured interviews. However, considering the need to understand heuristics and the high degree of abstraction required to evaluate them, LLMs may have difficulty conducting heuristic evaluation. Moreover, prior research has not investigated GPT-4o's performance in heuristic evaluation of web-based systems compared to HCI experts. In this context, this study aims to compare the results of a heuristic evaluation performed by GPT-4o and human experts. To this end, we selected a set of screenshots from a web system and asked GPT-4o to perform a heuristic evaluation based on Nielsen's heuristics using a literature-grounded prompt. Our results indicate that only 21.2% of the issues identified by human experts were also identified by GPT-4o, although it identified 27 new issues. We also found that GPT-4o performed better for heuristics related to aesthetic and minimalist design and match between system and real world, whereas it had difficulty identifying issues related to flexibility, control, and user efficiency. Additionally, we noticed that GPT-4o generated several false positives due to hallucinations and attempts to predict issues. Finally, we highlight five takeaways for the conscious use of GPT-4o in heuristic evaluations.
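A hedged sketch of how a screenshot can be submitted to GPT-4o for such an evaluation via the OpenAI Python client; the prompt below is a simplification, not the literature-grounded prompt used in the study, and the file name is a placeholder.

```python
# Send one interface screenshot plus an evaluation instruction to GPT-4o.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("screenshot.png", "rb") as f:  # placeholder screenshot of the web system
    image_b64 = base64.b64encode(f.read()).decode()

prompt = (
    "You are a usability expert. Inspect this interface using Nielsen's 10 heuristics "
    "and list each issue as: heuristic violated, description, severity (0-4)."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```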
Particle physics experiments rely on the (generalised) likelihood ratio test (LRT) for searches and measurements, which consist of composite hypothesis tests. However, this test is not guaranteed to be optimal, as the Neyman-Pearson lemma pertains only to simple hypothesis tests. Any choice of test statistic thus implicitly determines how statistical power varies across the parameter space. An improvement in the core statistical testing methodology for general settings with composite tests would have widespread ramifications across experiments. We discuss an alternate test statistic that gives the data analyst the ability to focus the power of the test on physics-motivated regions of the parameter space. We demonstrate the improvement from this technique compared to the LRT on a Higgs $\rightarrow \tau\tau$ dataset simulated by the ATLAS experiment and a dark matter dataset inspired by the LZ experiment. We also employ machine learning to efficiently perform the Neyman construction, which is essential to ensure statistically valid confidence intervals.
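To make the Neyman construction concrete, the toy sketch below inverts a simple test for a Gaussian mean by estimating critical values from simulations on a parameter grid; the paper's contribution replaces this brute-force quantile estimation with machine-learned quantiles and uses a different, physics-motivated test statistic.

```python
# Toy Neyman construction: keep every mu whose observed statistic is below the
# simulated critical value at that mu.
import numpy as np

rng = np.random.default_rng(0)
x_obs = 1.3          # observed measurement
sigma = 1.0
alpha = 0.32         # 68% confidence set
mu_grid = np.linspace(-2, 4, 121)

confidence_set = []
for mu in mu_grid:
    stat_obs = (x_obs - mu) ** 2 / sigma ** 2
    # Null distribution of the statistic under this mu, via simulation.
    sims = rng.normal(mu, sigma, size=20000)
    stat_sims = (sims - mu) ** 2 / sigma ** 2
    critical = np.quantile(stat_sims, 1 - alpha)
    if stat_obs <= critical:
        confidence_set.append(mu)

print(min(confidence_set), max(confidence_set))  # roughly x_obs +/- 1 sigma
```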
Automating aircraft manufacturing still relies heavily on human labor due to the complexity of the assembly processes and customization requirements. One key challenge is achieving precise positioning, especially for large aircraft structures, where errors can lead to substantial maintenance costs or part rejection. Existing solutions often require costly hardware or lack flexibility. Threaded fasteners, e.g., screws, bolts, and collars, are used in aircraft by the thousands; their assembly is traditionally executed by fixed-base robots, which are difficult to deploy in such manufacturing sites. This paper emphasizes the importance of error detection and classification for the efficient and safe assembly of threaded fasteners, especially aeronautical collars. Safe assembly of threaded fasteners is paramount, but acquiring sufficient data for training deep learning models poses challenges due to the rarity of failure cases and imbalanced datasets. The paper addresses this by proposing techniques such as class weighting and data augmentation, specifically tailored for time-series data, to improve classification performance. Furthermore, the paper introduces a novel problem-modeling approach, emphasizing metrics relevant to collar assembly rather than solely focusing on accuracy. This tailored approach enhances the models' capability to handle the challenges of threaded fastener assembly effectively.
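A brief sketch of the two ideas named above, class weighting and time-series augmentation, on synthetic data; the shapes, class labels, and jitter augmentation are illustrative assumptions, not the paper's exact recipe.

```python
# Per-class weights from label frequencies, plus jitter augmentation of rare failure traces.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
signals = rng.normal(size=(1000, 200))                  # 1000 stand-in torque/angle traces
labels = rng.choice([0, 1], size=1000, p=[0.95, 0.05])  # 1 = rare failure case

# Class weights inversely proportional to frequency, to be passed to the loss function.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=labels)
print(dict(zip([0, 1], weights)))

# Jitter augmentation: add small Gaussian noise to failure traces to create new samples.
failure = signals[labels == 1]
augmented = failure + rng.normal(scale=0.05, size=failure.shape)
signals_aug = np.vstack([signals, augmented])
labels_aug = np.concatenate([labels, np.ones(len(augmented), dtype=int)])
print(signals_aug.shape, labels_aug.mean())
```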
Unmanned Aerial Vehicles need online path-planning capabilities to complete high-risk missions safely in unknown and complex environments. However, many algorithms reported in the literature may not return reliable trajectories for online problems in these scenarios. The Q-Learning algorithm, a Reinforcement Learning technique, can generate trajectories in real time and has demonstrated fast and reliable results. This technique, however, has the disadvantage of requiring the number of iterations to be defined in advance: if this value is not well chosen, the algorithm either takes a long time or does not return an optimal trajectory. Therefore, we propose a method to dynamically choose the number of iterations to obtain the best performance of Q-Learning. The proposed method is compared to the Q-Learning algorithm with a fixed number of iterations, A*, Rapidly-Exploring Random Tree, and Particle Swarm Optimization. As a result, the proposed Q-Learning algorithm demonstrates the efficacy and reliability of online path planning with a dynamic number of iterations for carrying out online missions in unknown and complex environments.
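An illustrative toy version of the idea on a small grid world: Q-Learning stops when the largest Q-value update in an episode falls below a tolerance, rather than after a fixed number of iterations (the UAV state space and the paper's specific stopping criterion are not reproduced).

```python
# Q-Learning on a 5x5 grid with a convergence-based stopping rule.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 25, 4          # 5x5 grid, moves: up/down/left/right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps, tol = 0.1, 0.95, 0.2, 1e-4

def step(s, a):
    row, col = divmod(s, 5)
    if a == 0: row = max(row - 1, 0)
    elif a == 1: row = min(row + 1, 4)
    elif a == 2: col = max(col - 1, 0)
    else: col = min(col + 1, 4)
    s2 = row * 5 + col
    return s2, (1.0 if s2 == goal else -0.01), s2 == goal

episode = 0
while True:
    episode += 1
    s, done, max_delta = 0, False, 0.0
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        target = r + gamma * np.max(Q[s2]) * (not done)
        max_delta = max(max_delta, abs(target - Q[s, a]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
    if max_delta < tol or episode >= 5000:  # stop when updates become negligible
        break

print("episodes used:", episode)
```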
Due to their unique optical and electronic functionalities, chalcogenide glasses are materials of choice for numerous microelectronic and photonic devices. However, to extend the range of compositions and applications, profound knowledge about composition-property relationships is necessary. To this end, we collected a large quantity of composition-property data on chalcogenide glasses from SciGlass database regarding glass transition temperature ($T_g$), Young's modulus ($E$), coefficient of thermal expansion (CTE), and refractive index ($n_D$). With these data, we induced predictive models using three machine learning algorithms: Random Forest, K-nearest Neighbors, and Classification and Regression Trees. Finally, the induced models were interpreted by computing the SHAP (SHapley Additive exPlanations) values of the chemical features, which revealed the key elements that significantly impacted the tested properties and quantified their impact. For instance, Ge and Ga increase $T_g$ and $E$ and decrease CTE (three properties that depend on bond strength), whereas Se has the opposite effect. Te, As, Tl, and Sb increase $n_D$ (which strongly depends on polarizability), whereas S, Ge, and P diminish it. Knowledge about the effect of each element on the glass properties is precious for semi-empirical compositional development trials or simulation-driven formulations. The induced models can be used to design novel chalcogenide glasses with required combinations of properties.
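A minimal sketch of the interpretation step on synthetic stand-in data (not the SciGlass compositions): a Random Forest is fit on element-fraction features and SHAP values quantify each element's impact on the predicted property.

```python
# Random Forest + SHAP attribution on toy composition features.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.dirichlet(np.ones(4), size=500), columns=["Ge", "Se", "As", "S"])
# Toy target mimicking a Tg that rises with Ge and falls with Se content.
y = 600 * X["Ge"] - 200 * X["Se"] + 10 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_impact = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(mean_impact.sort_values(ascending=False))
```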
The $k$-nearest neighbor ($k$-NN) algorithm is one of the most popular methods for nonparametric classification. However, a relevant limitation concerns the definition of the number of neighbors $k$. This parameter exerts a direct impact on several properties of the classifier, such as the bias-variance tradeoff, smoothness of decision boundaries, robustness to noise, and class imbalance handling. In the present paper, we introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighborhood size. The rationale is that points with low curvature could have larger neighborhoods (locally, the tangent space approximates well the underlying data shape), whereas points with high curvature could have smaller neighborhoods (locally, the tangent space is a loose approximation). We estimate the local Gaussian curvature by computing an approximation to the local shape operator in terms of the local covariance matrix as well as the local Hessian matrix. Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method and also to another adaptive $k$-NN algorithm. This is particularly evident when the number of samples in the training data is limited, suggesting that the $kK$-NN is capable of learning more discriminant functions with less data in many relevant cases.
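A simplified illustration of the curvature-adaptive idea: a per-point curvature proxy derived from the local covariance spectrum is mapped to a neighborhood size, so flat regions get a large $k$ and curved regions a small one. The proxy below is an assumption for illustration, not the paper's shape-operator estimate.

```python
# Adaptive per-point k from a local covariance-based curvature proxy.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
nn = NearestNeighbors(n_neighbors=30).fit(X)
_, idx = nn.kneighbors(X)  # first neighbor is the point itself

k_min, k_max = 3, 25
k_per_point = np.empty(len(X), dtype=int)
for i, neigh in enumerate(idx):
    local = X[neigh] - X[neigh].mean(axis=0)
    eigvals = np.linalg.eigvalsh(local.T @ local / len(neigh))
    # Proxy: how close the local cloud is to being one-dimensional (flat).
    flatness = eigvals[-1] / (eigvals.sum() + 1e-12)
    k_per_point[i] = int(k_min + (k_max - k_min) * flatness)

# Classify each point with its own k, using its neighbors (excluding itself) as votes.
pred = np.array([np.bincount(y[idx[i, 1:k_per_point[i] + 1]]).argmax() for i in range(len(X))])
print("agreement with true labels:", (pred == y).mean())
```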
People with some kind of disability face great difficulty with everyday tasks because, in many cases, accessibility was not considered necessary when the task or process was designed. An example of this scenario is a computer's BIOS configuration screens, which do not consider the specific needs of visually impaired people, such as screen readers. This paper proposes the idea that it is possible to make the pre-operating-system environment accessible to visually impaired people. We report our work in progress on creating a screen reader prototype that accesses audio cards compatible with the High Definition Audio specification in systems running UEFI-compliant firmware.
Tropical cyclone (TC) intensity forecasts are issued by human forecasters who evaluate spatio-temporal observations (e.g., satellite imagery) and model output (e.g., numerical weather prediction, statistical models) to produce forecasts every 6 hours. Within these time constraints, it can be challenging to draw insight from such data. While high-capacity machine learning methods are well suited for prediction problems with complex sequence data, extracting interpretable scientific information with such methods is difficult. Here we leverage powerful AI prediction algorithms and classical statistical inference to identify patterns in the evolution of TC convective structure leading up to the rapid intensification of a storm, hence providing forecasters and scientists with key insight into TC behavior.
Kernel methods provide a flexible and theoretically grounded approach to nonlinear and nonparametric learning. While memory and run-time requirements hinder their applicability to large datasets, many low-rank kernel approximations, such as random Fourier features, were recently developed to scale up such kernel methods. However, these scalable approaches are based on approximations of isotropic kernels, which cannot remove the influence of irrelevant features. In this work, we design random Fourier features for a family of automatic relevance determination (ARD) kernels, and introduce RFFNet, a new large-scale kernel method that learns the kernel relevances on the fly via first-order stochastic optimization. We present an effective initialization scheme for the method's non-convex objective function, evaluate whether hard-thresholding RFFNet's learned relevances yields a sensible rule for variable selection, and perform an extensive ablation study of RFFNet's components. Numerical validation on simulated and real-world data shows that our approach has a small memory footprint and run-time, achieves low prediction error, and effectively identifies relevant features, thus leading to more interpretable solutions. We supply users with an efficient, PyTorch-based library that adheres to the scikit-learn standard API, along with code for fully reproducing our results.
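A sketch of the core building block in PyTorch: random Fourier features for an ARD Gaussian kernel with trainable per-feature relevances. This illustrates the idea and is not the RFFNet library's own implementation.

```python
# ARD random Fourier features: inputs are rescaled by learnable relevances before the
# random projection, so near-zero relevances effectively switch features off.
import torch
import torch.nn as nn

class ARDRandomFourierFeatures(nn.Module):
    def __init__(self, n_features: int, n_components: int = 256):
        super().__init__()
        self.register_buffer("W", torch.randn(n_features, n_components))
        self.register_buffer("b", 2 * torch.pi * torch.rand(n_components))
        self.relevances = nn.Parameter(torch.ones(n_features))  # one per input feature
        self.n_components = n_components

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = (x * self.relevances) @ self.W + self.b
        return torch.sqrt(torch.tensor(2.0 / self.n_components)) * torch.cos(proj)

# A linear head on top gives a kernel-machine-like predictor trainable end to end
# with first-order stochastic optimization.
model = nn.Sequential(ARDRandomFourierFeatures(n_features=10), nn.Linear(256, 1))
x = torch.randn(32, 10)
print(model(x).shape)  # torch.Size([32, 1])
```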
It is well known in astronomy that propagating non-Gaussian prediction uncertainty in photometric redshift estimates is key to reducing bias in downstream cosmological analyses. Similarly, likelihood-free inference approaches, which are beginning to emerge as a tool for cosmological analysis, require a characterization of the full uncertainty landscape of the parameters of interest given observed data. However, most machine learning (ML) or training-based methods with open-source software target point prediction or classification, and hence fall short in quantifying uncertainty in complex regression and parameter inference settings. As an alternative to methods that focus on predicting the response (or parameters) $\mathbf{y}$ from features $\mathbf{x}$, we provide nonparametric conditional density estimation (CDE) tools for approximating and validating the entire probability density function (PDF) $\mathrm{p}(\mathbf{y}|\mathbf{x})$ of $\mathbf{y}$ given (i.e., conditional on) $\mathbf{x}$. As there is no one-size-fits-all CDE method, the goal of this work is to provide a comprehensive range of statistical tools and open-source software for nonparametric CDE and method assessment which can accommodate different types of settings and be easily fit to the problem at hand. Specifically, we introduce four CDE software packages in \texttt{Python} and \texttt{R} based on ML prediction methods adapted and optimized for CDE: \texttt{NNKCDE}, \texttt{RFCDE}, \texttt{FlexCode}, and \texttt{DeepCDE}. Furthermore, we present the \texttt{cdetools} package, which includes functions for computing a CDE loss function for tuning and assessing the quality of individual PDFs, along with diagnostic functions. We provide sample code in \texttt{Python} and \texttt{R} as well as examples of applications to photometric redshift estimation and likelihood-free cosmological inference via CDE.
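As an illustration of a CDE loss of the form $L(\hat{p}) = \mathbb{E}\!\left[\int \hat{p}(y\mid\mathbf{x})^2\,dy\right] - 2\,\mathbb{E}\!\left[\hat{p}(Y\mid\mathbf{X})\right]$ (the \texttt{cdetools} package provides its own implementation; this reimplementation is only a sketch), the following estimates the loss from densities evaluated on a fixed grid:

```python
# Empirical CDE loss: mean of the integrated squared density minus twice the mean
# density evaluated at the observed responses.
import numpy as np

def cde_loss(cde_grid: np.ndarray, y_grid: np.ndarray, y_true: np.ndarray) -> float:
    """cde_grid: (n_obs, n_grid) estimated densities p_hat(y_grid | x_i)."""
    dy = y_grid[1] - y_grid[0]
    term1 = np.mean(np.sum(cde_grid ** 2, axis=1) * dy)         # E[integral of p_hat^2]
    nearest = np.abs(y_grid[None, :] - y_true[:, None]).argmin(axis=1)
    term2 = np.mean(cde_grid[np.arange(len(y_true)), nearest])  # E[p_hat(Y|X)]
    return float(term1 - 2 * term2)

# Toy check: densities that concentrate near the observed y values give a lower loss.
y_grid = np.linspace(-4, 4, 200)
y_true = np.random.default_rng(0).normal(size=100)
good = np.exp(-0.5 * (y_grid[None, :] - y_true[:, None]) ** 2) / np.sqrt(2 * np.pi)
flat = np.full((100, 200), 1 / 8.0)
print(cde_loss(good, y_grid, y_true), cde_loss(flat, y_grid, y_true))
```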