Modern password hashing remains a critical defense against credential cracking, yet the transition from theoretically secure algorithms to robust real-world implementations is fraught with challenges. This paper presents a dual analysis of Argon2, the Password Hashing Competition winner, combining attack simulations that quantify how parameter configurations affect guessing costs under realistic budgets with the first large-scale empirical study of Argon2 adoption across public GitHub software repositories. Our economic model, validated against cryptocurrency mining benchmarks, demonstrates that OWASP's recommended 46 MiB configuration reduces compromise rates by 42.5% compared to SHA-256 at \$1/account attack budgets for strong user passwords. However, memory-hardness exhibits diminishing returns: increasing allocations to RFC 9106's 2048 MiB provides just 23.3% (\$1) and 17.7% (\$20) additional protection despite 44.5 times greater memory demands. Crucially, both configurations fail to mitigate risks from weak passwords, with 96.9-99.8% compromise rates for RockYou-like credentials regardless of algorithm choice. Our repository analysis shows accelerating Argon2 adoption, yet weak configuration practices: 46.6% of deployments use weaker-than-OWASP parameters. Surprisingly, sensitive applications (password managers, encryption tools) show no stronger configurations than general software. Our findings highlight that a secure algorithm alone cannot ensure security: effective parameter guidance and developer education remain essential for realizing Argon2's theoretical advantages.
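For a concrete sense of what these configurations look like in code, the sketch below hashes a password with Argon2id at the two memory settings discussed above, using the Python argon2-cffi library; the time_cost and parallelism values are illustrative assumptions, not parameters taken from the paper.

```python
# Illustrative sketch (not from the paper): Argon2id hashing with the two memory
# settings discussed above, via argon2-cffi. memory_cost is given in KiB.
# time_cost and parallelism are assumptions for illustration only.
from argon2 import PasswordHasher, Type

# Roughly the OWASP-recommended profile discussed in the abstract: 46 MiB of memory.
owasp_like = PasswordHasher(
    time_cost=1, memory_cost=46 * 1024, parallelism=1, type=Type.ID
)

# Roughly the RFC 9106 high-memory profile: 2048 MiB of memory.
rfc9106_like = PasswordHasher(
    time_cost=1, memory_cost=2048 * 1024, parallelism=4, type=Type.ID
)

digest = owasp_like.hash("correct horse battery staple")
print(owasp_like.verify(digest, "correct horse battery staple"))  # True
```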
How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions - such as assistant, mental health provider, tutor, or romantic partner - it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI's capacity to fulfill relationship-specific functions and adhere to corresponding norms. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.
Large language models (LLMs) offer the potential to simulate human-like responses and behaviors, creating new opportunities for psychological science. In the context of self-regulated learning (SRL), if LLMs can reliably simulate survey responses at scale and speed, they could be used to test intervention scenarios, refine theoretical models, augment sparse datasets, and represent hard-to-reach populations. However, the validity of LLM-generated survey responses remains uncertain, with limited research focused on SRL and existing studies beyond SRL yielding mixed results. Therefore, in this study, we examined LLM-generated responses to the 44-item Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich \& De Groot, 1990), a widely used instrument assessing students' learning strategies and academic motivation. Specifically, we used the LLMs GPT-4o, Claude 3.7 Sonnet, Gemini 2 Flash, LLaMA 3.1-8B, and Mistral Large. We analyzed item distributions, the psychological network of the theoretical SRL dimensions, and psychometric validity based on the latent factor structure. Our results suggest that Gemini 2 Flash was the most promising LLM, showing considerable sampling variability and producing underlying dimensions and theoretical relationships that align with prior theory and empirical findings. At the same time, we observed discrepancies and limitations, underscoring both the potential and current constraints of using LLMs to simulate psychological survey data and to apply such simulations in educational contexts.
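A minimal sketch of how such LLM-generated survey responses could be collected, assuming the OpenAI Python client; the persona, prompt wording, and the single MSLQ-style item are illustrative stand-ins rather than the authors' protocol.

```python
# Illustrative sketch (not the authors' pipeline): asking an LLM to answer a
# 7-point Likert item in the persona of a simulated student.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

item = "When I study, I put important ideas into my own words."
persona = "You are a first-year undergraduate answering a learning-strategies survey."

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=1.0,  # retain sampling variability, as discussed in the abstract
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Rate the following statement on a 1-7 scale "
                                    f"and reply with only the number: {item}"},
    ],
)
print(response.choices[0].message.content)  # e.g. "5"
```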
We propose a high-dimensional structural vector autoregression framework that features a factor structure in the error terms and accommodates a large number of linear inequality restrictions on impact impulse responses, structural shocks, and their element-wise products. In particular, we demonstrate that narrative restrictions can be imposed via constraints on the structural shocks, which can be used to sharpen inference and disentangle structurally interpretable shocks. To estimate the model, we develop a highly efficient sampling algorithm that scales well with both the model dimension and the number of inequality restrictions on impact responses and structural shocks. It remains computationally feasible even in settings where existing algorithms may break down. To illustrate the practical utility of our approach, we identify five structural shocks and examine the dynamic responses of thirty macroeconomic variables, highlighting the model's flexibility and feasibility in complex empirical applications. We provide empirical evidence that financial shocks are the most important driver of business cycle dynamics.
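The sketch below illustrates the standard accept/reject check of sign restrictions on impact responses via random orthogonal rotations; it is a generic illustration under an assumed impact matrix and sign pattern, not the paper's sampling algorithm.

```python
# Minimal sketch (not the paper's algorithm): accept/reject of sign restrictions
# on impact responses. B0_inv is a placeholder reduced-form impact matrix; the
# sign pattern below is purely illustrative (0 = unrestricted).
import numpy as np

rng = np.random.default_rng(0)
n = 4
B0_inv = rng.standard_normal((n, n))          # stand-in for e.g. a Cholesky factor of Sigma
signs = np.array([[+1, 0, 0, 0],
                  [+1, 0, 0, 0],
                  [-1, 0, 0, 0],
                  [ 0, 0, 0, 0]])

def satisfies_signs(impact, signs):
    mask = signs != 0
    return np.all(np.sign(impact[mask]) == signs[mask])

accepted = 0
for _ in range(10_000):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal rotation
    impact = B0_inv @ Q                               # candidate impact responses
    if satisfies_signs(impact, signs):
        accepted += 1

print(f"accepted {accepted} of 10000 draws")
```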
We use a FAVAR model with proxy variables and sign restrictions to investigate the role of the euro area's common output and inflation cycles in the transmission of monetary policy shocks. Our findings indicate that common cycles explain most of the variation in output and inflation across member countries. However, Southern European economies exhibit a notable divergence from these cycles in the aftermath of the financial crisis. Building on this evidence, we demonstrate that monetary policy is homogeneously propagated to member countries via the common cycles. In contrast, country-specific transmission channels lead to heterogeneous country responses to monetary policy shocks. Consequently, our empirical results suggest that the divergent effects of ECB monetary policy are attributable to heterogeneous country-specific exposures to financial markets, rather than to desynchronized economies within the euro area.
This paper presents a novel learning analytics method: Transition Network Analysis (TNA), which integrates Stochastic Process Mining and probabilistic graph representation to model, visualize, and identify transition patterns in learning process data. Combining the relational and temporal aspects into a single lens offers capabilities beyond either framework, including centralities to capture important learning events, community detection to identify behavior patterns, and clustering to reveal temporal patterns. Furthermore, TNA introduces several significance tests that go beyond either method and add rigor to the analysis. Here, we introduce the theoretical and mathematical foundations of TNA and demonstrate its functionalities with a case study in which students (n=191) engaged in small-group collaboration, mapping patterns of group dynamics through the theories of co-regulation and socially shared regulation of learning. The analysis revealed that TNA can map the regulatory processes as well as identify important events, patterns, and clusters. Bootstrap validation established the significant transitions and eliminated spurious ones. As such, TNA can capture learning dynamics and provide a robust framework for investigating the temporal evolution of learning processes. Future directions include -- inter alia -- expanding estimation methods, reliability assessment, and building longitudinal TNA.
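As a rough illustration of the underlying object, the sketch below estimates a first-order transition probability matrix from coded event sequences, which can then be read as a weighted directed transition network; the event codes are invented and this is not the TNA software itself.

```python
# Illustrative sketch (not the TNA package): first-order transition probabilities
# from coded event sequences, interpretable as edge weights of a transition network.
import numpy as np

sequences = [
    ["plan", "discuss", "monitor", "discuss", "evaluate"],
    ["discuss", "plan", "discuss", "monitor", "evaluate"],
]
states = sorted({s for seq in sequences for s in seq})
index = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[index[a], index[b]] += 1

# Row-normalise counts to transition probabilities; rows with no outgoing events stay zero.
with np.errstate(invalid="ignore", divide="ignore"):
    probs = np.nan_to_num(counts / counts.sum(axis=1, keepdims=True))

print(states)
print(probs.round(2))
```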
We introduce QCLAB, an object-oriented MATLAB toolbox for constructing, representing, and simulating quantum circuits. Designed with an emphasis on numerical stability, efficiency, and performance, QCLAB provides a reliable platform for prototyping and testing quantum algorithms. For advanced performance needs, QCLAB++ serves as a complementary C++ package optimized for GPU-accelerated quantum circuit simulations. Together, QCLAB and QCLAB++ form a comprehensive toolkit, balancing the simplicity of MATLAB scripting with the computational power of GPU acceleration. This paper serves as an introduction to the package and its features along with a hands-on tutorial that invites researchers to explore its capabilities right away.
A sequence of random variables is called exchangeable if the joint distribution of the sequence is unchanged by any permutation of the indices. De Finetti's theorem characterizes all $\{0,1\}$-valued exchangeable sequences as a "mixture" of sequences of independent random variables. We present a new, elementary proof of de Finetti's theorem. The purpose of this paper is to make this theorem accessible to a broader community through an essentially self-contained proof.
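For reference, the textbook form of the theorem for $\{0,1\}$-valued sequences (stated here from standard sources, not quoted from the paper) is:

```latex
% De Finetti's theorem for $\{0,1\}$-valued exchangeable sequences: there exists a
% probability measure $\mu$ on $[0,1]$ such that, for all $n$ and all $x_i \in \{0,1\}$,
\[
  \Pr\bigl(X_1 = x_1, \dots, X_n = x_n\bigr)
  = \int_0^1 \theta^{\,k} (1-\theta)^{\,n-k} \, \mathrm{d}\mu(\theta),
  \qquad k = \sum_{i=1}^n x_i .
\]
% That is, conditionally on a latent $\theta \sim \mu$, the $X_i$ are i.i.d. Bernoulli$(\theta)$.
```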
This article introduces a novel and fast method for refining pre-trained static word or, more generally, token embeddings. By incorporating the embeddings of neighboring tokens in text corpora, it continuously updates the representation of each token, including those without pre-assigned embeddings. This approach effectively addresses the out-of-vocabulary problem, too. Operating independently of large language models and shallow neural networks, it enables versatile applications such as corpus exploration, conceptual search, and word sense disambiguation. The method is designed to enhance token representations within topically homogeneous corpora, where the vocabulary is restricted to a specific domain, resulting in more meaningful embeddings compared to general-purpose pre-trained vectors. As an example, the methodology is applied to explore storm events and their impacts on infrastructure and communities using narratives from a subset of the NOAA Storm Events database. The article also demonstrates how the approach improves the representation of storm-related terms over time, providing valuable insights into the evolving nature of disaster narratives.
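A possible reading of such a refinement step, sketched under the assumption that each token's vector is blended with the average of its in-corpus context vectors (the article's exact update rule may differ); note how out-of-vocabulary tokens acquire vectors from their contexts.

```python
# Hypothetical sketch of a neighbor-based embedding refinement; not the article's code.
import numpy as np

def refine_embeddings(corpus, emb, dim=8, window=2, alpha=0.5, iters=3):
    """corpus: list of tokenised sentences; emb: dict token -> np.ndarray of length dim."""
    vocab = {tok for sent in corpus for tok in sent}
    vecs = {t: emb.get(t, np.zeros(dim)) for t in vocab}   # OOV tokens start at zero
    for _ in range(iters):
        sums = {t: np.zeros(dim) for t in vocab}
        counts = {t: 0 for t in vocab}
        for sent in corpus:
            for i, tok in enumerate(sent):
                ctx = sent[max(0, i - window): i] + sent[i + 1: i + 1 + window]
                for c in ctx:
                    sums[tok] += vecs[c]
                    counts[tok] += 1
        for t in vocab:                                     # synchronous blend with context mean
            if counts[t]:
                vecs[t] = alpha * vecs[t] + (1 - alpha) * sums[t] / counts[t]
    return vecs

rng = np.random.default_rng(0)
pretrained = {"storm": rng.standard_normal(8), "power": rng.standard_normal(8)}
corpus = [["storm", "damaged", "power", "lines"], ["storm", "surge", "flooded", "roads"]]
vectors = refine_embeddings(corpus, pretrained)
print(vectors["damaged"][:3])   # OOV token now has a context-derived vector
```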
In recent years, inconsistency in Bayesian deep learning has attracted significant attention. Tempered or generalized posterior distributions are frequently employed as direct and effective solutions. Nonetheless, the underlying mechanisms and the effectiveness of generalized posteriors remain active research topics. In this work, we interpret posterior tempering as a correction for model misspecification via adjustments to the joint probability, and as a recalibration of priors by reducing aleatoric uncertainty. We also introduce the generalized Laplace approximation, which requires only a simple modification to the Hessian calculation of the regularized loss and provides a flexible and scalable framework for high-quality posterior inference. We evaluate the proposed method on state-of-the-art neural networks and real-world datasets, demonstrating that the generalized Laplace approximation enhances predictive performance.
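A conceptual sketch of a temperature-adjusted Laplace approximation, assuming the data-fit Hessian is rescaled before adding the prior precision; the construction and parameter names are illustrative assumptions, not the paper's implementation.

```python
# Conceptual sketch (assumed construction, not the paper's code): Laplace approximation
# around the MAP estimate with a temperature-like factor on the likelihood Hessian.
import numpy as np

def laplace_posterior(theta_map, hessian_nll, prior_precision, temperature=1.0):
    """Return mean and covariance of a Gaussian posterior approximation.

    hessian_nll: Hessian of the negative log-likelihood at theta_map.
    temperature != 1 corresponds to a tempered (generalized) posterior.
    """
    precision = temperature * hessian_nll + prior_precision * np.eye(len(theta_map))
    covariance = np.linalg.inv(precision)
    return theta_map, covariance

# Toy usage with made-up numbers.
mean, cov = laplace_posterior(
    theta_map=np.array([0.3, -1.2]),
    hessian_nll=np.array([[50.0, 5.0], [5.0, 20.0]]),
    prior_precision=1.0,
    temperature=0.5,
)
print(cov)
```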
In this survey article, we give an introduction to two methods of proof in random matrix theory: The method of moments and the Stieltjes transform method. We thoroughly develop these methods and apply them to show both the semicircle law and the Marchenko-Pastur law for random matrices with independent entries. The material is presented in a pedagogical manner and is suitable for anyone who has followed a course in measure-theoretic probability theory.
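A quick numerical illustration of the semicircle law (not part of the survey): the eigenvalues of a large symmetric random matrix with independent, suitably scaled entries follow the density $\rho(x) = \frac{1}{2\pi}\sqrt{4 - x^2}$ on $[-2, 2]$.

```python
# Numerical check of the semicircle law for a Wigner matrix with Gaussian entries.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)                 # symmetric, entry variance ~ 1/n
eigs = np.linalg.eigvalsh(W)

hist, edges = np.histogram(eigs, bins=50, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers**2, 0, None)) / (2 * np.pi)

print("max deviation from semicircle density:", np.abs(hist - semicircle).max())
```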
Generative AI (GenAI) is advancing rapidly, and the literature in computing education is expanding almost as quickly. Initial responses to GenAI tools ranged from panic to utopian optimism. Many were quick to point out the opportunities and challenges of GenAI. Researchers reported that these new tools are capable of solving most introductory programming tasks and are causing disruptions throughout the curriculum. These tools can write and explain code, enhance error messages, create resources for instructors, and even provide feedback and help for students like a traditional teaching assistant. In 2024, new research started to emerge on the effects of GenAI usage in the computing classroom. These new studies examine the use of GenAI to support classroom instruction at scale and to teach students how to code with GenAI. In support of the former, a new class of tools is emerging that can provide personalized feedback to students on their programming assignments or teach both programming and prompting skills at the same time. With the literature expanding so rapidly, this report aims to summarize and explain what is happening on the ground in computing classrooms. We provide a systematic literature review; a survey of educators and industry professionals; and interviews with educators using GenAI in their courses, educators studying GenAI, and researchers who create GenAI tools to support computing education. The triangulation of these methods and data sources expands the understanding of GenAI usage and perceptions at this critical moment for our community.
Descriptor revision by Hansson is a framework for addressing the problem of belief change. In descriptor revision, different kinds of change processes are dealt with in a joint framework. Individual change requirements are qualified by specific success conditions expressed by a belief descriptor, and belief descriptors can be combined by logical connectives. This is in contrast to the currently dominant AGM paradigm shaped by Alchourrón, Gärdenfors, and Makinson, where different kinds of changes, like a revision or a contraction, are dealt with separately. In this article, we investigate the realisation of descriptor revision for a conditional logic while restricting descriptors to the conjunction of literal descriptors. We apply the principle of conditional preservation developed by Kern-Isberner to descriptor revision for conditionals, show how descriptor revision for conditionals under these restrictions can be characterised by a constraint satisfaction problem, and implement it using constraint logic programming. Since our conditional logic subsumes propositional logic, our approach also realises descriptor revision for propositional logic.
The increasing availability of brain data within and outside the biomedical field, combined with the application of artificial intelligence (AI) to brain data analysis, poses a challenge for ethics and governance. We identify distinctive ethical implications of brain data acquisition and processing, and outline a multi-level governance framework. This framework is aimed at maximizing the benefits of facilitated brain data collection and further processing for science and medicine whilst minimizing risks and preventing harmful use. The framework consists of four primary areas of regulatory intervention: binding regulation, ethics and soft law, responsible innovation, and human rights.
We study whether a given graph can be realized as an adjacency graph of the polygonal cells of a polyhedral surface in $\mathbb{R}^3$. We show that every graph is realizable as a polyhedral surface with arbitrary polygonal cells, and that this is not true if we require the cells to be convex. In particular, if the given graph contains $K_5$, $K_{5,81}$, or any nonplanar $3$-tree as a subgraph, no such realization exists. On the other hand, all planar graphs, $K_{4,4}$, and $K_{3,5}$ can be realized with convex cells. The same holds for any subdivision of any graph where each edge is subdivided at least once, and, by a result from McMullen et al. (1983), for any hypercube. Our results have implications on the maximum density of graphs describing polyhedral surfaces with convex cells: The realizability of hypercubes shows that the maximum number of edges over all realizable $n$-vertex graphs is in $\Omega(n \log n)$. From the non-realizability of $K_{5,81}$, we obtain that any realizable $n$-vertex graph has $O(n^{9/5})$ edges. As such, these graphs can be considerably denser than planar graphs, but not arbitrarily dense.
Canonical orderings and their relatives such as st-numberings have been used as a key tool in algorithmic graph theory for the last decades. Recently, a unifying concept behind all these orders has been shown: they can be described by a graph decomposition into parts that have a prescribed vertex-connectivity. Despite extensive interest in canonical orderings, no analogue of this unifying concept is known for edge-connectivity. In this paper, we establish such a concept named edge-orders and show how to compute (1,1)-edge-orders of 2-edge-connected graphs and (2,1)-edge-orders of 3-edge-connected graphs, both in linear time. While the former can be seen as the edge-variants of st-numberings, the latter are the edge-variants of Mondshein sequences and non-separating ear decompositions. The methods that we use for obtaining such edge-orders differ considerably in almost all details from the ones used for their vertex-counterparts, as different graph-theoretic constructions are used in the inductive proof and standard reductions from edge- to vertex-connectivity are bound to fail. As a first application, we consider the famous Edge-Independent Spanning Tree Conjecture, which asserts that every k-edge-connected graph contains k rooted spanning trees that are pairwise edge-independent. We illustrate the impact of the above edge-orders by deducing algorithms that construct two and three edge-independent spanning trees of 2- and 3-edge-connected graphs, respectively; the latter improves the best known running time from O(n^2) to linear time.
Standard macroeconomic models assume that households are rational in the sense that they are perfect utility maximizers, and explain economic dynamics in terms of shocks that drive the economy away from the steady state. Here we build on a standard macroeconomic model in which a single rational representative household makes a savings decision of how much to consume or invest. In our model households are myopic boundedly rational heterogeneous agents embedded in a social network. From time to time each household updates its savings rate by copying the savings rate of its neighbor with the highest consumption. If the updating time is short, the economy is stuck in a poverty trap, but for longer updating times economic output approaches its optimal value, and we observe a critical transition to an economy with irregular endogenous oscillations in economic output, resembling a business cycle. In this regime households divide into two groups: Poor households with low savings rates and rich households with high savings rates. Thus inequality and economic dynamics both occur spontaneously as a consequence of imperfect household decision making. Our work here supports an alternative research program that replaces utility maximization with behaviorally grounded decision making.
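A toy simulation along the lines described above, with made-up production and imitation parameters; it is an illustrative reading of the mechanism, not the authors' calibrated model.

```python
# Toy sketch: households on a network occasionally copy the savings rate of the
# neighbour with the highest consumption. All parameter values are invented.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_households = 200
G = nx.watts_strogatz_graph(n=n_households, k=6, p=0.1, seed=0)
savings = rng.uniform(0.05, 0.5, size=n_households)   # household savings rates
capital = np.ones(n_households)

update_period = 20                                     # "updating time" between imitation steps
for t in range(1, 2001):
    output = capital ** 0.3                            # simple decreasing-returns production
    consumption = (1 - savings) * output
    capital = 0.9 * capital + savings * output         # depreciation plus investment
    if t % update_period == 0:                         # imitation step
        new_savings = savings.copy()
        for i in G.nodes:
            best = max(G.neighbors(i), key=lambda j: consumption[j])
            if consumption[best] > consumption[i]:
                new_savings[i] = savings[best]
        savings = new_savings

print("mean savings rate:", round(savings.mean(), 3))
print("aggregate output:", round((capital ** 0.3).sum(), 1))
```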
Consider two paths $\phi,\psi\colon [0;1]\to [0;1]^2$ in the unit square such that $\phi(0)=(0,0)$, $\phi(1)=(1,1)$, $\psi(0)=(0,1)$ and $\psi(1)=(1,0)$. By continuity of $\phi$ and $\psi$ there is a point of intersection. We prove that from $\phi$ and $\psi$ we can compute closed intervals $S_\phi, S_\psi \subseteq [0;1]$ such that $\phi(S_\phi)=\psi(S_\psi)$.
Many modern manufacturing companies have evolved from a single production site to a multi-factory production environment that must handle both geographically dispersed production orders and their multi-site production steps. The availability of a range of machines in different locations capable of performing the same operation and shipping times between factories have transformed planning systems from the classic Job Shop Scheduling Problem (JSSP) to the Distributed Flexible Job Shop Scheduling Problem (DFJSP). As a result, the complexity of production planning has increased significantly. In our work, we use Quantum Annealing (QA) to solve the DFJSP. In addition to the assignment of production orders to production sites, the assignment of production steps to production sites also takes place. This requirement is based on a real use case of a wool textile manufacturer. To investigate the applicability of this method to large problem instances, we formulate and solve problems ranging from 50 up to 250 variables, the largest that could be embedded into the Quantum Processing Unit (QPU) of a D-Wave quantum annealer. Special attention is dedicated to the determination of the Lagrange parameters of the Quadratic Unconstrained Binary Optimization (QUBO) model and the QPU configuration parameters, as these factors can significantly impact solution quality. The obtained solutions are compared to solutions obtained by Simulated Annealing (SA), both in terms of solution quality and calculation time. The results demonstrate that QA has the potential to solve large problem instances specific to the industry.
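To illustrate the role of the Lagrange parameters, the sketch below encodes a tiny constrained problem as a QUBO with a penalty weight and solves it with dimod's exact reference solver; the variables, costs, and penalty value are assumptions, not the paper's DFJSP formulation.

```python
# Minimal sketch (not the paper's model): a QUBO where a one-hot constraint
# (x0 + x1 == 1) is enforced via a Lagrange-style penalty weight.
import dimod

lagrange = 2.0   # penalty weight; too small allows constraint violations
Q = {
    ("x0", "x0"): 1.0 - lagrange,     # objective cost of x0 plus linear penalty term
    ("x1", "x1"): 3.0 - lagrange,     # objective cost of x1 plus linear penalty term
    ("x0", "x1"): 2.0 * lagrange,     # quadratic penalty term from (x0 + x1 - 1)^2
}

sampleset = dimod.ExactSolver().sample_qubo(Q)
print(sampleset.first.sample, sampleset.first.energy)   # picks the cheaper feasible choice
```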
The relation between entropy and information has great significance for computation. Based on the strict reversibility of the laws of microphysics, Landauer (1961), Bennett (1973), Priese (1976), Fredkin and Toffoli (1982), Feynman (1985) and others envisioned a reversible computer that cannot allow any ambiguity in backward steps of a calculation. It is this backward capability that makes reversible computing radically different from ordinary, irreversible computing. The proposal aims at a higher kind of computer that would deliver the output of a computation together with the original input, without a minimum energy requirement. Hence, information retrievability and energy efficiency through diminished heat dissipation are key objectives of quantum computer technology.