Institute of Information and Communication Technologies - Bulgarian Academy of Sciences
A database of high precision trivial choreographies for the planar three-body problem
Trivial choreographies are special periodic solutions of the planar three-body problem. In this work we use a modified Newton's method based on the continuous analog of Newton's method, together with high precision arithmetic, in a specialized numerical search for new trivial choreographies. The search yielded a high precision database of 462 such orbits, including 397 new ones. The initial conditions and the periods of all found solutions are given with 180 correct decimal digits. 108 of the choreographies are linearly stable, including 99 new ones. The linear stability is tested by high precision computation of the eigenvalues of the monodromy matrices.
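The continuous analog of Newton's method amounts to following the damped flow dx/dt = -f(x)/f'(x) with small Euler steps, which enlarges the basin of convergence compared to the undamped Newton step. A minimal sketch, using a scalar test function and ordinary double precision in place of the paper's 180-digit arithmetic and three-body periodicity condition (the function, step size tau, and tolerances below are illustrative only):

```python
# Continuous analog of Newton's method (damped Newton) on a scalar test
# function; the paper applies the same idea to the periodicity condition
# of the planar three-body problem in multiple-precision arithmetic.

def damped_newton(f, df, x0, tau=0.5, tol=1e-12, max_iter=200):
    """Integrate dx/dt = -f(x)/f'(x) with Euler steps of size tau."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        if abs(step) < tol:        # Newton correction small enough: converged
            return x
        x -= tau * step            # damped (continuous-analog) update
    return x

# Illustrative root-finding: solve x**3 - 2 = 0 from a poor starting guess;
# the damping keeps the iteration stable even far from the root.
root = damped_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=10.0)
```

With tau = 1 this reduces to the classical Newton iteration; smaller tau trades speed for robustness, which matters when the starting guesses come from a broad numerical search.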
Distributed non-negative RESCAL with Automatic Model Selection for Exascale Data
With the boom in the development of computer hardware and software, social media, IoT platforms, and communications, there has been an exponential growth in the volume of data produced around the world. Among these data, relational datasets are growing in popularity as they provide unique insights into the evolution of communities and their interactions. Relational datasets are naturally non-negative, sparse, and extra-large. Relational data usually contain triples, (subject, relation, object), and are represented as graphs/multigraphs, called knowledge graphs, which need to be embedded into a low-dimensional dense vector space. Among various embedding models, RESCAL allows learning of relational data to extract the posterior distributions over the latent variables and to make predictions of missing relations. However, RESCAL is computationally demanding and requires a fast and distributed implementation to analyze extra-large real-world datasets. Here we introduce a distributed non-negative RESCAL algorithm for heterogeneous CPU/GPU architectures with automatic selection of the number of latent communities (model selection), called pyDRESCALk. We demonstrate the correctness of pyDRESCALk on real-world and large synthetic tensors, and its efficacy through near-linear scaling that concurs with the theoretical complexities. Finally, pyDRESCALk determines the number of latent communities in an 11-terabyte dense and a 9-exabyte sparse synthetic tensor.
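RESCAL factorizes each relation slice X_k of the tensor as A R_k Aᵀ with a shared entity factor A. A minimal single-node sketch of a non-negative variant using multiplicative updates (the update rules, sizes, and synthetic data are illustrative; pyDRESCALk's distributed implementation and automatic model selection are not reproduced here):

```python
import numpy as np

def nn_rescal(X, rank, n_iter=300, eps=1e-9, seed=0):
    """Fit X_k ~= A @ R_k @ A.T with non-negative factors, using
    Lee-Seung-style multiplicative updates (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = X[0].shape[0]
    A = rng.random((n, rank))
    R = [rng.random((rank, rank)) for _ in X]
    for _ in range(n_iter):
        # Update shared entity factor A: positive over negative gradient part.
        num = sum(Xk @ A @ Rk.T + Xk.T @ A @ Rk for Xk, Rk in zip(X, R))
        AtA = A.T @ A
        den = A @ sum(Rk @ AtA @ Rk.T + Rk.T @ AtA @ Rk for Rk in R)
        A *= num / (den + eps)
        # Update each relation core R_k.
        AtA = A.T @ A
        for k, Xk in enumerate(X):
            R[k] *= (A.T @ Xk @ A) / (AtA @ R[k] @ AtA + eps)
    return A, R

# Tiny synthetic example: two relation slices generated from a planted
# non-negative factorization, then recovered at the matching rank.
rng = np.random.default_rng(1)
A0 = rng.random((8, 2))
X = [A0 @ rng.random((2, 2)) @ A0.T for _ in range(2)]
A, R = nn_rescal(X, rank=2)
err = sum(np.linalg.norm(Xk - A @ Rk @ A.T) for Xk, Rk in zip(X, R))
```

In pyDRESCALk the same factorization is distributed over CPU/GPU nodes, and the rank (number of latent communities) is chosen automatically rather than fixed as here.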
Solving Larger Maximum Clique Problems Using Parallel Quantum Annealing
Quantum annealing has the potential to find low energy solutions of NP-hard problems that can be expressed as quadratic unconstrained binary optimization problems. However, the hardware of the quantum annealer manufactured by D-Wave Systems, which we consider in this work, is sparsely connected and moderately sized (on the order of thousands of qubits), thus necessitating a minor-embedding of a logical problem onto the physical qubit hardware. The combination of relatively small hardware sizes and the necessity of a minor-embedding can mean that solving large optimization problems is not possible on current quantum annealers. In this research, we show that a hybrid approach combining parallel quantum annealing with graph decomposition allows one to solve larger optimization problems accurately. We apply the approach to the Maximum Clique problem on graphs with up to 120 nodes and 6395 edges.
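The decomposition idea can be illustrated by branching on a vertex: either it belongs to the clique (recurse on its neighborhood) or it does not (recurse on the remaining graph), until each subproblem fits a size budget standing in for the annealer's capacity. A toy sketch with a brute-force subproblem solver in place of the quantum annealer (the graph, size limit, and function names are invented for illustration and do not reproduce the paper's exact decomposition):

```python
from itertools import combinations

def max_clique_bruteforce(nodes, adj):
    """Stand-in 'solver' for one small subproblem (the paper samples these
    with parallel quantum annealing); exhaustive largest-clique search."""
    for size in range(len(nodes), 0, -1):
        for cand in combinations(nodes, size):
            if all(v in adj[u] for u, v in combinations(cand, 2)):
                return set(cand)
    return set()

def decompose_and_solve(adj, max_size):
    """If the graph exceeds the 'hardware' limit, branch on a vertex v:
    either v is in the clique (recurse on N(v)) or it is not (delete v)."""
    nodes = list(adj)
    if len(nodes) <= max_size:
        return max_clique_bruteforce(nodes, adj)
    v = nodes[0]
    # Case 1: v in the clique -> restrict to its neighborhood, add v back.
    sub1 = {u: adj[u] & adj[v] for u in adj[v]}
    with_v = decompose_and_solve(sub1, max_size) | {v}
    # Case 2: v not in the clique -> solve on the graph without v.
    sub2 = {u: adj[u] - {v} for u in adj if u != v}
    without_v = decompose_and_solve(sub2, max_size)
    return max(with_v, without_v, key=len)

# Example: a 10-node ring with a planted 4-clique {0, 1, 2, 3};
# max_size=4 forces the decomposition to kick in.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 0)]
adj = {u: set() for u in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
clique = decompose_and_solve(adj, max_size=4)
```

Each leaf subproblem is independent, which is what makes it natural to solve many of them in parallel on one annealer chip.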
Replication-based quantum annealing error mitigation
Quantum annealers like those from D-Wave Systems implement adiabatic quantum computing to solve optimization problems, but their analog nature and limited control functionalities present challenges to correcting or mitigating errors. As quantum computing advances towards applications, effective error suppression is an important research goal. We propose a new approach called replication-based mitigation (RBM) built on parallel quantum annealing. In RBM, physical qubits representing the same logical qubit are dispersed across different copies of the problem embedded in the hardware. This mitigates hardware biases, is compatible with the limited qubit connectivity of current annealers, and is suited to available noisy intermediate-scale quantum (NISQ) annealers. Our experimental analysis shows that RBM provides solution quality on par with previous methods while being compatible with a much wider range of hardware connectivity patterns. In comparisons against standard quantum annealing without error mitigation, RBM consistently improves the energies and ground state probabilities across parameterized problem sets.
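The core mechanism can be sketched classically: run several replicas of the same Ising problem, give each replica its own per-qubit bias (modeling the miscalibration that a different physical placement would see), and keep the sample with the lowest true energy. Everything below is a stand-in (a greedy descent sampler instead of an annealer, invented bias magnitudes and problem), not the paper's protocol:

```python
import random

def ising_energy(h, J, s):
    """True (unbiased) Ising energy of spin configuration s in {-1,+1}."""
    e = sum(h[i] * s[i] for i in h)
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

def biased_sampler(h, J, bias, rng):
    """Stand-in for one annealer read: greedy descent from a random state,
    where 'bias' is a per-qubit field modeling hardware miscalibration."""
    s = {i: rng.choice([-1, 1]) for i in h}
    for _ in range(20):
        for i in s:
            local = h[i] + bias[i]
            for (a, b), Jij in J.items():
                if a == i:
                    local += Jij * s[b]
                elif b == i:
                    local += Jij * s[a]
            s[i] = -1 if local > 0 else 1  # value minimizing s_i * local
    return s

def rbm_solve(h, J, n_replicas=8, seed=0):
    """Replication sketch: each replica sees a fresh bias pattern (as a
    different physical placement would); keep the best true-energy sample."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_replicas):
        bias = {i: rng.uniform(-0.3, 0.3) for i in h}  # replica-specific bias
        s = biased_sampler(h, J, bias, rng)
        if best is None or ising_energy(h, J, s) < ising_energy(h, J, best):
            best = s
    return best

# Example: 4-spin ferromagnet (all pairwise couplings -1); the ground
# states are the two fully aligned configurations with energy -6.
h = {i: 0.0 for i in range(4)}
J = {(i, j): -1.0 for i in range(4) for j in range(i + 1, 4)}
best = rbm_solve(h, J)
```

Because each replica's bias pattern is independent, averaging or best-of selection over replicas suppresses systematic per-qubit errors that a single embedding would bake into every read.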
Increasing the Hardness of Posiform Planting Using Random QUBOs for Programmable Quantum Annealer Benchmarking
Posiform planting is a method for constructing QUBO problems with a single unique planted solution that can be tailored to arbitrary connectivity graphs. In this study we investigate making posiform-planted QUBOs computationally harder by fusing them with many smaller random discrete-coefficient spin-glass Ising models, whose global minimum energy is computed exactly using classical binary integer programming software. The single unique ground-state solution of the resulting QUBO problem is the concatenation of (exactly one of) the ground states of each of the smaller problems. We apply these modified posiform-planted QUBOs to the task of benchmarking programmable D-Wave quantum annealers. The proposed method enables generating binary variable combinatorial optimization problems that cover the entire quantum annealing processor hardware graph, have a unique solution, are entirely hardware-graph-native, and can have tunable computational hardness. We benchmark the capabilities of three D-Wave superconducting qubit quantum annealing processors, having from 563 up to 5627 qubits, to sample the optimal unique planted solution of problems generated by our proposed method and compare them against simulated annealing and Gurobi. We find that the D-Wave QPU ground-state sampling success rate does not change with respect to the size of the random QUBOs we employ. Surprisingly, we find that some of these classes of QUBOs are solved at very high success rates at short annealing times compared to longer annealing times for the Zephyr connectivity graph QPUs.
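The fusion step relies on separability: if sub-QUBOs are placed on disjoint variables, the fused energy is the sum of the sub-energies, so a minimum-energy assignment of the fused problem is the concatenation of sub-problem minima. A toy sketch with brute force standing in for the paper's integer-programming solver (coefficients and sizes are invented, and the posiform-planting construction that guarantees uniqueness is not reproduced here):

```python
from itertools import product
import random

def qubo_energy(Q, x):
    """Energy of binary assignment x under QUBO coefficients Q[(i, j)]."""
    return sum(Qij * x[i] * x[j] for (i, j), Qij in Q.items())

def brute_force_ground(Q, n):
    """Stand-in for the classical integer-programming step: exhaustively
    find a minimum-energy assignment of a small QUBO on n variables."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

def fuse(qubos):
    """Fuse sub-QUBOs on disjoint variable blocks; by separability the
    fused minimum is the concatenation of the sub-problem minima."""
    fused, offset = {}, 0
    for Q, n in qubos:
        for (i, j), Qij in Q.items():
            fused[(i + offset, j + offset)] = Qij
        offset += n
    return fused, offset

# Three small random discrete-coefficient sub-QUBOs (diagonal terms act
# as linear biases), solved exactly, then fused into one 12-variable QUBO.
rng = random.Random(3)
subs = []
for _ in range(3):
    n = 4
    Q = {(i, j): rng.choice([-2, -1, 1, 2])
         for i in range(n) for j in range(i, n)}
    subs.append((Q, n))
planted = sum((list(brute_force_ground(Q, n)) for Q, n in subs), [])
fused, N = fuse(subs)
```

In the paper the sub-problems are additionally stitched to the posiform-planted QUBO across the full hardware graph, which is what makes the planted solution unique and the instance hardware-native.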
A Multidisciplinary Approach to Telegram Data Analysis
This paper presents a multidisciplinary approach to analyzing data from Telegram for early warning information regarding cyber threats. With the proliferation of hacktivist groups utilizing Telegram to disseminate information regarding future cyberattacks or to boast about successful ones, the need for effective data analysis methods is paramount. The primary challenge lies in the vast number of channels and the overwhelming volume of data, necessitating advanced techniques for discerning pertinent risks amidst the noise. To address this challenge, we employ a combination of neural network architectures and traditional machine learning algorithms. These methods are utilized to classify and identify potential cyber threats within the Telegram data. Additionally, sentiment analysis and entity recognition techniques are incorporated to provide deeper insights into the nature and context of the communicated information. The study evaluates the effectiveness of each method in detecting and categorizing cyber threats, comparing their performance and identifying areas for improvement. By leveraging these diverse analytical tools, we aim to enhance early warning systems for cyber threats, enabling more proactive responses to potential security breaches. This research contributes to the ongoing efforts to bolster cybersecurity measures in an increasingly interconnected digital landscape.
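A minimal flavor of the "traditional machine learning" component is a bag-of-words classifier over channel messages. The sketch below uses a tiny naive Bayes model; the class, messages, and labels are all invented for illustration and do not come from the paper's Telegram dataset:

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Multinomial naive Bayes with add-one smoothing over word counts;
    a toy baseline, not the paper's neural or production pipeline."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, y in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[y].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        def log_score(y):
            total = sum(self.word_counts[y].values())
            s = math.log(self.label_counts[y])
            for w in text.lower().split():
                # Laplace-smoothed per-class word likelihood.
                s += math.log((self.word_counts[y][w] + 1) /
                              (total + len(self.vocab)))
            return s
        return max(self.label_counts, key=log_score)

# Invented toy messages standing in for labeled Telegram posts.
train = [
    ("ddos attack planned against bank site tomorrow", "threat"),
    ("we will leak the stolen database tonight", "threat"),
    ("happy birthday to our channel admin", "benign"),
    ("new meme compilation posted enjoy", "benign"),
]
clf = TinyNaiveBayes().fit([t for t, _ in train], [y for _, y in train])
pred = clf.predict("planned ddos attack on government site")
```

In practice such a baseline would sit alongside the neural classifiers, sentiment analysis, and entity recognition described above, with the volume problem handled upstream by channel filtering.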