Rafael Advanced Defense Systems
Rule-ATT&CK Mapper (RAM) is an LLM-based framework that automates the mapping of structured SIEM rules to MITRE ATT&CK techniques and sub-techniques without requiring pre-training or fine-tuning. It achieved an Average Recall of 0.75 and Average Precision of 0.52, surpassing established baselines by incorporating external contextual information to enhance LLM performance.
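The abstract describes enriching the LLM with external contextual information before asking it to map a rule. As a rough illustration only (this is not the RAM pipeline; the technique snippets, rule text, and naive keyword-overlap retriever below are toy stand-ins for a real retrieval step), one might fetch candidate ATT&CK technique descriptions and build them into the prompt:

```python
# Illustrative retrieval-augmented prompting sketch: score ATT&CK technique
# descriptions against a SIEM rule (naive keyword overlap stands in for a
# real retriever), then assemble an LLM prompt from the top matches.
# All technique snippets and the rule below are toy examples, not real data.

TECHNIQUES = {
    "T1110": "brute force repeated failed login attempts password guessing",
    "T1059": "command and scripting interpreter execution powershell cmd",
    "T1566": "phishing spearphishing attachment malicious email link",
}

def top_techniques(rule_text, k=2):
    """Rank technique IDs by word overlap with the rule text."""
    words = set(rule_text.lower().split())
    scores = {tid: len(words & set(desc.split())) for tid, desc in TECHNIQUES.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(rule_text):
    """Embed the retrieved technique context into the mapping prompt."""
    context = "\n".join(f"{tid}: {TECHNIQUES[tid]}" for tid in top_techniques(rule_text))
    return (f"Candidate ATT&CK techniques:\n{context}\n\n"
            f"SIEM rule:\n{rule_text}\n\n"
            f"Which techniques does this rule detect?")

rule = "alert on multiple failed login attempts indicating brute force"
print(build_prompt(rule))
```

The prompt would then be sent to an off-the-shelf LLM, which is what lets the approach work without pre-training or fine-tuning.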
Magnetic confinement fusion reactors produce ash particles that must be removed for efficient operation. It is suggested to use autoresonance (a continuous phase-locking between anharmonic motion and a chirped drive) to remove the ash particles from a magnetic mirror, the simplest magnetic confinement configuration. An analogy to the driven pendulum is established via the guiding center approximation. The full 3D dynamics is simulated for α particles (the byproduct of DT fusion) in agreement with the approximated 1D model. Monte Carlo simulations sampling the phase space of initial conditions are used to quantify the efficiency of the method. The DT fuel particles are out of the bandwidth of the chirped drive and, therefore, stay in the mirror for ongoing fusion. The method is also applicable for advanced, aneutronic reactors, such as p-¹¹B.
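The driven-pendulum analogy can be sketched numerically. The following is a minimal sketch of autoresonant capture, not the paper's 3D or Monte Carlo simulations: a pendulum driven by a weak torque whose frequency chirps downward through the linear resonance phase-locks and its amplitude grows. All parameter values (drive strength, chirp rate, initial detuning) are illustrative choices, not taken from the paper:

```python
import math

def simulate(eps=0.05, alpha=1e-3, omega0=1.0, delta=0.1, T=400.0, dt=0.01):
    """Integrate the driven pendulum  theta'' = -omega0^2 sin(theta) + eps cos(psi(t))
    with a downward-chirped drive phase  psi(t) = (omega0 + delta) t - alpha t^2 / 2,
    using classical RK4. Returns the sampled trajectory theta(t)."""
    def acc(theta, t):
        psi = (omega0 + delta) * t - 0.5 * alpha * t * t
        return -omega0**2 * math.sin(theta) + eps * math.cos(psi)

    theta, v, t = 0.0, 0.0, 0.0
    traj = []
    for _ in range(int(T / dt)):
        # standard RK4 step for the (theta, v) system
        k1v = acc(theta, t);                        k1x = v
        k2v = acc(theta + 0.5*dt*k1x, t + 0.5*dt);  k2x = v + 0.5*dt*k1v
        k3v = acc(theta + 0.5*dt*k2x, t + 0.5*dt);  k3x = v + 0.5*dt*k2v
        k4v = acc(theta + dt*k3x, t + dt);          k4x = v + dt*k3v
        theta += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v     += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        t += dt
        traj.append(theta)
    return traj

traj = simulate()
# Once the chirp sweeps through resonance, the pendulum stays phase-locked
# and its oscillation amplitude grows; off-resonance DT-like frequencies
# outside the chirp bandwidth would not be captured.
```

The pendulum's frequency decreases with amplitude, so a downward chirp keeps the drive matched to the growing oscillation; this continuous matching is the phase-locking the abstract refers to, here exploited to pump ash particles out of the mirror.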
A broad range of technologies rely on remote inference, wherein acquired data is conveyed over a communication channel for inference in a remote server. Communication between the participating entities is often carried out over rate-limited channels, necessitating data compression to reduce latency. While deep learning facilitates joint design of the compression mapping along with encoding and inference rules, existing learned compression mechanisms are static, and struggle to adapt their resolution to changes in channel conditions and to dynamic links. To address this, we propose Adaptive Rate Task-Oriented Vector Quantization (ARTOVeQ), a learned compression mechanism tailored for remote inference over dynamic links. ARTOVeQ is based on designing nested codebooks along with a learning algorithm employing progressive learning. We show that ARTOVeQ extends to support low-latency inference that is gradually refined via successive refinement principles, and that it enables the simultaneous usage of multiple resolutions when conveying high-dimensional data. Numerical results demonstrate that the proposed scheme yields remote deep inference that operates with multiple rates, supports a broad range of bit budgets, and facilitates rapid inference that gradually improves with more bits exchanged, while approaching the performance of single-rate deep quantization methods.
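The nested-codebook idea can be illustrated with a minimal sketch. Below, a single shared codebook serves every bit budget by restricting nearest-neighbor quantization to its first 2^r entries at rate r; this is only a toy with a random (untrained) codebook, not ARTOVeQ's learned codebooks or its progressive training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nested codebook: the rate-r quantizer uses the first 2**r rows
# of one shared table, so the same codebook supports every bit budget.
max_bits = 6
dim = 8
codebook = rng.standard_normal((2**max_bits, dim))  # stand-in for a learned codebook

def quantize(x, bits):
    """Nearest-neighbor VQ restricted to the first 2**bits codewords."""
    sub = codebook[:2**bits]
    idx = int(np.argmin(np.sum((sub - x) ** 2, axis=1)))
    return idx, sub[idx]

x = rng.standard_normal(dim)
for bits in (2, 4, 6):
    _, xq = quantize(x, bits)
    print(bits, float(np.sum((x - xq) ** 2)))
```

Because each low-rate codebook is a prefix of the high-rate one, distortion can only decrease as more bits are granted, which is what enables successive refinement: a receiver can decode early from a codeword index's coarse prefix and refine the result as additional bits arrive.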