There are many approaches to training decision trees. This work introduces a
novel gradient-based method for constructing decision trees that optimizes
arbitrary differentiable loss functions, overcoming the limitations of
heuristic splitting rules. Rather than relying on such rules, the proposed
method refines predictions using the first and second derivatives of the loss
function, enabling it to handle complex tasks such as classification,
regression, and survival analysis, including settings with censored data.
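To make the idea concrete, a second-order refinement of this kind can be
written as a Newton step (an illustrative sketch in our own notation; the
paper's exact update rule may differ):
\[
v = -\frac{\sum_{i \in \mathcal{I}} g_i}{\sum_{i \in \mathcal{I}} h_i},
\qquad
g_i = \frac{\partial \ell(y_i, \hat{y}_i)}{\partial \hat{y}_i},
\qquad
h_i = \frac{\partial^2 \ell(y_i, \hat{y}_i)}{\partial \hat{y}_i^2},
\]
where $\ell$ is the differentiable loss, $\mathcal{I}$ indexes the training
instances reaching a node, $\hat{y}_i$ is the current prediction for instance
$i$, and $v$ is the correction applied to the node's prediction.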
Numerical experiments on both real and synthetic datasets compare the proposed
method with traditional decision tree algorithms, such as CART, Extremely
Randomized Trees, and SurvTree. The
implementation of the method is publicly available, providing a practical tool
for researchers and practitioners. This work advances decision tree-based
modeling, offering a more flexible and accurate alternative to heuristic
splitting criteria for structured data and complex tasks. By leveraging
gradient-based optimization, the proposed method bridges the gap between
traditional decision trees and modern machine learning techniques, paving the
way for further innovations in interpretable and high-performing models.