The main objective of work package 4 is to develop novel, efficient, and adaptive algorithms for nonlocal methods, exploiting the characterisation of the nonlocal operators and their theoretical properties studied in work package 3. For the optimization part we pursue two research directions. First, we will investigate novel second-order and stochastic optimization schemes for graph-structured data and explore how to combine them. To enable parallel computation, we will investigate the potential of applying recently proposed block-splitting schemes to different optimization algorithms, e.g., inverse power iteration, steepest descent, primal-dual hybrid gradient methods (PDHGM), and stochastic optimization methods. Iterative regularisation methods based on splitting schemes will also be developed. Second, we aim to devise new approaches that make nonlocal methods parallelisable on modern GPGPUs by mimicking local proximity. To overcome the fundamental problem of memory shortage for huge data sets from real-world applications, we will elaborate data-simplification methods, such as hierarchical graphs; to this end, we will formulate suitable optimization problems involving sparse regularisation. In addition, stochastic data sampling and matrix-completion techniques will be used to further reduce computational costs, especially when non-smooth quasi-Newton methods are considered. Designing such algorithms in terms of the characterisation and properties investigated in work package 3 (e.g., the spectral decomposition of the operators) is essential to overcome the specificity of state-of-the-art neural network approaches.
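To illustrate the class of methods the first research direction refers to, the following is a minimal sketch of a primal-dual hybrid gradient (PDHG / Chambolle-Pock) iteration applied to graph total-variation denoising. All names, the toy graph, and the parameter choices are our own illustrative assumptions, not the project's actual implementation; the sketch only shows the generic structure that block-splitting and stochastic variants would build on.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the project's implementation):
# PDHG for graph total-variation denoising
#     min_u  0.5*||u - f||^2 + lam * sum_e w_e |u_i - u_j|
# where K is the weighted edge-incidence matrix of the graph.
def graph_tv_pdhg(f, edges, weights, lam=1.0, n_iter=200):
    n, m = len(f), len(edges)
    K = np.zeros((m, n))
    for e, (i, j) in enumerate(edges):
        K[e, i], K[e, j] = weights[e], -weights[e]
    L = np.linalg.norm(K, 2)            # operator norm of K
    tau = sigma = 0.9 / L               # step sizes with tau*sigma*||K||^2 < 1
    u = f.copy()
    u_bar = f.copy()
    p = np.zeros(m)
    for _ in range(n_iter):
        # dual ascent step + projection onto the l_inf ball of radius lam
        p = np.clip(p + sigma * (K @ u_bar), -lam, lam)
        # primal descent step; the prox of 0.5*||u - f||^2 is a simple averaging
        u_new = (u - tau * (K.T @ p) + tau * f) / (1.0 + tau)
        # over-relaxation (extrapolation) step
        u_bar = 2.0 * u_new - u
        u = u_new
    return u

# Usage: denoise a noisy piecewise-constant signal on a path graph.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0], 50)
f = truth + 0.1 * rng.standard_normal(100)
edges = [(i, i + 1) for i in range(99)]
u = graph_tv_pdhg(f, edges, np.ones(99), lam=0.5)
```

In this form each iteration touches the full incidence matrix K; the block-splitting and stochastic-sampling ideas mentioned above amount to updating only subsets of the dual variables (edges) per iteration, which is what makes such schemes parallelisable and memory-friendly on large graphs.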