The paper focuses on how computational models and methods affect current legal systems, and in particular criminal justice. While debate is emerging over the suitability of machine learning and Artificial Intelligence (AI) both as surveillance tools and as substitutes for humans in judicial decision-making, the authors reflect on the risks of using AI- and algorithm-based evidence in criminal proceedings. The claim of the paper is twofold. On the one hand, we should reinterpret today's legal frameworks, e.g. the European Convention on Human Rights, shifting attention from possible violations of the right to privacy to potential infringements of a basic fair-trial guarantee, the Equality of Arms. On the other hand, we should acknowledge that the main legal issues triggered by the breathtaking advancements in AI can properly be addressed mainly through technical solutions (e.g. methods for assessing the completeness and correctness of digital evidence related to mobile devices and conversations). No legal theory that overlooks the crossover of juridical and computational expertise will survive the present time.