<p><i>Although the
apparent hyperbole surrounding the promise of AI algorithms has successfully entered
the judicial precincts, it has also generated robust concerns ranging
from unfairness, privacy invasion, bias, discrimination, and the lack of legitimacy to the lack of transparency and
explainability. Notably, critics have
already denounced the current use
of predictive algorithms in the judicial decision-making
process on many grounds, branding them as
ethically, legally, and technically distressing. In
this context, with a transparency debate already underway,
this paper attempts to revisit, extend, and contribute to that simmering debate
with a particular focus on the judicial perspective. Since there is good
cause to preserve and promote trust and confidence in the judiciary as a whole,
a searchlight is beamed on exploring how and why justice algorithms ought to be
transparent as to their outcomes, with a sufficient level of explainability, interpretability,
intelligibility, and contestability. The paper concludes by delineating
tentative paths to do away with black-box effects and suggesting a way forward for
the use of algorithms in high-stakes areas such as judicial settings.</i></p>