Here we build on abduction-based explanations for machine learning models and develop a method for computing local explanations of neural network models in natural language processing (NLP).
Here we demonstrate provable guarantees on the robustness of decision rules, paving the way towards provably causally robust decision-making systems.
Here we introduce the first method for verifying the time-unbounded safety of neural networks controlling dynamical systems.