Israeli Startup Develops New Tools To Determine AI Liability
As artificial intelligence becomes deeply integrated into high-stakes industries like healthcare and finance, the question of accountability has become a primary bottleneck for adoption. When an algorithm makes a life-altering medical error or a costly financial miscalculation, developers and companies often struggle to pinpoint whether the failure originated in the data, the model architecture, or the deployment environment. Identifying these blind spots is critical for trust and regulatory compliance.
Israel-based startup Coherent is tackling this challenge by developing tools that act as a "black box" recorder for AI systems. By providing a clear trail of how decisions are made, the technology aims to resolve the industry's blame-game problem, in which responsibility for a failure is passed between data providers, model developers, and the companies deploying the system. The goal is to move past the idea that AI is an unpredictable or opaque entity, instead treating technical failures with the same forensic rigor applied to traditional mechanical or software engineering.
The shift toward AI accountability matters because businesses are currently hesitant to scale autonomous systems due to legal and ethical risks. If a startup can successfully bridge the gap between complex code and legal liability, it could pave the way for wider use of generative and predictive models in regulated sectors. This development signals a transition from the "experimental" phase of AI to a more mature, professionalized era of software management.
Moving forward, industry experts will be watching how regulatory bodies in the US and Europe integrate such diagnostic tools into their frameworks for AI safety. As the technology evolves, the focus will likely shift from simply making AI "smarter" to making it more auditable. This push for transparency is detailed in a recent report by the Jerusalem Post.