10/06/2025 | Press release | Distributed by Public on 10/06/2025 05:32
Author: Simas Joneliunas, Senior Engineer
Since the earliest machine learning algorithms became available, researchers and executives have seen the potential of these models for text-understanding scenarios. State-of-the-art models have turned this into reality. For example, JPMorgan Chase now completes contract reviews in seconds - work that previously consumed hundreds of thousands of hours annually[1]. The bank has implemented tooling that performs work which once took large legal teams months to finish, redefining the pace of decision-making across its finance, legal, and compliance functions.
Such performance is not achievable when working with one document at a time. Modern AI platforms, such as ChatGPT, Claude, and Microsoft Copilot, can process multiple files (5-40) simultaneously and cross-reference them when answering questions.
These platforms, called Document Assistants, allow professionals to query entire document sets in plain language. Instead of combing through contracts or regulatory filings manually, analysts can ask: "Which supplier agreements include ESG commitments?" or "Where do liability caps fall below $5 million?". The system cross-references the provided documents and returns answers accompanied by references to the source passages, along with highlighted inconsistencies that might escape manual review.
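At its core, this kind of cross-document querying pairs retrieval with source attribution. The following minimal Python sketch illustrates the idea of returning every matching passage together with a reference to the document it came from; the keyword matching, document names, and clause text are invented for illustration and do not represent any vendor's actual pipeline:

```python
import re
from dataclasses import dataclass

@dataclass
class Passage:
    doc: str   # source document the passage came from
    text: str  # the passage itself, so answers stay traceable

def find_passages(documents: dict[str, str], query_terms: list[str]) -> list[Passage]:
    """Return every passage that mentions all query terms, tagged with its source."""
    hits = []
    for doc, content in documents.items():
        for passage in re.split(r"\n\s*\n", content):  # split on blank lines
            if all(t.lower() in passage.lower() for t in query_terms):
                hits.append(Passage(doc, passage.strip()))
    return hits

# Toy document set (invented names and clauses)
docs = {
    "supplier_a.txt": "Liability cap: $3 million.\n\nThe supplier commits to annual ESG reporting.",
    "supplier_b.txt": "Liability cap: $10 million.\n\nNo ESG commitments are made.",
}

for p in find_passages(docs, ["liability cap"]):
    print(f"{p.doc}: {p.text}")
```

Production systems replace the keyword match with semantic retrieval and a language model, but the principle is the same: every answer carries a pointer back to the passage that supports it.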
Document assistants enable legal, financial, and investment teams not only to increase the speed of their work, but also to reduce human error when comparing or cross-referencing large document sets.
The productivity gains are significant. Goldman Sachs attributes a 27% profitability lift in certain trading operations to AI assistance[2], particularly in commodities research. In the legal sector, adoption of AI-powered document review jumped from 19% to 79% in a single year, according to Clio's 2024 Legal Trends Report[3]. Legal associates are now able to complete reviews in hours rather than days.
Compliance and ESG reporting see similar improvements. The ICAEW estimates that AI reduces manual ESG data handling by about 70%[4], allowing companies to assemble full TCFD reports in days instead of weeks. At Morgan Stanley, 98% of financial advisors now use AI tools, contributing to a 35% increase in client engagement metrics[5].
Routine work looks different when documents can be processed at scale. For example, risk officers upload disclosures and can immediately focus on outliers. Compliance teams map new regulations against existing policies to reveal gaps automatically.
Investment analysts reviewing data can ask precise questions about projections or competitive risks, rather than leafing through every file. Legal teams query entire contract portfolios for unusual provisions or clauses that diverge from standards. The effect is not just speed: it is the ability to focus human judgment where it matters most.
These advantages come with caveats: general-purpose AI systems usually deliver results without explaining how they were reached, or with explanations that are insufficient or non-compliant. For the general public this matters little, but in highly regulated industries such a lack of traceability amounts to a compliance breach.
FINRA requires firms operating in the US to document how AI-driven conclusions are formed[6], and the SEC scrutinizes advisory services that rely on AI. The risks are not theoretical: in Mata v. Avianca, attorneys were sanctioned after filing AI-generated briefs containing fabricated citations[7].
Financial regulators stress that existing securities laws apply to AI implementations, which means firms must maintain model risk frameworks that include validation, testing, and monitoring. Consumer AI tools rarely meet those standards.
Consumer-grade systems are powerful but not designed for regulated industries. Professionals need transparency: conclusions must link to specific document passages. They need audit trails that withstand regulatory review. And they require strict controls on where and how sensitive data is stored.
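One concrete building block of such an audit trail is a log entry that ties every AI-generated answer to its query, the model version used, and the source passages it cites. A minimal sketch follows; the field names, model identifier, and source labels are all invented for illustration:

```python
import json
import hashlib
from datetime import datetime, timezone

def audit_record(query: str, answer: str, sources: list[str], model: str) -> dict:
    """Build an audit entry linking an AI answer to its query, model, and sources."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,          # which model version produced the answer
        "query": query,
        "answer": answer,
        "sources": sources,      # document/passage identifiers the answer cites
    }
    # A content hash lets a reviewer detect after-the-fact tampering with the log.
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

record = audit_record(
    query="Which supplier agreements include ESG commitments?",
    answer="Supplier A commits to annual ESG reporting.",
    sources=["supplier_a.txt#clause-7"],   # hypothetical passage identifier
    model="internal-llm-v2",               # hypothetical model identifier
)
```

Appending such entries to write-once storage gives compliance teams a record that can withstand regulatory review.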
General-purpose platforms often process data on external servers with little enterprise control, creating compliance risks under GDPR, HIPAA, and financial regulations. The EU AI Act raises the stakes further, with potential fines of up to €35 million or 7% of annual revenue for non-compliance[8].
This is one of the main reasons organizations increasingly choose specialized AI solutions. Legal tech funding illustrates the trend: in 2024, more than three-quarters of investments in the field went to AI-focused platforms[9].
Specialized AI platforms embed domain knowledge, regulatory frameworks, and internal standards that consumer tools lack. They also allow organizations to encode their own firm-specific rules and practices, turning AI into a partner that reflects internal expertise.
Analysts expect the intelligent document processing market to expand from $2.3 billion in 2024 to $12.35 billion by 2030[10], with the fastest growth in compliance-focused and industry-specific solutions.
Increasingly, firms are moving beyond generic document tools to adopt domain-specific AI systems that mirror how their teams actually work. Platforms like Insig AI's Generative Intelligence Engine (GIE) not only process documents at scale, but also codify a firm's internal logic, including decision rules, review protocols, and regulatory frameworks. These systems can be tailored by sector through specialized intelligence modules across ESG, Legal, Risk, and Investment, and are deployed securely within private infrastructure. The result is AI that is not just fast, but explainable, regulator-aligned, and fully grounded in institutional expertise.
A practical example is Insig's IFRS Accelerator, designed for ESG advisories and reporting teams. It runs structured, audit-ready gap analyses on draft disclosures, automatically mapping outputs to IFRS S1, S2, and TCFD criteria. This reduces weeks of manual review to minutes, while preserving full traceability back to the source text. (Read our latest article on the IFRS Accelerator here.)
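Conceptually, such a gap analysis checks a draft disclosure against a checklist of required criteria and reports what is missing. The deliberately simplified Python sketch below illustrates the shape of that check; the keyword criteria are invented placeholders, far cruder than the real IFRS S1, S2, or TCFD requirements:

```python
# Hypothetical, simplified criteria; real disclosure standards are far richer.
CRITERIA = {
    "IFRS S2 governance": ["board oversight", "climate"],
    "IFRS S2 metrics": ["scope 1", "scope 2"],
    "TCFD strategy": ["scenario analysis"],
}

def gap_analysis(disclosure: str) -> dict[str, bool]:
    """Mark each criterion as covered if all its keywords appear in the draft."""
    text = disclosure.lower()
    return {name: all(kw in text for kw in kws) for name, kws in CRITERIA.items()}

draft = "The board oversight committee reviews climate risk. Scope 1 emissions are reported."
gaps = [name for name, covered in gap_analysis(draft).items() if not covered]
print("Missing:", gaps)
```

A production system would use semantic matching rather than keywords and would cite the exact passages behind each finding, but the output shape is the same: a per-criterion verdict with traceability back to the source text.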
The technology delivers value only when implemented thoughtfully. Successful organizations start with high-impact use cases, ensure outputs can be explained and audited, and keep human oversight in place for final decisions. They also train professionals to use AI effectively rather than treating it as a black box.
Most importantly, they select solutions built for their industry rather than trying to retrofit consumer tools. The results JPMorgan achieved are possible elsewhere, but only for firms that choose AI systems built for their domain and ready for regulatory scrutiny.
If you're exploring how this could work for your team, our AI specialists are here to show how to apply domain-specific AI in ways that are practical, compliant, and immediately useful. Click here to book a demo.
References