
AI Case Study: Automating Forecast Commentary in Finance

By Prime AI Solutions · Published 17 February 2026

This is the story of a mid-market manufacturing business - around £180 million revenue, a finance team of twelve - that transformed one of its most time-consuming monthly processes using AI tools that cost nothing to access. The process was forecast commentary. The transformation took four weeks. The results were immediate and measurable.

We share this case study because it is representative. The specific numbers will differ from your organisation, but the structure of the problem - and the approach to solving it - applies broadly to finance teams across industries. For the FP&A context around this case study, see our guide to using AI in FP&A. For more examples across all finance functions, see our complete AI use cases in finance guide.

The Problem: 3 Days Writing Commentary

Every month, the FP&A team produced a detailed variance commentary covering actuals against budget and forecast. The organisation had fourteen cost centres, three business units, and a consolidated P&L - each requiring narrative explanation of material variances. Management wanted commentary for every line item above £50K variance, which meant roughly 40-60 narrative paragraphs per monthly pack.

The process consumed three working days of the senior FP&A analyst's time each month. She would extract variance data from the ERP, build a working spreadsheet with prior period comparisons and year-to-date figures, investigate the significant variances by speaking to budget holders, and then write each narrative paragraph from a blank page.

Two problems compounded the time pressure. First, because the close process itself took eight days, the commentary window was compressed - the analyst was writing forty-plus paragraphs in a three-day sprint while simultaneously supporting the close. Second, the commentary was inconsistent in style and depth. Different months had different levels of analytical rigour depending on how much time was available. Board members occasionally flagged that the commentary was hard to compare across periods.

The FP&A lead estimated that commentary production was consuming approximately 15% of the senior analyst's total working time - time that could have been spent on planning, forecasting, and business partnering with operational managers.

The Solution: AI-Assisted Commentary Generation

The proposed solution was straightforward: use AI to draft the commentary, with the analyst reviewing, editing, and adding strategic context. The AI would handle the blank-page problem and the boilerplate structure; the analyst would add the judgement and strategic insight that only she could provide.

The technical approach used structured prompts built on the RACEF framework for finance prompting. Rather than asking the AI to “write commentary about this variance,” each prompt specified:

Role: senior finance analyst preparing board-level management commentary.
Action: explain the variance and its business implications.
Context: specific numbers, comparison periods, and any known business drivers provided by the analyst.
Examples: two or three sample paragraphs from previous high-quality months' packs.
Format: single paragraph of 80-120 words, professional tone, past tense for actuals commentary.
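As a sketch, a RACEF-structured prompt can be assembled programmatically from each variance row. The template wording and field names below are illustrative assumptions, not the team's actual template:

```python
# Hypothetical RACEF prompt template; wording and field names are
# illustrative, not the team's production version.
RACEF_TEMPLATE = """\
Role: You are a senior finance analyst preparing board-level management commentary.
Action: Explain the variance below and its business implications.
Context: Cost centre {cost_centre}. Actual £{actual:,.0f} vs budget £{budget:,.0f} \
(variance £{variance:,.0f}). Known drivers: {drivers}.
Examples:
{examples}
Format: One paragraph of 80-120 words, professional tone, past tense for actuals.
"""

def build_prompt(row: dict, examples: str) -> str:
    """Fill the template from one variance row of the ERP extract."""
    return RACEF_TEMPLATE.format(examples=examples, **row)
```

Keeping the sample paragraphs as a parameter makes it easy to refresh the examples as better reference commentary accumulates.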

The workflow had three stages. First, data extraction: the analyst ran a standard Excel report from the ERP that produced the variance data in a structured format. Second, AI drafting: a semi-automated process fed each material variance into the prompt template and generated a draft paragraph. Third, review and finalisation: the analyst reviewed each paragraph, edited for accuracy, added strategic context (business decisions the AI could not know about), and approved the final version.
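The drafting stage can be sketched as a loop over the extracted variance rows. Here `generate` is a placeholder for whichever AI tool is called, and the row fields are assumptions based on the report described above:

```python
MATERIALITY = 50_000  # commentary required above £50K variance, per the pack rules

def draft_pack(rows: list[dict], generate) -> list[dict]:
    """Stage 2 sketch: draft one paragraph per material variance.

    `generate` stands in for the AI call; every draft is left unapproved
    so the analyst's stage-3 review remains mandatory.
    """
    drafts = []
    for row in rows:
        if abs(row["variance"]) < MATERIALITY:
            continue  # immaterial: no narrative required
        prompt = (f"Explain the {row['cost_centre']} variance of "
                  f"£{row['variance']:,.0f} against budget.")  # full RACEF prompt in practice
        drafts.append({"cost_centre": row["cost_centre"],
                       "text": generate(prompt),
                       "approved": False})  # flipped only after analyst review
    return drafts
```

The `approved` flag encodes the design principle that no AI draft reaches the pack without human sign-off.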

Want to go deeper? Our AI for Finance Leaders course covers this in detail with practical templates and exercises.

Implementation: Week by Week

Week 1: Prompt development and baseline testing. The analyst spent two days developing and testing prompt templates, using the previous three months' commentary packs as reference material. The goal was to produce AI drafts that required minimal editing rather than complete rewrites. By end of week one, she had a working prompt template for cost-centre variance commentary and had tested it against twelve historical examples with acceptable results.

Week 2: Parallel running - first live month. The first live month's commentary was produced in parallel: the analyst wrote the commentary using her existing process, then used the AI-assisted process, and compared outputs side by side. The AI drafts required an average of five minutes of editing per paragraph compared to fifteen minutes of writing from scratch - a 67% time reduction per paragraph. Overall commentary time dropped from three days to one day for the first parallel run.

Week 3: Prompt refinement. Based on the parallel run, the team identified twelve prompt refinements - mostly around handling specific variance types (one-off items, prior year comparisons, reforecast adjustments) that the initial template did not handle well. These were incorporated into a revised template library with specific prompt variants for different variance categories.
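One way to organise such a library is a mapping from variance category to prompt variant. The category names mirror the refinements described above; the template text itself is illustrative:

```python
# Illustrative prompt-variant library; the template wording is hypothetical.
TEMPLATES = {
    "recurring":  "Explain the ongoing £{variance:,.0f} variance in {cost_centre}.",
    "one_off":    ("Explain the one-off £{variance:,.0f} item in {cost_centre}, "
                   "noting it is not expected to recur."),
    "prior_year": ("Explain the £{variance:,.0f} movement in {cost_centre} "
                   "relative to the prior-year comparative."),
    "reforecast": ("Explain the £{variance:,.0f} variance in {cost_centre} "
                   "arising from the reforecast adjustment."),
}

def select_template(category: str) -> str:
    # Unclassified variances fall back to the generic recurring variant
    return TEMPLATES.get(category, TEMPLATES["recurring"])
```

New variants can be added without touching the workflow code, which is what lets the library grow month by month.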

Week 4: Full transition. The team transitioned to the AI-assisted process as the primary approach. The parallel process was discontinued. The analyst used the refined prompt library throughout the month-end process, with the FP&A lead reviewing a sample of AI-drafted paragraphs for quality assurance.

The Results

After two full months on the AI-assisted process, the measured outcomes were:

Case Study Results at a Glance

3 days → 4 hours: monthly commentary production time.
More consistent: standardised language and structure across periods.
100% coverage: all cost centres covered every month without exception.
Higher satisfaction: analyst time redirected to planning and business partnering.

The time saving - from three days to four hours - represented roughly two and a half working days per month recovered from a single process. The FP&A lead directed that time into a monthly business partnering programme with operational managers, which had previously been impossible due to the commentary workload.

The consistency improvement was noted by the finance director unprompted in the third month of the new process. Board members commented that the commentary was easier to read across periods - a direct result of the standardised prompt templates producing consistent structure and language.

Key Lessons Learned

Start with one report. The team resisted the temptation to automate all commentary at once. Starting with the monthly management commentary only - and doing it properly - produced better results than a broader simultaneous rollout would have. The lessons from the first implementation informed every subsequent automation project.

Prompt quality determines output quality. The difference between a generic AI commentary paragraph and a finance-quality one was almost entirely in the prompt structure. Investing two full days in prompt development at the start saved weeks of editing frustration later. The RACEF framework provided the structure - the specific language and examples came from the team's own historical commentary.

Validate rigorously in the first three months. The parallel running period was essential. It identified prompt gaps, built the analyst's confidence in the AI output, and caught the edge cases that the initial templates did not handle well. Teams that skip parallel running and go straight to full transition tend to encounter problems that damage confidence and sometimes lead to reverting to the old process.

Iterate prompts continuously. The prompt library is a living document. Every month the team identifies one or two situations where the AI output required significant editing - and adds a new prompt variant to handle that situation better. After six months, the prompt library had grown from eight templates to twenty-three, and the average editing time per paragraph had dropped from five minutes to three.

The human review step is non-negotiable. The AI draft is a starting point, not an endpoint. Every paragraph that goes into the management pack is reviewed and approved by the analyst. This is not a limitation of the approach - it is the correct design. The analyst's judgement and contextual knowledge are the difference between a technically accurate paragraph and an analytically useful one.

Recommended Training · £99

AI for Finance Leaders: From Awareness to Action

8 modules, 59 lessons. Master AI for FP&A, reporting, governance, and automation — no coding required.


Related Resources

Blog: How to Use AI in FP&A - complete guide to AI applications in FP&A.
Blog: RACEF Prompt Framework - the prompt framework used in this case study.
Training: AI for Finance Leaders Course - learn the prompt structures and workflows from this case study.
Automate Commentary in Your Finance Team

Book a free consultation to implement AI-assisted commentary in your finance reporting process.