Phase 1 Summary

CT AI Transformation Program
Phase 1 — Current Process Analysis

Key Highlights

  • Scope: 14 products analyzed across 5 service lines: Assurance, Consulting, EY Parthenon, Fabric, Tax
  • Key Finding #1: Incomplete and inconsistent business requirements — Business requirements arrive vague and high-level, lacking the detail needed for delivery decisions. Engineers resolve ambiguity instead of building, QA works from incomplete criteria, and the same requirement is rewritten across handoffs. Resource planning also suffers. Direct knock-on effect into Key Finding #2.
  • Key Finding #2: Time-consuming compliance and approval dependencies — Every release must pass compliance and approval gates — some internal to CT (CAB documentation, release governance) and others owned by external functions (InfoSec, PIA, BIA, TTAR) where CT has limited control over timelines. Both types constrain release frequency.
  • Key CT Finding: Definition of Ready gates are absent or inconsistently enforced. Without structured entry criteria, incomplete work enters the pipeline and drives rework, late defects, and the compliance delays described above.
  • AI readiness: All five service lines identified concrete AI opportunities spanning intake through release.
  • Phase 2 focus: Shift-left redesign anchored on requirements quality, AI-assisted intake, compliance automation, test automation, and IaC (Infrastructure as Code)
5 Service Lines | 14 Documents | 161 Pain Points | 115 Root Causes | 218 Opportunities | 136 Repetitive Tasks

Executive Summary

Phase 1 analyzed 14 process documents across 5 service lines (Assurance, Consulting, EY Parthenon, Fabric, Tax), covering the full PDLC from ideation through release. The analysis surfaced 161 pain points, 115 root causes, 136 repetitive tasks, and 218 improvement opportunities. Two dominant findings emerged, together with a CT-specific finding on Definition of Ready.

#1: Incomplete and inconsistent business requirements

Business requirements arrive vague, high-level, and without the detail needed to drive delivery decisions. Definition of Ready gates are either absent or inconsistently enforced, and scope changes continue well into development. This was the single most consistent finding across all five service lines:

  • Assurance - late scope additions, including a major feature funded one sprint before release
  • Consulting - high-level requirements not benchmarked against actual client needs, multiple rounds of clarification
  • EY Parthenon - unclear or unstable requirements identified as the primary driver of rework loops
  • Fabric - conflicting requirements and change requests that disrupt delivery
  • Tax - requirements without essential details, compounded by frequent priority shifts

The downstream impact is significant: engineers resolve requirement ambiguity instead of building, QA depends on incomplete acceptance criteria, the same requirement is rewritten multiple times across handoffs, and defects discovered late in UAT trace back to unclear inputs. The result is rework loops, sprint spillover, carry-over of partially planned work, and context loss at every handoff from BA to Dev to QA to Ops.

Resource planning is also affected: teams cannot accurately scope work until requirements stabilise, leading to late resource requests and capacity bottlenecks.

Critically, these upstream ambiguities have a direct knock-on effect: compliance triage cannot begin until requirements are clear — a problem explored in Finding #2.

#2: Time-consuming compliance and approval dependencies

Before any release can proceed, teams must satisfy compliance and approval gates. Some are internal to CT (CAB documentation, release governance). Others are external dependencies (InfoSec reviews, PIA, BIA, TTAR assessments) where CT has limited control over timelines. Both constrain release frequency:

  • Assurance - 17-week compliance critical path with 55 tasks and 16 manual approval steps
  • Consulting - time-consuming CAB documentation and manual InfoSec approval processes
  • EY Parthenon - security and compliance tooling not integrated into the development pipeline
  • Fabric - InfoSec sign-off is manual, slowing down release frequency
  • Tax - CAB deliverables need streamlining; InfoSec awareness comes too late

These two findings reinforce each other. Without clear requirements, teams cannot determine early whether a feature requires InfoSec review, PIA, BIA, or TTAR - so compliance triage is deferred until mid-delivery, creating the approval bottlenecks that constrain release frequency.
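
This coupling also shows what early triage could look like once requirements are structured: the external reviews a feature needs can be derived from intake fields instead of being discovered mid-delivery. Below is a minimal sketch in Python; the record fields and trigger rules are illustrative assumptions, not CT's actual review criteria.

    from dataclasses import dataclass

    # Hypothetical structured requirement record; in practice these fields
    # would come from the DOR-gated intake form described in Finding #1.
    @dataclass
    class Requirement:
        handles_personal_data: bool = False        # would trigger a PIA
        business_critical: bool = False            # would trigger a BIA
        new_third_party_integration: bool = False  # would trigger a TTAR
        touches_auth_or_crypto: bool = False       # would trigger InfoSec review

    def triage(req: Requirement) -> list:
        """Return the external reviews a feature likely needs, at intake time."""
        reviews = []
        if req.handles_personal_data:
            reviews.append("PIA")
        if req.business_critical:
            reviews.append("BIA")
        if req.new_third_party_integration:
            reviews.append("TTAR")
        if req.touches_auth_or_crypto or req.new_third_party_integration:
            reviews.append("InfoSec")
        return reviews

    # A feature storing personal data behind a new vendor API:
    print(triage(Requirement(handles_personal_data=True,
                             new_third_party_integration=True)))
    # -> ['PIA', 'TTAR', 'InfoSec']

Run at intake, a check like this turns Finding #2's end-of-cycle gates into a known review list that can be scheduled in parallel with development.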

#3: Key CT Finding: Definition of Ready

Definition of Ready gates are absent or inconsistently enforced. Without structured entry criteria, incomplete work enters the pipeline and drives rework, late defects, and the compliance delays described above.

AI Amplifies a Shift-Left Strategy

  • All five service lines identified actionable AI opportunities that support this shift-left approach, from intake through release.
  • AI enables earlier intervention - structuring requirements at intake, validating completeness before planning, and flagging gaps in real time (a minimal completeness-check sketch follows).
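
As an illustration of that completeness check, the sketch below gates a story against a Definition of Ready before it enters planning. It is a minimal sketch assuming stories arrive as structured records; the required fields are hypothetical, and a production version would layer AI-driven semantic checks on top of these mechanical ones.

    # Minimal Definition-of-Ready gate: flag gaps before a story enters planning.
    # Field names are assumptions; real checks would live in the intake tooling.
    REQUIRED_FIELDS = ["title", "problem_statement", "acceptance_criteria",
                       "dependencies", "compliance_impact"]

    def dor_gaps(story: dict) -> list:
        """Return human-readable gaps; an empty list means the story passes DOR."""
        gaps = [f"missing field: {f}" for f in REQUIRED_FIELDS if not story.get(f)]
        acs = story.get("acceptance_criteria") or []
        if acs and not all("Given" in ac and "Then" in ac for ac in acs):
            gaps.append("acceptance criteria are not scenario-based (Given/When/Then)")
        return gaps

    story = {
        "title": "Export audit report",
        "problem_statement": "Reviewers re-key findings into Excel manually.",
        "acceptance_criteria": ["Report can be exported"],  # too vague
    }
    for gap in dor_gaps(story):
        print("DOR gap:", gap)
    # -> missing field: dependencies / missing field: compliance_impact /
    #    acceptance criteria are not scenario-based (Given/When/Then)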

Recommendations

Phase 2 can adopt a shift-left redesign anchored on five levers that together support Agentic AI development:

  • Requirements quality uplift - mandatory, structured Definition of Ready with completeness criteria enforced by Product Delivery
  • AI-assisted intake and DOR enforcement - AI to summarize intake, auto-generate acceptance criteria drafts, validate completeness, and flag gaps
  • Compliance automation - pre-assemble compliance artifacts, auto-determine required reviews, orchestrate approvals in parallel with development
  • Test automation uplift - shift test design to requirements phase; automate regression, smoke, and security testing
  • Environment standardization via IaC - eliminate configuration drift and enable self-service environment provisioning

These five levers work together: requirements quality provides the foundation; AI makes it enforceable at scale; downstream levers benefit from structured, complete data. The result is a PDLC where quality, compliance, and architectural soundness are built in from the start - not bolted on at the end.
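
To make the compliance-automation lever concrete, the sketch below rolls individual gate statuses into one release-readiness view, so approvals are tracked in parallel with development rather than chased sequentially before CAB. The gate names echo the Assurance release checklist; the scoring itself is an illustrative assumption.

    # Sketch of a release-readiness roll-up: gates evaluated continuously and
    # in parallel. Gate names mirror the Assurance checklist; scoring is naive.
    GATES = ["Code Freeze", "QA Sign Off", "UAT Sign Off", "InfoSec",
             "Performance", "DevOps readiness"]

    def readiness(status: dict) -> tuple:
        """Score = fraction of gates passed; also return the blocking gates."""
        passed = [g for g in GATES if status.get(g, False)]
        blocking = [g for g in GATES if g not in passed]
        return len(passed) / len(GATES), blocking

    score, blocking = readiness({
        "Code Freeze": True, "QA Sign Off": True,
        "UAT Sign Off": True, "Performance": True,
    })
    print(f"readiness: {score:.0%}, blocked on: {', '.join(blocking)}")
    # -> readiness: 67%, blocked on: InfoSec, DevOps readiness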

Program Objectives

Accelerate Time-to-Market
Enhance Software Quality & Security
Increase Efficiency
Reduce Total Cost of Delivery
Enforce Compliance

Cross-Service Line Themes

Eight systemic themes emerged across the service lines. These are not isolated issues; they reinforce each other in a cycle of upstream ambiguity driving downstream waste.

1. Vague Requirements & Scope Instability

Business requirements arrive vague and without the detail needed to drive delivery decisions. Definition of Ready gates are either absent or inconsistently enforced, and scope changes continue well into development.

Affects: Assurance, Consulting, EY Parthenon, Fabric, Tax
  • Assurance: Late scope additions driven by budget allocation complexities and architectural debates, including a major feature funded one sprint before release.
  • Consulting: High-level requirements not benchmarked against actual client needs, with multiple rounds of clarification and scope revisions post-estimation.
  • EY Parthenon: Unclear or unstable requirements identified as the primary driver of rework loops.
  • Fabric: Conflicting requirements and change requests that disrupt delivery.
  • Tax: Business teams provide requirements without essential details such as what the functionality should achieve or how it should work.

2. Rework Loops & Late Defect Discovery

Upstream ambiguity propagates downstream as defects, rework, and sprint spillover. Defects discovered in QA/UAT trace back to unclear requirements or missing acceptance criteria.

Affects: Assurance, Consulting, EY Parthenon, Fabric, Tax
  • Assurance: DevOps constraints and configuration management are significant challenges, driving many decisions in the development process, influencing workflows and creating downstream impacts on lower environments.
  • Consulting: Spillover from development to subsequent sprints
  • EY Parthenon: Unclear or unstable requirements → rework loops
  • Fabric: No feedback loops
  • Tax: Long feedback loops

3. Compliance & Release Overhead

Internal CT processes (CAB documentation, release governance) and external dependencies (InfoSec reviews, PIA, BIA, TTAR assessments) are largely manual and sequential, constraining release frequency across all service lines.

Affects: Assurance, Consulting, EY Parthenon, Fabric, Tax
  • Assurance: High number of required approvals for a release permit; high number of stakeholders that need to be aware
  • Consulting: CAB documentation preparation time-consuming
  • EY Parthenon: Tools missing early in flow (e.g., InfoSec tests not in Dev pipeline)
  • Fabric: InfoSec sign-off is manual, slowing down release frequency
  • Tax: Lack of test and infosec impact awareness on many bugs after feature DOD; little reaction time and difficult to remediate before install

4. Sequential Handoffs & Context Loss

Role-based, sequential handoffs (BA to Dev to QA to Ops) cause context dilution, duplicate clarification requests, and non-value-add wait time at every gate.

Affects: Assurance, Consulting, EY Parthenon, Fabric, Tax
  • Assurance: High number of required approvals for a release permit; high number of stakeholders that need to be aware
  • Consulting: Infosec approval process and steps included are manual and time consuming with dependency on assigned resources
  • EY Parthenon: Unclear or unstable requirements → rework loops
  • Fabric: Knowledge silos
  • Tax: Requirements are not tracked from start to end, so user stories miss details. This causes rework and many clarification cycles with PM, engineers, QA, and SMEs.

5. Test Automation Deficit

Manual testing dominates across most teams. Regression, smoke, and security testing are manually executed each sprint, with inconsistent coverage between environments. A minimal test-skeleton generation sketch follows the list below.

Affects: Assurance, Consulting, EY Parthenon, Fabric, Tax
  • Assurance: BTC testing scripts are created manually
  • Consulting: Over‑processing: repeated regression testing
  • EY Parthenon: QA capability gap → poor test case writing, missing regression coverage
  • Fabric: Quality of code is poor due to low test coverage
  • Tax: Conducting manual testing in Production, UAT, and QA/DEV environments results in inconsistencies.
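
Shifting test design into the requirements phase can start as mechanically as generating test skeletons the moment acceptance criteria pass the DOR gate. The sketch below is deliberately naive: the Given/When/Then parsing is superficial and the emitted pytest stubs are placeholders, not generated test logic.

    # Naive sketch: turn Given/When/Then acceptance criteria into pytest stubs
    # at refinement time, so test design starts alongside the requirements.
    import re

    def to_test_stub(ac: str, idx: int) -> str:
        name = re.sub(r"[^a-z0-9]+", "_", ac.lower()).strip("_")[:60]
        return (f"def test_{idx}_{name}():\n"
                f'    """{ac}"""\n'
                f"    raise NotImplementedError  # fill in once the story is built\n")

    acceptance_criteria = [
        "Given a saved report, When the user clicks export, Then a CSV downloads",
        "Given no data, When the user clicks export, Then the button is disabled",
    ]
    print("\n".join(to_test_stub(ac, i)
                    for i, ac in enumerate(acceptance_criteria, 1)))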

6. Tool Fragmentation & Knowledge Silos

Requirements, design artifacts, test results, and compliance evidence live across Aha, ADO, SharePoint, email, and wikis. Tribal knowledge fills the gaps between tools.

Affects: Assurance, Consulting, EY Parthenon, Fabric, Tax
  • Assurance: Project information is scattered across tools (SharePoint, Power BI, OneNote, Teams, etc.)
  • Consulting: Absence of centralized repository for Product documentation
  • EY Parthenon: Tools missing early in flow (e.g., InfoSec tests not in Dev pipeline)
  • Fabric: Fragmented user experience, although fabric.ey.com is starting to fix a number of those issues
  • Tax: Lack of documentation and knowledge transfer across teams

7. Environment & Infrastructure Fragility

Environments are hand-built and inconsistent. Configuration drift between DEV/QA/UAT/PROD causes failures, burns test time, and delays releases. A minimal drift-check sketch follows the list below.

Affects: Assurance, Consulting, Fabric, Tax
  • Assurance: DevOps constraints and configuration management are significant challenges, driving many decisions in the development process, influencing workflows and creating downstream impacts on lower environments.
  • Consulting: Over‑processing: repeated testing across environments
  • Fabric: Development environments are not consistent
  • Tax: Environment build transitions are prone to misconfiguration and errors that burn valuable test time
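
Even before full IaC adoption, drift can be made visible by diffing environment configurations on every promotion. A minimal sketch over flat key/value configs follows; the settings shown are invented, and real configs would come from the IaC state or pipeline variables.

    # Minimal drift check: diff flat key/value configs between environments.
    def drift(ref: dict, other: dict) -> dict:
        """Map each drifting key to its (reference, other) value pair."""
        keys = ref.keys() | other.keys()
        return {k: (ref.get(k), other.get(k))
                for k in keys if ref.get(k) != other.get(k)}

    uat  = {"db_pool": 50,  "tls": "1.2", "feature_x": True}
    prod = {"db_pool": 200, "tls": "1.3", "feature_x": True}
    for key, (u, p) in sorted(drift(uat, prod).items()):
        print(f"drift: {key}: UAT={u} PROD={p}")
    # -> drift: db_pool: UAT=50 PROD=200
    #    drift: tls: UAT=1.2 PROD=1.3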

8. High AI-Readiness Signal

Each service line identified concrete AI opportunities: intake summarization, AC/story generation, DOR validation, test case generation, compliance pre-assembly, and release readiness scoring.

Affects: Assurance, Consulting, EY Parthenon, Fabric, Tax

Heatmaps


Pain & Waste Density by PDLC Stage

Number of pain points, root causes, and repetitive tasks mapped to each PDLC stage.

| Service Line | Ideation | Requirements | Planning | Development | Testing | Release | Ops |
| Assurance | 1 | 1 | 2 | 8 | 15 | 27 | 2 |
| Consulting | 13 | 52 | 34 | 60 | 116 | 63 | 4 |
| EY Parthenon | 14 | 26 | 5 | 5 | 15 | 5 | 3 |
| Fabric | 1 | 13 | 8 | 38 | 35 | 23 | 2 |
| Tax | 1 | 24 | 10 | 12 | 21 | 16 | 4 |

Finding Category by Service Line

| Service Line | Pain Points | Root Causes | Opportunities | Repetitive Tasks |
| Assurance | 17 | 9 | 9 | 5 |
| Consulting | 67 | 52 | 108 | 69 |
| EY Parthenon | 12 | 13 | 41 | 16 |
| Fabric | 41 | 23 | 27 | 22 |
| Tax | 24 | 18 | 33 | 24 |

Theme Presence by Service Line

| Theme | Assurance | Consulting | EY Parthenon | Fabric | Tax |
| Vague Requirements & Scope Instability | Low | High | High | High | High |
| Rework Loops & Late Defect Discovery | Medium | High | High | Low | Medium |
| Compliance & Release Overhead | High | High | Medium | High | High |
| Sequential Handoffs & Context Loss | Medium | High | High | Low | Low |
| Test Automation Deficit | Low | High | Low | High | High |
| Tool Fragmentation & Knowledge Silos | Medium | High | Low | High | High |
| Environment & Infrastructure Fragility | Medium | High | - | High | High |
| High AI-Readiness Signal | Yes | Yes | Yes | Yes | Yes |

Pain Point Intensity by Theme

Number of pain points per cross-cutting theme per service line, helping contextualize where pain is most acute.

| Theme | Assurance | Consulting | EY Parthenon | Fabric | Tax | Total |
| Vague Requirements & Scope Instability | 1 | 11 | 3 | 4 | 6 | 25 |
| Rework Loops & Late Defect Discovery | 2 | 4 | 7 | 1 | 2 | 16 |
| Compliance & Release Overhead | 3 | 8 | 0 | 2 | 2 | 15 |
| Sequential Handoffs & Context Loss | 2 | 13 | 6 | 1 | 1 | 23 |
| Test Automation Deficit | 0 | 1 | 0 | 3 | 1 | 5 |
| Tool Fragmentation & Knowledge Silos | 1 | 4 | 0 | 3 | 2 | 10 |
| Environment & Infrastructure Fragility | 4 | 3 | 0 | 4 | 4 | 15 |
| High AI-Readiness Signal | 0 | 0 | 0 | 0 | 0 | 0 |
| Total | 13 | 44 | 16 | 18 | 18 | 109 |

Key Findings Across All Service Lines

Representative findings from each service line, extracted directly from team submissions.

Note: Awaiting Assurance Opportunities & AI Ideas. Current mapping based on translation of pain points.

Pain Points (161)

  • Assurance: Dependencies with the Authorization Service, which was handled by the Core Platform team, were qualified as a big pain point.
  • Assurance: DevOps constraints and configuration management are significant challenges, driving many decisions in the development process, influencing workflows and creating downstream impacts on lower environments.
  • Assurance: DevOps - path to production is not optimized
  • Assurance: High number of required approvals for a release permit; high number of stakeholders that need to be aware
  • Consulting: High level requirements which are not benchmarked against actual Client needs
  • Consulting: Delay in scope freeze / Multiple rounds of clarification
  • Consulting: Multiple scope revisions post estimation
  • Consulting: Absence of centralized repository for Product documentation
  • EY Parthenon: Unclear or unstable requirements → rework loops
  • EY Parthenon: Architecture response time → waiting time bottleneck – amplifies loop
  • EY Parthenon: Missing acceptance criteria → creates rework → defects & slow QA
  • EY Parthenon: UX late involvement → late cycle churn
  • Fabric: Release cycles are slow
  • Fabric: Quality of code is poor due to low test coverage
  • Fabric: CI/CD processes are not consistent
  • Fabric: Support challenges
  • Tax: Late injects cause QA disruption and risk install integrity… need stricter definition of done
  • Tax: Lack of test and infosec impact awareness on many bugs after feature DOD; little reaction time and difficult to remediate before install
  • Tax: CAB deliverables and process can be more streamlined
  • Tax: Lengthy initial performance test setup time for major installs

Root Causes (115)

  • Assurance: BTC testing scripts are created manually
  • Assurance: Deployment requests require manual posts in Teams channels, often lacking complete information. While a template existed, it became obsolete, and now requests are mostly free-form, increasing the risk of errors.
  • Assurance: For most products, certification required extensive documentation and artifact mapping, which was a heavy and manual process
  • Assurance: Manual creation and update of enablement and go-to-market materials for each release, in a variety of formats, for distribution in various channels to varied target public. Manual formatting of slides to align with the latest EY templates.
  • Consulting: Insufficient early involvement of product and engineering
  • Consulting: Business inputs vary in quality and depth; Product & Engineering cannot derive actionable clarity.
  • Consulting: No automated traceability - requirements ↔ design ↔ test cases (eventually) are not auto-linked.
  • Consulting: Lack of Governance - Gaps in drafted requirements, flows, and integration mappings
  • EY Parthenon: Business alignment gaps → priorities shift, unclear problem statements
  • EY Parthenon: BA and Scrum teams not empowered to push back → incomplete requirements progress downstream
  • EY Parthenon: Architecture bandwidth constraint → availability and specialist skill constraint
  • EY Parthenon: Weak Definition of Ready understanding → inconsistent story quality
  • Fabric: Low skill level of product management
  • Fabric: Low skill level of engineering leadership and general engineering talent
  • Fabric: Limited awareness of cloud native solutions
  • Fabric: Limited knowledge of modern CI/CD approaches (this has recently been addressed with the new Fabric Developer Workflow)
  • Tax: Ambiguous or incomplete business requirements
  • Tax: Acceptance criteria needs work
  • Tax: No feature flagging meant unfinished code could ship
  • Tax: Absence of Automated Infrastructure – Recurring tasks are not streamlined or automated

Opportunities & AI Ideas (218)

  • Assurance: Automate compliance checklist: 16 manual approval steps per release (Code Freeze, QA Sign Off, UAT Sign Off, InfoSec, Performance, DevOps readiness, etc.) could be orchestrated via an automated release readiness workflow
  • Assurance: IaC-based environment provisioning: configuration management across ~30 applications and 15 data centers is the single biggest engineering challenge, with config issues causing ~80% of deployment failures
  • Assurance: Automate BTC testing scripts, which are currently created manually for each release
  • Assurance: Structured deployment request templates: replace free-form Teams channel posts with automated deployment request forms that enforce required fields
  • Consulting: AI-driven Requirement Assistant & Document generation: Automatically evaluates requirement submissions for completeness:
  • Consulting: detects missing flows
  • Consulting: flags unclear requirements
  • Consulting: suggests edge scenarios
  • EY Parthenon: AI‑Native Automation Opportunities
  • EY Parthenon: Auto‑generate problem statements, requirements drafts, epics, features, stories, and acceptance criteria from business input
  • EY Parthenon: Auto‑summarise business conversations/emails into structured intake forms
  • EY Parthenon: AI‑assisted UX wireframe extraction from requirements
  • Fabric: Traditional automation opportunities:
  • Fabric: All infrastructure should be defined as standard Landing Zones for common compute types, from Kubernetes and Container Apps to Databricks
  • Fabric: All code should be built and deployed using KaaS Single and Multi Tenant solutions
  • Fabric: All code should have full range of automated tests
  • Tax: Architecture and InfoSec involved early in feature creation; explicit gate
  • Tax: Update ADO user story (and bug) template to identify infosec and testing impact
  • Tax: Automated user story code review and PR approval
  • Tax: Infosec automated code inspection and certification

Repetitive Tasks (136)

  • Assurance: Critical path of the compliance process is 17 weeks long for a major release with 55 distinct tasks, with activities with external dependencies for 7 weeks, and multiple teams [weekly diagram and list of tasks available]
  • Assurance: Enablement materials tend to grow in size as new releases are added, but it's unclear to what degree users are finding them useful, or which parts are useful
  • Assurance: Post-release incident tracking and reporting is manual
  • Assurance: Product owners and developers spend significant time on manually preparing documentation (generating user stories, linking wireframes, prioritizing, etc.)
  • Consulting: Manual tracking of dependencies across teams
  • Consulting: Repeated clarifications due to high level requirement
  • Consulting: Design discussions and finalization is repetitive
  • Consulting: Define scope-freeze gates before estimation.
  • EY Parthenon: BA ↔ Business for clarity (multiple cycles)
  • EY Parthenon: BA ↔ Architecture for feasibility (slow cycles)
  • EY Parthenon: BA ↔ UX for aligning flows and screens
  • EY Parthenon: Dev ↔ BA for missing acceptance criteria
  • Fabric: Too many meetings
  • Fabric: Lack of firm requirements defined by product teams
  • Fabric: Engineering making engineering decisions
  • Fabric: Architects were previously involved only for sign-off and had no influence on the design of the Epic
  • Tax: Feature churn (although it’s gotten better)... Clarifying requirements, refactoring to smaller acceptance criteria, and estimation refinement
  • Tax: infosec testing lacks sufficient automation
  • Tax: Low Automation & Overreliance on Manual Testing
  • Tax: Decouple from GTP deployment process

Assurance

17
Pain Points
9
Root Causes
9
Opportunities
5
Repetitive Tasks

Products in Scope: Helix Spotfire Refactoring, Risk Radar Product Build

Note: Awaiting Assurance Opportunities & AI Ideas. Current mapping based on translation of pain points.

Pain Points (17)
  • Dependencies with the Authorization Service, which was handled by the Core Platform team, were qualified as a big pain point.
  • DevOps constraints and configuration management are significant challenges, driving many decisions in the development process, influencing workflows and creating downstream impacts on lower environments.
  • DevOps - path to production is not optimized
  • High number of required approvals for a release permit; high number of stakeholders that need to be aware
  • Managing configurations across approximately 30 applications and 15 data centers (each with extensive configuration lines) is the single biggest [engineering] challenge, with configuration issues causing about 80% of deployment issues.
  • Need for more actionable data for Scrum Masters, improved reporting, and dependency tracking in ADO
  • Results from the Adobe Analytics team were delayed for 1-2 sprints
  • Setting up environments and managing DevOps (such as getting approvals for creating environments) often involves other teams.
  • The Authorization Service feature was funded late. Budget allocation complexities and architectural decisions (building it as a micro front-end that can be a reusable component) complicated the process. As it was built extremely late due to scope debates, it falls outside the patterns used in Assurance.
  • Ideally, QA could trigger their own deployments, but DevOps must be involved to prevent surprises in upper environments, especially due to configuration changes. This adds inefficiency but reduces production risk.
  • Dependency tracking was often managed in Excel due to ADO’s limitations, especially in reporting, dashboard quality, and advanced queries
  • Enablement materials require updating PowerPoint decks in SharePoint with no Track Changes, which had to be compensated through manual tracking of edits made by the BPM and by the Activation & Enablement team. This process is common across projects.
  • OSS list is Excel-based
  • Project information is scattered across tools (SharePoint, Power BI, OneNote, Teams, etc.)
  • Shared Excel file used for tracking defects gathered during the BTC event required manual review of hundreds of entries and required BPM annotations. Defects were not grouped in any meaningful way and included duplications and repeated issues. The file failed to auto-save BPM's notes.
  • PIA keeps adding questions which are increasingly complex
  • Tracking usage and adoption requires additional planning and coordination with the ADI team
Root Causes (9)
  • BTC testing scripts are created manually
  • Deployment requests require manual posts in Teams channels, often lacking complete information. While a template existed, it became obsolete, and now requests are mostly free-form, increasing the risk of errors.
  • For most products, certification required extensive documentation and artifact mapping, which was a heavy and manual process
  • Manual creation and update of enablement and go-to-market materials for each release, in a variety of formats, for distribution in various channels to varied target public. Manual formatting of slides to align with the latest EY templates.
  • Product owners and developers spend significant time on preparing for and attending refinement sessions
  • Product teams get repeated requests to prepare and do demos for various audiences
  • Significant time, resources, and budget are spent implementing features without knowing how they will be adopted by users
  • Tasking out user stories
  • The drag-and-drop feature was added late (one sprint before release)
Opportunities & AI Ideas (9)
  • Automate compliance checklist: 16 manual approval steps per release (Code Freeze, QA Sign Off, UAT Sign Off, InfoSec, Performance, DevOps readiness, etc.) could be orchestrated via an automated release readiness workflow
  • IaC-based environment provisioning: configuration management across ~30 applications and 15 data centers is the single biggest engineering challenge, with config issues causing ~80% of deployment failures
  • Automate BTC testing scripts, which are currently created manually for each release
  • Structured deployment request templates: replace free-form Teams channel posts with automated deployment request forms that enforce required fields (a minimal validation sketch follows this list)
  • AI-assisted defect triage: BTC defect tracking via shared Excel required manual review of hundreds of entries with duplications; automate grouping and deduplication
  • Automate enablement material generation: manual creation and formatting of go-to-market materials for each release across multiple channels and formats
  • Automate certification artifact mapping: documentation and artifact mapping for product certification is heavy and manual
  • Self-service QA deployments: allow QA to trigger their own deployments without DevOps dependency, with automated config validation gates
  • Reduce compliance critical path: 17-week compliance process for major releases with 55 tasks; identify parallelization and automation opportunities
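
The structured deployment request template above is, at its core, schema validation. A minimal sketch follows; the required fields are invented for illustration and would be defined by DevOps in practice.

    # Sketch of a structured deployment request replacing free-form Teams posts:
    # reject any request missing required fields. Field names are assumptions.
    REQUIRED = ["app", "version", "target_env", "config_changes", "rollback_plan"]

    def validate_request(req: dict) -> list:
        """Return the required fields that are missing or empty."""
        return [f for f in REQUIRED if not req.get(f)]

    req = {"app": "Risk Radar", "version": "4.2.1", "target_env": "UAT"}
    missing = validate_request(req)
    print("rejected, missing:", ", ".join(missing) if missing else "none")
    # -> rejected, missing: config_changes, rollback_plan
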
Repetitive Tasks (5)
  • Critical path of the compliance process is 17 weeks long for a major release with 55 distinct tasks, with activities with external dependencies for 7 weeks, and multiple teams [weekly diagram and list of tasks available]. Most issues stem from the tasks in the last 2 weeks before release, which cann
  • Enablement materials tend to grow in size as new releases are added, but it's unclear to what degree users are finding them useful, or which parts are useful
  • Post-release incident tracking and reporting is manual
  • Product owners and developers spend significant time on manually preparing documentation (generating user stories, linking wireframes, prioritizing, etc.)
  • The Infosec process, especially MSB scans and GIS manual reviews, was identified as the most significant source of delays due to slow response times, unforeseen personnel changes, and communication delays due to time zone differences.
Additional Notes (22)
  • Go Live
  • Major Release
  • Minor Release
  • New Data Center
  • New Country
  • Procurement New
  • Procurement Renewal
  • Code Freeze
  • QA Envt Sign Off
  • UAT Envt QA team Sign off
  • UAT Envt Business team Sign off
  • Compatibility/Backwards test Sign off (if applicable)
  • DevOps Confirmation for Prod Envt readiness
  • Create Tag format tradition - Dev Lead
  • Tag format Review - Release Manager
  • InfoSec testing Sign off
  • Performance testing sign off (if applicable)
  • Catalogue SR by Compliance Manager (if applicable)
  • Local Support Confirmation
  • Neon Release Permit (if applicable) - By PO
  • Dependencies List (if applicable)
  • Go-No Go Call - 1 Week prior Alex's Checklist Review
All Slide Details (Raw)

Assurance Risk Radar - Phase 1 Content.pptx

Slide 3: Delivery Processes on a Page (Pain Points) Assurance: Risk Radar
Type: table

Pain Points:

  • Dependencies with the Authorization Service, which was handled by the Core Platform team, were qualified as a big pain point.
  • DevOps constraints and configuration management are significant challenges, driving many decisions in the development process, influencing workflows and creating downstream impacts on lower environments.
  • DevOps - path to production is not optimized
  • High number of required approvals for a release permit; high number of stakeholders that need to be aware
  • Managing configurations across approximately 30 applications and 15 data centers (each with extensive configuration lines) is the single biggest [engineering] challenge, with configuration issues causing about 80% of deployment issues.
  • Need for more actionable data for Scrum Masters, improved reporting, and dependency tracking in ADO
  • Results from the Adobe Analytics team were delayed for 1-2 sprints
  • Setting up environments and managing DevOps (such as getting approvals for creating environments) often involves other teams.
  • The Authorization Service feature was funded late. Budget allocation complexities and architectural decisions (building it as a micro front-end that can be a reusable component) complicated the process. As it was built extremely late due to scope debates, it falls outside the patterns used in Assurance.
  • Ideally, QA could trigger their own deployments, but DevOps must be involved to prevent surprises in upper environments, especially due to configuration changes. This adds inefficiency but reduces production risk.
  • Dependency tracking was often managed in Excel due to ADO’s limitations, especially in reporting, dashboard quality, and advanced queries
  • Enablement materials require updating PowerPoint decks in SharePoint with no Track Changes, which had to be compensated through manual tracking of edits made by the BPM and by the Activation & Enablement team. This process is common across projects.
  • OSS list is Excel-based
  • Project information is scattered across tools (SharePoint, Power BI, OneNote, Teams, etc.)
  • Shared Excel file used for tracking defects gathered during the BTC event required manual review of hundreds of entries and required BPM annotations. Defects were not grouped in any meaningful way and included duplications and repeated issues. The file failed to auto-save BPM's notes.
  • PIA keeps adding questions which are increasingly complex
  • Tracking usage and adoption requires additional planning and coordination with the ADI team

Root Causes:

  • BTC testing scripts are created manually
  • Deployment requests require manual posts in Teams channels, often lacking complete information. While a template existed, it became obsolete, and now requests are mostly free-form, increasing the risk of errors.
  • For most products, certification required extensive documentation and artifact mapping, which was a heavy and manual process
  • Manual creation and update of enablement and go-to-market materials for each release, in a variety of formats, for distribution in various channels to varied target public. Manual formatting of slides to align with the latest EY templates.
  • Product owners and developers spend significant time on preparing for and attending refinement sessions
  • Product teams get repeated requests to prepare and do demos for various audiences
  • Significant time, resources, and budget are spent implementing features without knowing how they will be adopted by users
  • Tasking out user stories
  • The drag-and-drop feature was added late (one sprint before release)

Repetitive Tasks:

  • Critical path of the compliance process is 17 weeks long for a major release with 55 distinct tasks, with activities with external dependencies for 7 weeks, and multiple teams [weekly diagram and list of tasks available]. Most issues stem from the tasks in the last 2 weeks before release.
  • Enablement materials tend to grow in size as new releases are added, but it's unclear to what degree users are finding them useful, or which parts are useful
  • Post-release incident tracking and reporting is manual
  • Product owners and developers spend significant time on manually preparing documentation (generating user stories, linking wireframes, prioritizing, etc.)
  • The Infosec process, especially MSB scans and GIS manual reviews, was identified as the most significant source of delays due to slow response times, unforeseen personnel changes, and communication delays due to time zone differences.
Slide 8: Compliance Process Effort
Type: table

Additional:

  • Go Live
  • Major Release
  • Minor Release
  • New Data Center
  • New Country
  • Procurement New
  • Procurement Renewal
Slide 9: Compliance Checklist for Releases
Type: table

Additional:

  • Code Freeze
  • QA Envt Sign Off
  • UAT Envt QA team Sign off
  • UAT Envt Business team Sign off
  • Compatibility/Backwards test Sign off (if applicable)
  • DevOps Confirmation for Prod Envt readiness
  • Create Tag format tradition - Dev Lead
  • Tag format Review - Release Manager
  • InfoSec testing Sign off
  • Performance testing sign off (if applicable)
  • Catalogue SR by Compliance Manager (if applicable)
  • Local Support Confirmation
  • Neon Release Permit (if applicable) - By PO
  • Dependencies List (if applicable)
  • Go-No Go Call - 1 Week prior Alex's Checklist Review

Consulting

67
Pain Points
52
Root Causes
108
Opportunities
69
Repetitive Tasks

Products in Scope: EY.AI for Risk-Internal Audit, Marketing.AI, EY Workforce Platform (EYWP)

Pain Points (67)
  • High level requirements which are not benchmarked against actual Client needs
  • Delay in scope freeze / Multiple rounds of clarification
  • Multiple scope revisions post estimation
  • Absence of centralized repository for Product documentation
  • Gaps between product expectations and dev understanding
  • Spillover from development to subsequent sprints
  • QA dependency on incomplete or unclear acceptance criteria
  • UAT observations logged manually in SharePoint
  • Duplicate bugs or repeated observations
  • Business misunderstanding feature behaviour
  • CAB documentation preparation time-consuming
  • Missing artifacts require follow-ups
  • Business or QA signoff delays final deployment
  • Infosec approval process and steps included are manual and time consuming with dependency on assigned resources
  • No automated readiness dashboard
  • Requirement discussion and backlog creation are separate processes; they should be one.
  • Lack of clarity in priorities/competing priorities.
  • Incomplete or unclear requirements (to a large extent Replit can help with this)
  • Planning delay due to lack of knowledge in external systems like Salesforce, Dynamics and unavailability of experts.
  • Lack of Sandbox
  • License procurement is a time consuming process
  • Improve PR Review
  • QA lags Dev.
  • Lack of optimal automation coverage
  • Formation of the team
  • Delay in code being deployed in QA
  • Testing stories in isolation and not looking at the entire flow.
  • Elaborate QA sign off process
  • Waiting: releases blocked by approval queues
  • Over‑processing: repeated testing across environments
  • CAB process is manual
  • Lack of availability of Production equivalent data in lower environment
  • Effective UAT not conducted
  • Waiting on clarifications, reviews, and approvals
  • Context loss across BA → Eng → QA → Business
  • Late discovery of defects
  • Rework and repeated validation
  • Inventory of “done but unreleased” work
  • Process delays associated with CAB approval
  • Lack of production like data
  • Challenges in generation of synthetic data
  • Testing and validation repeated across environments
  • License procurement
  • Waiting: stories blocked pending clarification from BA/BPM
  • Over‑processing: same requirement rewritten multiple times
  • Context loss: original intent diluted across handoffs
  • Non‑utilized talent: engineers involved late, not during ideation
  • Defects (upstream): unclear requirements discovered during dev/test
  • Transportation waste: information moved across tools instead of flowing
  • Waiting: planning delayed due to unclear stories
  • Over‑processing: re‑breaking down poorly shaped stories
  • Inventory: partially planned work carried sprint‑to‑sprint
  • Motion: constant context switching during sprint start
  • Defects (upstream): planning assumptions invalidated during dev
  • Waiting: PRs stalled awaiting review or clarification
  • Defects: logic issues found during review or later testing
  • Over‑processing: repeated fixes due to unclear requirements
  • Motion: developers switching between code, chat, ADO, PRs
  • Non‑utilized talent: engineers resolving requirement ambiguity instead of building
  • Waiting: QA blocked waiting for fixes
  • Defects: issues discovered late in lifecycle
  • Over‑processing: repeated regression testing
  • Motion: switching between ADO, test tools, builds, chat
  • Extra‑processing: re‑testing unchanged areas
  • Motion: heavy coordination overhead
  • Inventory: completed work waiting for release window
  • Defects: late issues discovered during UAT or Stg
Root Causes (52)
  • Insufficient early involvement of product and engineering
  • Business inputs vary in quality and depth; Product & Engineering cannot derive actionable clarity.
  • No automated traceability - requirements ↔ design ↔ test cases (eventually) are not auto-linked.
  • Lack of Governance - Gaps in drafted requirements, flows, and integration mappings
  • Ambiguity in freeze criteria: no strict “definition of ready”; Engineering receives stories at varying maturity levels.
  • User stories lacking precision: acceptance criteria not fully scenario-based, so developers interpret differently.
  • Siloed communication dynamics: BA, Dev, and QA ask similar questions but in different channels.
  • Manual tracking in ADO: no automated prompts for missing fields or incomplete requirements.
  • No predictive visibility of sprint risk: capacity issues or spikes in complexity discovered only mid-sprint.
  • QA dependent on tribal knowledge: test coverage relies on personal understanding rather than system-generated test scenarios.
  • Business does not see a consolidated view: no AI-based grouping of observations, so duplicates multiply.
  • Manual classification effort: Product & Engineering manually categorize items (bug/enhancement/change request).
  • Lack of automated impact analysis: teams manually determine how UAT issues affect timelines or scope.
  • Release readiness info scattered across systems: UAT signoff → SharePoint; QA reports → email or internal tools; InfoSec → email or internal tools; Hardening/PT → engineering artifacts.
  • Manual collation of artifacts: teams must gather documentation manually before submitting to CAB.
  • No automated compliance checks: CAB checks become human-driven.
  • Sequential hand off
  • Different sources of truth
  • Lack of adequate market analysis
  • Planning is driven by capacity; rethink in terms of value and GTM
  • Estimation process is largely manual and unstated requirements are not handled.
  • Inconsistent use of automation
  • Knowledge is embedded in individuals
  • Automation is not yet mature
  • Lack of awareness of the entire flow
  • Release process is approval‑heavy and manual
  • Testing and validation repeated across environments
  • Late discovery of issues
  • Production release constrained by cadence
  • Fragmented tooling and sources of truth
  • Sequential role‑based handoffs
  • Late technical and business validation
  • Heavy reliance on individuals (BA, QA, reviewers)
  • Control‑driven release model compensating for late risk discovery
  • Fragmented source of truth
  • Manual translation between systems
  • Late technical validation
  • Weak feedback loop from delivery
  • Stories enter planning without being truly “ready”
  • Planning is capacity‑driven, not outcome‑driven
  • Manual estimation with low feedback learning
  • Dependencies identified too late
  • Single sprint is treated as planning horizon
  • Development starts with incomplete context
  • PR‑centric workflow concentrates risk
  • Late discovery of defects
  • Testing is primarily manual
  • Defects discovered late
  • Weak linkage between requirements and tests
  • High fix‑verification churn
  • Environment‑driven constraints
  • High coordination overhead
Opportunities & AI Ideas (108)
  • AI-driven Requirement Assistant & Document generation: automatically evaluates requirement submissions for completeness:
      - detects missing flows
      - flags unclear requirements
      - suggests edge scenarios
      - auto-generates acceptance criteria
  • AI Q&A Repository / Knowledge Engine: automatically clusters past clarifications and suggests answers to similar new questions (a minimal retrieval sketch follows this list)
  • AI-based code assistant: suggests code patterns, detects integration dependencies, predicts defects early
  • AI-driven Acceptance Criteria Enhancer: takes AC and expands them into scenario-based detailed test steps
  • AI Sprint Risk Predictor: using past sprint data and current story complexity:
      - predicts story spillover
      - flags stories needing refinement
      - highlights potential capacity bottlenecks
  • AI Test Case Generator: generates test cases from user stories automatically
  • AI-powered UAT Assistant:
      - reads UAT submissions
      - removes duplicates
      - suggests classification (bug/enhancement/change request)
      - groups similar issues
  • AI Chatbot for Business FAQs:
      - explains feature behavior
      - reduces repeated queries
      - helps business validate expected vs actual behavior
  • AI Release Readiness Scoring Engine: pulls data from:
      - QA reports
      - UAT signoffs
      - InfoSec gating
      - performance reports, and generates a readiness score
  • Automated CAB Documentation Compiler: auto-generates a CAB packet:
      - UAT signoff summary
      - QA certification
      - risk summary
      - PT/Load results
      - known issues list
  • AI-based Compliance Checker:
      - automatic trigger of InfoSec/PIA/QRM compliance as per changes in product architecture and build updates
      - ensures no missing documents before CAB submission
  • Traditional:
      - explore ways of improving the market analysis
      - leverage AI to identify the differentiators
  • Traditional automation opportunities:
      - estimation
      - identification of risks and issues
  • AI/agentic opportunities:
      - accelerate the Tech Spikes and try to conclude during planning
  • Improve Definition of Ready before development starts
  • Enforce minimum testing expectations per story
  • Reduce reliance on review stage to catch basic issues
  • Repetitive code generation and refactoring patterns
  • Leverage the knowledge from the PR Comments
  • Generation of test scripts
  • Embed the security process in sprint
  • Auto suggestion of profiles against required JD
  • Automate regression and run it frequently.
  • Test the feature, not the individual stories
  • Simplify bug logging process.
  • Leverage tools like Playwright to automate the test case generation and recording results.
  • Developers to run the test cases before handing over to QA
  • Reduce redundant testing
  • Simplify CAB approval process
  • Release checks and build quality check
  • Preparation of production equivalent data
  • Reduce duplicate documentation and re‑entry
  • Improve confidence earlier in the lifecycle
  • Set Agent workflow development with human intervention
  • Automation in QA
  • Test case and execution
  • Approval
  • CAB process automation based on different requirements
  • Infosec certified RAI Approved Code cartridges/APIs central repo
  • Performance testing to be shifted left leveraging Agents
  • Reduce reliance on late‑stage testing and approvals
  • Operate cost monitoring
  • Available profiles match against JD
  • Reduce duplicate documentation between Aha, Confluence, and ADO
  • Enforce consistent requirement structure before story creation
  • Earlier involvement of engineering during requirement shaping
  • Standardize acceptance‑criteria quality gates before handoff
  • High volume of repetitive requirement rewriting
  • Pattern‑based clarification questions recurring across stories
  • Manual consistency checks between Aha → ADO artifacts
  • Opportunity to reduce BPM cognitive load caused by re‑entry and re‑interpretation
  • Enforce stronger “Definition of Ready” before planning
  • Standardize estimation inputs across teams
  • Make dependency identification explicit and visible in tooling
  • Repetitive task breakdown patterns
  • Repeated estimation of similar work items
  • Manual dependency detection across teams
  • Standardize PR templates to capture intent and context
  • Recurrent PR review comments across services
  • Manual quality and consistency checks during review
  • Opportunity to shift issue detection earlier than PR review
  • Improve alignment between acceptance criteria and test cases
  • Reduce manual regression through better test prioritization
  • Shift defect detection earlier in the lifecycle
  • Standardize defect severity and triage practices
  • Repetitive test execution patterns
  • Predictable regression scenarios across sprints
  • Manual defect classification and triage effort
  • Opportunity to reduce late‑stage defect discovery
  • Reduce redundant testing between environments
  • Improve confidence carry‑over from earlier stages
  • Simplify approval flows where risk is low
  • Reduce dependency on fixed release windows
  • Repetitive promotion and validation patterns
  • Manual release readiness checks
  • Manual defect risk assessment during release decisions
  • Opportunity to reduce late‑stage surprises
  • Smooth flow between stages instead of gated transitions
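
The Q&A Repository idea above is, at bottom, a retrieval problem. A minimal stdlib-only sketch follows, using fuzzy string matching in place of embeddings; the stored questions and answers are invented, and a production version would use semantic search over the real clarification history.

    # Minimal sketch of a clarification Q&A lookup via fuzzy matching.
    import difflib

    PAST_CLARIFICATIONS = {
        "What does export include?": "All findings plus reviewer notes.",
        "Which roles can approve a release?": "Release Manager and PO only.",
    }

    def suggest_answer(question: str, cutoff: float = 0.5):
        """Return the best-matching past answer, or None if nothing is close."""
        match = difflib.get_close_matches(
            question, PAST_CLARIFICATIONS.keys(), n=1, cutoff=cutoff)
        return PAST_CLARIFICATIONS[match[0]] if match else None

    print(suggest_answer("what does the export include"))
    # -> "All findings plus reviewer notes." (fuzzy match on the first question)
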
Repetitive Tasks (69)
  • Manual tracking of dependencies across teams
  • Repeated clarifications due to high level requirement
  • Design discussions and finalization is repetitive
  • Define scope-freeze gates before estimation.
  • Standardize Dev-to-QA-to-UAT handover checklists.
  • Decision tracking and communication
  • Rewriting similar code patterns across modules
  • Manually reviewing logs to debug issues
  • Manually updating ADO states
  • Manual peer review cycles
  • Reproducing scenarios manually while Unit testing
  • Repetitive feasibility checks across components
  • Maintenance of manual dependency tracker
  • Handover of Pre-UAT post dev code freeze
  • Detailed steps (reiterative) to be shared to capture feedback/ observation on deployed scope
  • Collation of build artifacts for CAB signoff
  • Publishing Release notes for every release
  • Multiple standard communications (internal and external) on deployment, downtime, deployment status, etc.
  • Manual compliance process to ensure compliance on PIA/BIA/QRM
  • QA to Infosec handover to Signoff
  • Update of Product status on multiple portals – Product Registry, Empire, Release notes, PdM status updates and Weekly Governance report
  • Repetition of the standard backlog of EY standards in every project; this should be knowledge that is readily available for consumption.
  • PIA and BIA are largely manual
  • Standards at every step have to be added manually, this needs to be automated.
  • Development env setup
  • For new projects, the starter kit should be prepared.
  • PR Review can be automated, as it is totally manual
  • CI/CD configuration
  • Preparation of BAD for Infosec.
  • Regression
  • Manual execution of similar test cases
  • Time spent on recording and documentation of the issues.
  • Release notes
  • Consolidating CAB artefacts
  • Late identification of defects
  • Aha → ADO manual translation.
  • Requirements rewritten multiple times
  • Manual estimation and task breakdown every sprint
  • Manual test execution and regression execution
  • Manual environment promotions and approvals
  • Testing and validation repeated across environments
  • Aha → ADO transition
  • PPT → ADO manual re‑entry
  • BPM ↔ Stakeholders multiple clarification loops
  • BPM ↔ Dev post‑handoff clarifications
  • Re‑work when downstream teams reinterpret intent
  • BPM ↔ ENG clarification during planning
  • ENG ↔ BPM dependency discussions repeated mid‑sprint
  • Re‑estimation when hidden work surfaces
  • Carry‑over stories due to mis‑sizing
  • Developer picks up assigned story (ENGG)
  • Local development & implementation (ENGG)
  • Local testing & fixes (ENGG)
  • Pull Request creation (ENGG)
  • PR clarification loop (ENGG ↔ Reviewer / BA)
  • ENG ↔ BA clarifications during development
  • ENG ↔ Reviewer discussions on intent vs implementation
  • Re‑work when acceptance criteria are interpreted differently
  • Manual execution of similar test cases every sprint
  • Manual regression testing across environments
  • Manual verification after each fix
  • QA ↔ Dev for late defect fixes
  • DevOps ↔ QA for environment readiness
  • Business ↔ QA for UAT clarifications
  • Re‑promotion cycles across environments
  • Manual approval requests at each environment
  • Manual coordination of release timing
  • Manual release notes and status updates
  • Manual test execution and regression
Process Steps (88)
  • Input: (Business, Pdm, Eng, BA)
  • Business Overview: Business teams articulate the objective, value proposition, and strategic alignment.
  • Cross-Functional Discovery: Workshops to clarify expectations, identify risks, and understand edge cases. 3. Dependency Assessment: Early identification of UI/UX needs, data model impacts, and POCs. 4. Estimation & Release Planning: Effort sizing and sprint-level planning. 5. Design Finalization:
  • Output:
  • Stable and agreed‑upon scope with structured approach
  • Clarified enhancement requirements, stronger feasibility insights, and sprint‑ready backlog items
  • Improved risk visibility, clarified dependencies, feasible design direction, reliable planning, and sprint‑aligned delivery
  • Requirements Gathering
  • Develop and Testing
  • Input:
  • Product & Integration Manual: Product team documents detailed requirements and integration flows.
  • User Story Creation: User stories and features are created in ADO and assigned during sprint planning.
  • Scope Sizing /Effort Estimation
  • Sprint planning
  • Alignment on Release timelines
  • Alignment on Functional flow
  • Dev work to kickstart based on agreed scope and Technical approach
  • Engineering Alignment: Dev and QA leads participate in requirement finalization discussions. 2. Internal Engineering Communication: Leads brief extended teams on upcoming development scope. 3. Daily Cadence: Daily standups and BA–Engineering discussions ensure progress and resolve clarifications. 4.
  • Team awareness and update on requirements 2. Alignment on requirement and technical approach
  • Build Handover: Early UAT begins while QA validation is in progress to capture early business feedback. 2. Feedback Logging: SharePoint portal used for UAT comments; integrated with ADO. 3. Internal Review: Product and Engineering regularly review feedback and classify issues. 4. Triage Call: Alignm
  • Early feedback – Issues/ Observations on Business expectations
  • On-time resolution of Issues / observations
  • Input: Business, Product, Project Manager
  • Documentation Review: CAB reviews UAT signoff, InfoSec approvals, QRM reports, PT/Load/Hardening reports. 2. Release Approval: CAB authorizes deployment upon complete compliance review.
  • Approval on mandatory artifacts – PIA, QRM report, BIA, Architecture diagram update
  • Governance process
  • Create the initial prototype for the application using Replit (BPM, PDM)
  • Stakeholder alignment on the requirements (Business)
  • Formation of the Epics, Stories etc (PDM, BA) – Leveraging SLA and Factory.AI
  • Requirement analysis and third party integration (Engg)
  • Support on 3rd party integration (SME)
  • Market analysis and research leveraging the AI Tools.
  • Evaluation of the existing tools that deliver similar functionality and gap analysis.
  • Competitor analysis
  • Replit prototype
  • Features at high level
  • PI Planning with high level T shirt sizing
  • Release Planning
  • Identification of the dependencies
  • Identification of required tech spike
  • Issue and risk Register
  • Detailed project plan
  • Initiation of PIA and BIA
  • High Level Reference Architecture
  • Sprint backlog committed in ADO
  • User Stories with acceptance criteria (variable quality)
  • Task breakdowns created during planning
  • Dependencies and assumptions (often informal)
  • Open Pull Request in GitHub
  • Code ready for review (quality varies by developer)
  • Sprint demo
  • Code deployed in QA
  • QA performs the testing of the features
  • Defects logged
  • QA Validated Build
  • Infosec certified app
  • QRM certified app
  • Prepare release notes
  • Deployment
  • Build is moved to prod
  • Business / Client requirement raised outside delivery tooling (end users)
  • Feature / Epic definition created upstream (Aha)
  • User Stories created in ADO
  • Acceptance criteria documented (variable quality)
  • Assumptions and constraints scattered across tools
  • Limited traceability back to original problem statement
  • Approved User Stories in ADO (from Ideation / Requirements Gathering)
  • Feature / Epic context (often incomplete or stale)
  • Sprint / PI planning timelines
  • Capacity assumptions (team‑level)
  • Tasks assigned to developers
  • Dependencies tracked informally
  • Limited traceability back to original intent
  • Tests partially present or missing
  • Merged code from GitHub (post‑PR approval)
  • Deployed build in Dev‑Int
  • Sprint scope as implemented (often with deviations from original intent)
  • Limited test documentation from development phase
  • Tested build in Dev‑Int
  • Defects logged and partially resolved
  • Test coverage varies by feature and sprint
  • QA‑validated build from UAT
  • Open defects resolved or deferred
  • Release notes (manually prepared)
  • Environment availability and release window constraints
  • Code deployed to Production
  • Business sign‑off recorded
  • Release completed within sprint boundary
Additional Notes (48)
  • Stakeholders:
  • Business
  • Engineering
  • Tools:
  • Internal tools
  • Factory
  • Scrum Master/ Project Manager
  • Sharepoint / Internal tools
  • Product
  • Scrum Master
  • Copilot
  • Github
  • Engineering – Dev & QA
  • Stakeholders:
  • Project Manager
  • Sharepoint
  • Internal portals for tracking compliance etc
  • Product Management
  • Architecture/ Engineering
  • QA (indirect, downstream)
  • Most ideation defects are discovered after development starts
  • Waste introduced here amplifies across Build, Test, and Release
  • This stage sets the quality ceiling for the entire PDLC
  • Engineering Leads
  • Developers
  • TPM / Delivery Managers
  • Leverage Factory to accelerate Tech spike.
  • Engg
  • Devops
  • Team uses factory.AI for development
  • QA Engineers
  • Business Analysts (clarifications)
  • TPM / Delivery Leads
  • DevOps (environment support)
  • Business / Product Owners
  • Delivery / TPM
  • Future teams should have a higher composition of domain knowledge roles.
  • Focus on the costing model
  • Business / Client
  • PR Reviewers (Senior Devs / Leads)
  • QA (downstream)
  • DevOps (downstream)
  • Release stage amplifies all upstream inefficiencies
  • Conservative release model is a response to late risk discovery
  • Most “slow delivery” perception is visible here, but created earlier
  • Most delays visible in Release originate upstream
  • QA and Release absorb quality gaps created earlier
  • Current PDLC optimizes for control over flow
All Slide Details (Raw)

Internal Audit.pptx

Slide 1: Ideation | Requirements Gathering SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Repetitive Tasks:

  • Manual tracking of dependencies across teams
  • Repeated clarifications due to high level requirement
  • Design discussions and finalization is repetitive

Process Steps:

  • Input: (Business, Pdm, Eng, BA)
  • Business Overview: Business teams articulate the objective, value proposition, and strategic alignment.
  • Cross-Functional Discovery: Workshops to clarify expectations, identify risks, and understand edge cases.
  • Dependency Assessment: Early identification of UI/UX needs, data model impacts, and POCs.
  • Estimation & Release Planning: Effort sizing
  • Output:
  • Stable and agreed‑upon scope with structured approach
  • Clarified enhancement requirements, stronger feasibility insights, and sprint‑ready backlog items
  • Improved risk visibility, clarified dependencies, feasible design direction, reliable planning, and sprint‑aligned delivery

Pain Points:

  • High-level requirements that are not benchmarked against actual client needs
  • Delay in scope freeze / Multiple rounds of clarification
  • Multiple scope revisions post estimation
  • Absence of centralized repository for Product documentation
Slide 2: Ideation | Requirements Gathering SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Root Causes:

  • Insufficient early involvement of product and engineering
  • Business inputs vary in quality and depth; Product & Engineering cannot derive actionable clarity.
  • No automated traceability: requirements ↔ design ↔ test cases (eventually) are not auto-linked.
  • Lack of governance: gaps in drafted requirements, flows, and integration mappings

Opportunities:

  • AI-driven Requirement Assistant & Document generation: Automatically evaluates requirement submissions for completeness:
  • detects missing flows
  • flags unclear requirements
  • suggests edge scenarios
  • auto-generates acceptance criteria
  • AI Q&A Repository / Knowledge Engine Automatically clusters past clarifications and suggests answers to similar new questions.
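
As a concrete illustration of the Requirement Assistant idea above, here is a minimal sketch of the deterministic core it could start from: rule-based completeness and vagueness checks, with an LLM layer supplying the missing-flow detection and acceptance-criteria generation. The section names, vague-term list, and scoring are illustrative assumptions, not an agreed specification.

```python
# Minimal sketch of the rule-based core of an AI-driven Requirement Assistant.
# REQUIRED_SECTIONS and VAGUE_TERMS are illustrative assumptions to refine
# with BAs; an LLM layer would handle missing-flow detection and AC drafting.

REQUIRED_SECTIONS = [
    "problem statement",
    "business value",
    "acceptance criteria",
    "dependencies",
    "edge cases",
]

VAGUE_TERMS = ["tbd", "etc", "as appropriate", "user friendly", "robust"]


def evaluate_requirement(text: str) -> dict:
    """Score a requirement submission for completeness and flag unclear wording."""
    lowered = text.lower()
    missing = [s for s in REQUIRED_SECTIONS if s not in lowered]
    vague = [t for t in VAGUE_TERMS if t in lowered]
    return {
        "completeness_score": round(1 - len(missing) / len(REQUIRED_SECTIONS), 2),
        "missing_sections": missing,
        "vague_phrases": vague,
        "ready_for_backlog": not missing and not vague,
    }


if __name__ == "__main__":
    draft = """Problem statement: reduce audit prep time.
    Acceptance criteria: TBD. The screens should be user friendly."""
    print(evaluate_requirement(draft))
```

In practice a checker like this would run on every intake submission and return its flags to the submitter before a BA picks the item up.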

Additional:

  • Stakeholders:
  • Business
  • Engineering
  • Tools:
  • Internal tools
  • Factory
Slide 3: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop (and Testing)
Slide 4: Build | Plan SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Process Steps:

  • Input:
  • Product & Integration Manual: Product team documents detailed requirements and integration flows.
  • User Story Creation: User stories and features are created in ADO and assigned during sprint planning.
  • Scope Sizing /Effort Estimation
  • Sprint planning
  • Output:
  • Alignment on Release timelines
  • Alignment on Functional flow
  • Dev work to kickstart based on agreed scope and Technical approach

Repetitive Tasks:

  • Define scope-freeze gates before estimation.
  • Standardize Dev-to-QA-to-UAT handover checklists.
  • Decision tracking and communication

Pain Points:

  • Delay in scope freeze / Multiple rounds of clarification
  • Multiple scope revisions post estimation
Slide 5: Build | Plan SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Root Causes:

  • Ambiguity in freeze criteria: no strict “Definition of Ready”, so Engineering receives stories at varying maturity levels.

Opportunities:

  • AI Q&A Repository / Knowledge Engine - Automatically clusters past clarifications and suggests answers to similar new questions.

Additional:

  • Stakeholders:
  • Business
  • Engineering
  • Scrum Master/ Project Manager
  • Tools:
  • Sharepoint / Internal tools
  • Factory
Slide 6: Process Mapping & Analysis
Type: narrative

Process Steps

  • Develop and Testing
  • Requirements Gathering
Slide 7: Build | Development & Testing SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Process Steps:

  • Input:
  • Engineering Alignment: Dev and QA leads participate in requirement finalization discussions.
  • Internal Engineering Communication: Leads brief extended teams on upcoming development scope.
  • Daily Cadence: Daily standups and BA–Engineering discussions
  • Output:
  • Team awareness and updates on requirements
  • Alignment on requirements and technical approach

Repetitive Tasks:

  • Rewriting similar code patterns across modules
  • Manually reviewing logs to debug issues
  • Manually updating ADO states
  • Manual peer review cycles
  • Reproducing scenarios manually while Unit testing
  • Repetitive feasibility checks across components
  • Maintenance of manual dependency tracker

Pain Points:

  • Gaps between product expectations and dev understanding
  • Spillover from development to subsequent sprints
  • QA dependency on incomplete or unclear acceptance criteria
Slide 8: Build | Development SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Root Causes:

  • User stories lacking precision: acceptance criteria are not fully scenario-based, so developers interpret them differently.
  • Siloed communication dynamics: BA, Dev, and QA ask similar questions but in different channels.
  • Manual tracking in ADO: no automated prompts for missing fields or incomplete requirements.
  • No predictive visibility of sprint risk: capacity issues or spikes in complexity are discovered only mid-sprint.
  • QA dependent on tribal knowledge: test coverage relies on personal understanding rather than system-generated test scenarios.

Opportunities:

  • AI-based code assistant: suggests code patterns, detects integration dependencies, predicts defects early.
  • AI-driven Acceptance Criteria Enhancer: takes AC and expands it into scenario-based, detailed test steps.
  • AI Sprint Risk Predictor: using past sprint data and current story complexity (see the sketch after this list), it:
  • predicts story spillover
  • flags stories needing refinement
  • highlights potential capacity bottlenecks
  • AI Test Case Generator: generates test cases from user stories automatically.
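
A hedged sketch of how the Sprint Risk Predictor's scoring core could work, assuming a few weighted features drawn from ADO history. The fields and weights below are placeholders; a model trained on past sprint data would replace them.

```python
# Illustrative scoring core for an AI Sprint Risk Predictor. The feature
# weights are assumptions for discussion, not calibrated values.

from dataclasses import dataclass


@dataclass
class Story:
    points: int                  # estimated size
    open_questions: int          # unresolved clarifications on the story
    dependency_count: int        # cross-team dependencies
    owner_past_spillover: float  # fraction of the owner's recent stories that spilled


def spillover_risk(story: Story) -> float:
    """Return a 0..1 risk that the story spills into the next sprint."""
    score = (
        0.08 * story.points
        + 0.15 * story.open_questions
        + 0.10 * story.dependency_count
        + 0.40 * story.owner_past_spillover
    )
    return min(score, 1.0)


def flag_for_refinement(stories: list[Story], threshold: float = 0.5) -> list[int]:
    """Indexes of stories whose risk exceeds the threshold."""
    return [i for i, s in enumerate(stories) if spillover_risk(s) >= threshold]


if __name__ == "__main__":
    backlog = [
        Story(points=8, open_questions=3, dependency_count=2, owner_past_spillover=0.5),
        Story(points=2, open_questions=0, dependency_count=0, owner_past_spillover=0.1),
    ]
    print("needs refinement:", flag_for_refinement(backlog))  # -> [0]
```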

Additional:

  • Stakeholders:
  • Product
  • Engineering
  • Scrum Master
  • Tools:
  • Sharepoint / Internal tools
  • Factory
  • Copilot
  • Github
Slide 9: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop and Testing
Slide 10: Build | UAT SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Process Steps:

  • Input:
  • Build Handover: Early UAT begins while QA validation is in progress to capture early business feedback.
  • Feedback Logging: SharePoint portal used for UAT comments; integrated with ADO.
  • Internal Review: Product and Engineering regularly review feedback
  • Output:
  • Early feedback – Issues/ Observations on Business expectations
  • On-time resolution of Issues / observations

Repetitive Tasks:

  • Handover of pre-UAT build after dev code freeze
  • Detailed steps shared repeatedly to capture feedback/observations on deployed scope

Pain Points:

  • UAT observations logged manually in SharePoint
  • Duplicate bugs or repeated observations
  • Business misunderstanding feature behaviour
Slide 11: Build | UAT SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Root Causes:

  • Business does not see a consolidated view: no AI‑based grouping of observations → duplicates multiply.
  • Manual classification effort: Product & Engineering manually categorize items (bug/enhancement/change request).
  • Lack of automated impact analysis: teams manually determine how UAT issues affect timelines or scope.

Opportunities:

  • AI-powered UAT Assistant
  • Reads UAT submissions
  • Removes duplicates
  • Suggests classification (bug/enhancement/change request)
  • Groups similar issues
  • AI Chatbot for Business FAQs
  • Explains feature behavior
  • Reduces repeated queries
  • Helps business validate expected vs actual behavior
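
The deduplication step of the UAT Assistant can be illustrated with standard-library string similarity alone. A minimal sketch, assuming plain-text observations exported from SharePoint and a 0.8 similarity cut-off to be tuned against real UAT data:

```python
# Sketch of duplicate grouping for UAT observations using difflib. Embedding-
# based similarity would be more robust; the 0.8 cut-off is an assumption.

from difflib import SequenceMatcher


def similar(a: str, b: str, cutoff: float = 0.8) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff


def group_observations(observations: list[str]) -> list[list[str]]:
    """Cluster observations so each group holds likely duplicates."""
    groups: list[list[str]] = []
    for obs in observations:
        for group in groups:
            if similar(obs, group[0]):  # compare against the group's first entry
                group.append(obs)
                break
        else:
            groups.append([obs])
    return groups


if __name__ == "__main__":
    raw = [
        "Export to Excel not working on the audit report",
        "Export to excel not working on audit report",
        "Login page logo misaligned",
    ]
    for g in group_observations(raw):
        print(len(g), "x:", g[0])
```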

Additional:

  • Stakeholders:
  • Business
  • Product
  • Engineering – Dev & QA
  • Scrum Master
  • Tools:
  • Sharepoint / Internal tools
Slide 12: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop and Testing
Slide 13: Build | Release SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Repetitive Tasks:

  • Collation of build artifacts for CAB signoff
  • Publishing Release notes for every release
  • Multiple standard communications (internal and external) on deployment, downtime, deployment status, etc.
  • Manual process to ensure compliance with PIA/BIA/QRM
  • QA-to-InfoSec handover for signoff
  • Update of Product status on multiple portals – Product Registry, Empire, Release notes, PdM status updates and Weekly Governance report

Pain Points:

  • CAB documentation preparation time-consuming
  • Missing artifacts require follow-ups
  • Business or QA signoff delays final deployment
  • InfoSec approval process and its steps are manual and time-consuming, with dependency on assigned resources
  • No automated readiness dashboard

Process Steps:

  • Input: Business, Product, Project Manager
  • Documentation Review: CAB reviews UAT signoff, InfoSec approvals, QRM reports, and PT/Load/Hardening reports.
  • Release Approval: CAB authorizes deployment upon complete compliance review.
  • Approval on mandatory artifacts – PIA, QRM report, BIA, Architecture diagram update
  • Output:
  • Governance process
Slide 14: Build | Release SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: columnar

Root Causes:

  • Release readiness info scattered across systems: UAT signoff → SharePoint; QA reports → email or internal tools; InfoSec → email or internal tools; Hardening/PT → engineering artifacts
  • Manual collation of artifacts: teams must gather documentation manually before submitting to CAB.
  • No automated compliance checks: CAB checks become human-driven.

Opportunities:

  • AI Release Readiness Scoring Engine: pulls data from the sources below and generates a readiness score (see the sketch after this list):
  • QA reports
  • UAT signoffs
  • InfoSec gating
  • Performance reports
  • Automated CAB Documentation Compiler: auto-generates a CAB packet containing:
  • UAT signoff summary
  • QA certification
  • Risk summary
  • PT/Load results
  • Known issues list
  • AI-based Compliance Checker
  • Automatic triggering of InfoSec/PIA/QRM compliance checks based on changes in product architecture and build updates
  • Ensures no missing documents before CAB submission.
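
A minimal sketch of how the Release Readiness Scoring Engine could combine gate statuses into a single score. The gate names and weights are assumptions; the collectors that would populate them from SharePoint, ADO, and e-mail are out of scope here.

```python
# Hedged sketch of a release readiness score over compliance/quality gates.
# GATE_WEIGHTS is an illustrative assumption, not an agreed governance model.

GATE_WEIGHTS = {
    "qa_report": 0.25,
    "uat_signoff": 0.25,
    "infosec_approval": 0.30,
    "performance_report": 0.20,
}


def readiness(gates: dict[str, bool]) -> tuple[float, list[str]]:
    """Return (score in 0..1, list of gates still blocking the release)."""
    missing = [g for g in GATE_WEIGHTS if not gates.get(g, False)]
    score = sum(w for g, w in GATE_WEIGHTS.items() if gates.get(g, False))
    return round(score, 2), missing


if __name__ == "__main__":
    gates = {
        "qa_report": True,
        "uat_signoff": True,
        "infosec_approval": False,  # still waiting on the assigned InfoSec resource
        "performance_report": True,
    }
    score, missing = readiness(gates)
    print(f"readiness {score:.0%}, blocked on: {missing or 'nothing'}")
```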

Additional:

  • Stakeholders:
  • Engineering
  • Scrum Master
  • Project Manager
  • Tools:
  • Sharepoint
  • Internal portals for tracking compliance etc
Slide 15: 2.1
Type: generic
  • Templates for ‘Delivery on a Page’
Slide 16: About
Type: generic
  • Template slides
  • Following the work to capture the current process mapping across the PDLC, there is significant value in being able to view E2E delivery on a page.
  • Delivery on a page provides a single view of all processes and complexity across the entire delivery process.
  • The ‘current view’ will also be overlaid against the ‘end view’ for comparison
  • Manual work is needed for this next phase, but it is a simple copy-paste exercise.
Slide 17: Delivery Processes on a Page SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: generic
  • Requirements Gathering
  • Business Overview: Business teams articulate the objective, value proposition, and strategic alignment.
  • Cross-Functional Discovery: Workshops to clarify expectations, identify risks, and understand edge cases.
  • Dependency Assessment: Early identification of UI/UX needs, data model impacts, and POCs.
  • Estimation & Release Planning: Effort sizing
  • Stakeholders: Business, PdM, Eng, BA
  • Product & Integration Manual: Product team documents detailed requirements and integration flows.
  • User Story Creation: User stories and features are created in ADO and assigned during sprint planning.
  • Scope Sizing /Effort Estimation
  • Sprint planning
  • Stakeholders:
  • Business, PdM, BA, Engineering, Scrum Master/ Project Manager
  • Engineering Alignment: Dev and QA leads participate in requirement finalization discussions.
  • Internal Engineering Communication: Leads brief extended teams on upcoming development scope.
  • Daily Cadence: Daily standups and BA–Engineering discussions
  • Stakeholders:
  • Product, Engineering, BA, Scrum Master
  • Build Handover: Early UAT begins while QA validation is in progress to capture early business feedback.
  • Feedback Logging: SharePoint portal used for UAT comments; integrated with ADO.
  • Internal Review: Product and Engineering regularly review feedback
  • Stakeholders:
  • Business, Product, Engineering – Dev & QA, Scrum Master
  • Documentation Review: CAB reviews UAT signoff, InfoSec approvals, QRM reports, and PT/Load/Hardening reports.
  • Release Approval: CAB authorizes deployment upon complete compliance review.
  • Approval on mandatory artifacts – PIA, QRM report, BIA, Architecture diagram update
  • Stakeholders:
  • Engineering, Scrum Master, Project Manager
Slide 18: Delivery Processes on a Page SL/Portfolio: Product – EY.AI for Risk: Internal Audit
Type: generic
  • Repetitive Tasks / Manual Loops / Manual entry
  • Pain Points / Waste in the process
  • Root Cause of Pain Points
  • Opportunities / Ideas (Automation & AI)
  • Manual and repetitive tracking across requirements, dependencies, ADO states, and compliance processes due to lack of automation.
  • Repeated design and scope clarification cycles caused by high‑level or evolving requirements.
  • Frequent manual communication efforts: release notes, deployment updates, CAB preparations, and stakeholder notifications.
  • Manual QA, testing, and review processes including scenario reproduction, log review, and peer review cycles.
  • Redundant development effort such as rewriting code patterns and generating repeated documentation artifacts.
  • High‑level, unclear, or frequently changing requirements leading to repeated clarifications and scope freeze delays.
  • Lack of centralized, structured product documentation causing misalignment across Product, Dev, QA, and Business.
  • Manual and fragmented UAT and QA processes resulting in duplicate bugs, unclear acceptance criteria, and spillovers.
  • Time‑consuming compliance and CAB documentation cycles with missing artifacts and manual follow‑ups.
  • No automated readiness or governance dashboards, causing delays in signoff and low visibility into risks.
  • Insufficient early alignment across Product, Engineering, and Business causing unclear inputs and ambiguous requirements.
  • Absence of automated traceability linking requirements, design, development, and test cases.
  • Lack of clear definition of ready or freeze criteria leading to variable story maturity and misunderstanding.
  • Manual classification, tracking, and impact analysis causing duplication and inefficiency across teams.
  • Release readiness and compliance information scattered across systems requiring manual collation and validation.
  • AI‑driven requirement assistant to validate completeness, detect gaps, and auto‑generate acceptance criteria.
  • Central AI knowledge engine to reduce repeated clarifications and provide contextual Q&A across teams.
  • AI‑based automation for test case generation, scenario expansion, risk prediction, and UAT issue deduplication.
  • Automated release readiness and compliance scoring that integrates QA, UAT, InfoSec, and performance artifacts.
  • Automated CAB and compliance document generation to eliminate manual collation and prevent missing artifacts.
  • Internal tools/ Sharepoint
  • Figma
  • Copilot
  • Factory
  • Github
  • Visio / Draw.io

Phase 1 KO Slides and Templates - Marketing.pptx

Slide 1: Ideation | Requirements Gathering SL/Portfolio: Marketing.AI
Type: columnar

Process Steps:

  • Create the initial prototype for the application using Replit (BPM, PDM)
  • Stakeholder alignment on the requirements (Business)
  • Formation of the Epics, Stories etc (PDM, BA) – Leveraging SLA and Factory.AI
  • Requirement analysis and third party integration (Engg)
  • Support on 3rd party integration (SME)
  • Market analysis and research leveraging the AI Tools.
  • Evaluation of the existing tools that deliver similar functionality and gap analysis.
  • Competitor analysis

Repetitive Tasks:

  • Repetition of the standard backlog of EY standards in every project; this should be knowledge readily available for consumption.

Pain Points:

  • Requirement discussion and backlog creation are separate processes; they should be one.
  • Lack of clarity in priorities/competing priorities.
Slide 2: Ideation | Requirements Gathering SL/Portfolio: Product X
Type: columnar

Root Causes:

  • Sequential hand off
  • Different sources of truth
  • Lack of adequate market analysis

Opportunities:

  • Traditional automation opportunities:
  • Explore ways of improving the market analysis.
  • AI/agentic opportunities:
  • Leverage AI to identify the differentiators

Additional:

  • Business
  • Product Management
  • Architecture/ Engineering
  • QA (indirect, downstream)
  • Most ideation defects are discovered after development starts
  • Waste introduced here amplifies across Build, Test, and Release
  • This stage sets the quality ceiling for the entire PDLC
Slide 3: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop (and Testing)
Slide 4: Build | Plan SL/Portfolio: Marketing.AI
Type: columnar

Process Steps:

  • Input:
  • Replit prototype
  • Features at high level
  • PI Planning with high level T shirt sizing
  • Release Planning
  • Identification of the dependencies
  • Identification of required tech spike
  • Issue and risk Register
  • Output:
  • Detailed project plan
  • Initiation of PIA and BIA
  • High Level Reference Architecture

Repetitive Tasks:

  • PIA and BIA are largely manual
  • Standards at every step have to be added manually, this needs to be automated.

Pain Points:

  • Incomplete or unclear requirements (to a large extent Replit can help with this)
  • Planning delays due to lack of knowledge of external systems like Salesforce and Dynamics, and unavailability of experts.
  • Lack of Sandbox
  • License procurement is a time consuming process
Slide 5: Build | Plan SL/Portfolio: Marketing.AI
Type: columnar

Root Causes:

  • Planning is driven by capacity; rethink it in terms of value and GTM
  • Estimation process is largely manual and unstated requirements are not handled.

Opportunities:

  • Traditional automation opportunities:
  • Estimation
  • Identification of risks and issues
  • AI/agentic opportunities:
  • Accelerate the Tech Spikes and try to conclude during planning

Additional:

  • Engineering Leads
  • Developers
  • TPM / Delivery Managers
  • Leverage Factory to accelerate Tech spike.
Slide 6: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop (and Testing)
Slide 7: Build | Development SL/Portfolio: Marketing.AI
Type: columnar

Process Steps:

  • Input:
  • Sprint backlog committed in ADO
  • User Stories with acceptance criteria (variable quality)
  • Task breakdowns created during planning
  • Dependencies and assumptions (often informal)
  • Output:
  • Open Pull Request in GitHub
  • Code ready for review (quality varies by developer)
  • Sprint demo

Repetitive Tasks:

  • Development env setup
  • For new projects, the starter kit should be prepared.
  • PR review can be automated, as it is currently totally manual
  • CI/CD configuration
  • Preparation of BAD for Infosec.

Pain Points:

  • Improve PR Review
  • QA lags Dev.
  • Lack of optimal automation coverage
  • Formation of the team
Slide 8: Build | Development SL/Portfolio: Marketing.AI
Type: columnar

Root Causes:

  • Inconsistent use of automation
  • Knowledge is embedded in individuals

Opportunities:

  • Traditional automation opportunities:
  • Improve Definition of Ready before development starts
  • Enforce minimum testing expectations per story
  • Reduce reliance on review stage to catch basic issues
  • AI/agentic opportunities:
  • Repetitive code generation and refactoring patterns
  • Leverage the knowledge from the PR Comments
  • Generation of test scripts
  • Embed the security process in sprint
  • Auto suggestion of profiles against required JD

Additional:

  • Engg
  • DevOps
  • Team uses Factory.AI for development
Slide 9: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop Testing
Slide 10: Build | Testing SL/Portfolio: Marketing.AI
Type: columnar

Process Steps:

  • Input:
  • Code deployed in QA
  • Sprint Demo
  • QA performs the testing of the features
  • Output:
  • Defects logged

Repetitive Tasks:

  • Regression
  • Manual execution of similar test cases
  • Time spent on recording and documentation of the issues.

Pain Points:

  • Delay in code being deployed in QA
  • Testing stories in isolation and not looking at the entire flow.
  • QA lags dev.
  • Elaborate QA sign off process
Slide 11: Build | Testing SL/Portfolio: Marketing.AI
Type: columnar

Root Causes:

  • Automation is not yet mature
  • Lack of awareness of the entire flow

Opportunities:

  • Traditional automation opportunities:
  • Automate regression and run it frequently.
  • Test the feature, not the individual stories
  • Simplify bug logging process.
  • AI/agentic opportunities:
  • Leverage tools like Playwright to automate the test case generation and recording results.
  • Developers to run the test cases before handing over to QA
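
As a concrete starting point for the Playwright idea above, a minimal Python sketch of a scripted regression check (requires `pip install playwright` followed by `playwright install`). The URL and page heading are hypothetical placeholders:

```python
# Hedged sketch of an automated regression check with Playwright's sync API.
# The target URL and heading text are hypothetical, not the real application.

from playwright.sync_api import sync_playwright, expect


def test_dashboard_loads() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://marketing-ai.example.internal/dashboard")  # placeholder
        # Fails fast if the landing view regresses
        expect(page.get_by_role("heading", name="Campaign Overview")).to_be_visible()
        browser.close()


if __name__ == "__main__":
    test_dashboard_loads()
```

Checks like this, run on every build, would also let developers execute the suite before handover to QA, as the last bullet suggests.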

Additional:

  • QA Engineers
  • Developers
  • Business Analysts (clarifications)
  • TPM / Delivery Leads
  • DevOps (environment support)
Slide 12: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop Testing
Slide 13: Build | Release SL/Portfolio: Marketing.AI
Type: columnar

Process Steps:

  • Input:
  • QA Validated Build
  • Infosec certified app
  • QRM certified app
  • Prepare release notes
  • Deployment
  • Output:
  • Build is moved to prod

Repetitive Tasks:

  • Release notes
  • Consolidating CAB artefacts
  • Late identification of defects

Pain Points:

  • Waiting: releases blocked by approval queues
  • Over‑processing: repeated testing across environments
  • CAB process is manual
  • Lack of production-equivalent data in lower environments
  • Effective UAT not conducted
Slide 14: Build | Release SL/Portfolio: Marketing.AI
Type: columnar

Root Causes:

  • Release process is approval‑heavy and manual
  • Testing and validation repeated across environments
  • Late discovery of issues
  • Production release constrained by cadence

Opportunities:

  • Traditional automation opportunities:
  • Reduce redundant testing
  • Simplify CAB approval process
  • AI/agentic opportunities:
  • Release checks and build quality check
  • Preparation of production equivalent data

Additional:

  • Developers
  • DevOps
  • Business / Product Owners
  • Delivery / TPM
Slide 15: 2.1
Type: generic
  • Templates for ‘Delivery on a Page’
Slide 16: About
Type: generic
  • Template slides
  • Following the work to capture the current process mapping across the PDLC, there is significant value in being able to view E2E delivery on a page.
  • Delivery on a page provides a single view of all processes and complexity across the entire delivery process.
  • The ‘current view’ will also be overlaid against the ‘end view’ for comparison
  • Manual work is needed for this next phase, but it is a simple copy-paste exercise.
Slide 17: Delivery Processes on a Page Consulting : Marketing.AI
Type: generic
  • Requirements Gathering
  • Define the prototype in Replit
  • Feature / Epic defined in Aha
  • Manual interpretation and elaboration by BA
  • Handoff to delivery with prototype and context
  • Completion of the market analysis, evaluation of different available tools.
  • Identify the third party integration.
  • Complete the PI planning and release planning.
  • Define the MVP
  • Identify the features requiring Tech Spike.
  • Stories reviewed for readiness during planning
  • Identification of dependencies
  • Initiate PIA and BIA
  • Form the reference architecture and identify infra requirements
  • Identify and initiate license requirements
  • Resource requests
  • Developers implement stories based on the sprint planning
  • Developers use GitHub Copilot and now Factory.AI
  • PRs created manually in GitHub
  • Automated unit test cases are completed
  • PR reviews
  • Unit testing and hand over to QA
  • Sprint demos
  • Initiation of Infosec and RAI
  • Prepare BAD
  • Auto‑deploy to Dev‑Int
  • Manual integration & regression testing by QA
  • Defects logged in ADO
  • Fix → redeploy → retest loops
  • QA lags development by a few days.
  • Performance testing
  • Completion of all security, RAI
  • Initiation of the UAT
  • Multiple approval gates
  • Business sign‑off in UAT
  • CAB Artefacts preparation
  • Roll back plan identification
  • Perform dry run for complex deployment
  • Inform business on the downtime.
  • Production deploy
Slide 18: Delivery Processes on a Page Consulting : Marketing.AI
Type: columnar

Repetitive Tasks:

  • Aha → ADO manual translation.
  • Requirements rewritten multiple times
  • Manual estimation and task breakdown every sprint
  • Manual test execution and regression execution
  • Manual environment promotions and approvals
  • Testing and validation repeated across environments

Pain Points:

  • Waiting on clarifications, reviews, and approvals
  • Context loss across BA → Eng → QA → Business
  • Late discovery of defects
  • Rework and repeated validation
  • Inventory of “done but unreleased” work
  • Process delays associated with CAB approval
  • Lack of production like data
  • Challenges in generation of synthetic data
  • Testing and validation repeated across environments
  • License procurement

Root Causes:

  • Fragmented tooling and sources of truth
  • Sequential role‑based handoffs
  • Late technical and business validation
  • Heavy reliance on individuals (BA, QA, reviewers)
  • Control‑driven release model compensating for late risk discovery

Opportunities:

  • Reduce duplicate documentation and re‑entry
  • Improve confidence earlier in the lifecycle
  • Set up agent-based workflow development with human intervention
  • Automation in QA
  • Test case and execution
  • Approval
  • CAB process automation based on different requirements
  • Central repo of InfoSec-certified, RAI-approved code cartridges/APIs
  • Performance testing to be shifted left leveraging Agents
  • Reduce reliance on late‑stage testing and approvals
  • Operate cost monitoring
  • Available profiles match against JD

Additional:

  • Future teams should have a higher composition of domain knowledge roles.
  • Focus on the costing model

Phase 1 KO Slides and Templates- EYWP.pptx

Slide 1: Ideation | Requirements Gathering Consulting: EYWP
Type: columnar

Process Steps:

  • Input:
  • Business / Client requirement raised outside delivery tooling (end users)
  • Feature / Epic definition created upstream (Aha)
  • Output:
  • User Stories created in ADO
  • Acceptance criteria documented (variable quality)
  • Assumptions and constraints scattered across tools
  • Limited traceability back to original problem statement

Repetitive Tasks:

  • Aha → ADO transition
  • PPT → ADO manual re‑entry
  • BPM ↔ Stakeholders multiple clarification loops
  • BPM ↔ Dev post‑handoff clarifications
  • Re‑work when downstream teams reinterpret intent

Pain Points:

  • Waiting: stories blocked pending clarification from BA/BPM
  • Over‑processing: same requirement rewritten multiple times
  • Context loss: original intent diluted across handoffs
  • Non‑utilized talent: engineers involved late, not during ideation
  • Defects (upstream): unclear requirements discovered during dev/test
  • Transportation waste: information moved across tools instead of flowing
Slide 2: Ideation | Requirements Gathering Consulting : EYWP
Type: columnar

Root Causes:

  • Fragmented source of truth
  • Manual translation between systems
  • Sequential, role‑based handoffs
  • Late technical validation
  • Weak feedback loop from delivery

Opportunities:

  • Traditional automation opportunities:
  • Reduce duplicate documentation between Aha, Confluence, and ADO
  • Enforce consistent requirement structure before story creation
  • Earlier involvement of engineering during requirement shaping
  • Standardize acceptance‑criteria quality gates before handoff
  • AI/agentic opportunities:
  • High volume of repetitive requirement rewriting
  • Pattern‑based clarification questions recurring across stories
  • Manual consistency checks between Aha → ADO artifacts
  • Opportunity to reduce BPM cognitive load caused by re‑entry and re‑interpretation
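
One way to attack the Aha → ADO re-entry and consistency checks named above is a small sync script. A hedged sketch using the Azure DevOps work-item REST API; the organisation, project, field mapping, and ADO_PAT variable are placeholder assumptions:

```python
# Sketch: create an ADO User Story from an Aha! feature payload to remove the
# manual Aha → ADO translation step. Org/project and field mapping are
# placeholders; error handling and rate limiting omitted for brevity.

import os
import requests

ADO_ORG = "my-org"           # placeholder
ADO_PROJECT = "EYWP"         # placeholder
PAT = os.environ["ADO_PAT"]  # personal access token, injected via env


def create_user_story(aha_feature: dict) -> int:
    """Mirror an Aha! feature as an ADO User Story; return the new work-item id."""
    url = (
        f"https://dev.azure.com/{ADO_ORG}/{ADO_PROJECT}"
        "/_apis/wit/workitems/$User%20Story?api-version=7.0"
    )
    patch = [
        {"op": "add", "path": "/fields/System.Title", "value": aha_feature["name"]},
        {"op": "add", "path": "/fields/System.Description",
         "value": aha_feature.get("description", "")},
        # Tag with the Aha! reference to keep traceability to the original intent
        {"op": "add", "path": "/fields/System.Tags",
         "value": f"aha:{aha_feature['reference_num']}"},
    ]
    resp = requests.post(
        url,
        json=patch,
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),  # ADO accepts a PAT as the basic-auth password
    )
    resp.raise_for_status()
    return resp.json()["id"]
```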

Additional:

  • Business / Client
  • Product Management
  • Architecture/ Engineering
  • QA (indirect, downstream)
  • Most ideation defects are discovered after development starts
  • Waste introduced here amplifies across Build, Test, and Release
  • This stage sets the quality ceiling for the entire PDLC
Slide 3: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop (and Testing)
Slide 4: Build | Plan Consulting: EYWP
Type: columnar

Process Steps:

  • Input:
  • Approved User Stories in ADO (from Ideation / Requirements Gathering)
  • Feature / Epic context (often incomplete or stale)
  • Sprint / PI planning timelines
  • Capacity assumptions (team‑level)
  • Output:
  • Sprint backlog committed in ADO
  • Tasks assigned to developers
  • Dependencies tracked informally
  • Limited traceability back to original intent

Repetitive Tasks:

  • BPM ↔ ENG clarification during planning
  • ENG ↔ BPM dependency discussions repeated mid‑sprint
  • Re‑estimation when hidden work surfaces
  • Carry‑over stories due to mis‑sizing

Pain Points:

  • Waiting: planning delayed due to unclear stories
  • Over‑processing: re‑breaking down poorly shaped stories
  • Inventory: partially planned work carried sprint‑to‑sprint
  • Motion: constant context switching during sprint start
  • Defects (upstream): planning assumptions invalidated during dev
Slide 5: Build | Plan SL/Portfolio: Product X
Type: columnar

Root Causes:

  • Stories enter planning without being truly “ready”
  • Planning is capacity‑driven, not outcome‑driven
  • Manual estimation with low feedback learning
  • Dependencies identified too late
  • Single sprint is treated as planning horizon

Opportunities:

  • Traditional automation opportunities:
  • Enforce stronger “Definition of Ready” before planning
  • Standardize estimation inputs across teams
  • Make dependency identification explicit and visible in tooling
  • AI/agentic opportunities:
  • Repetitive task breakdown patterns
  • Repeated estimation of similar work items
  • Manual dependency detection across teams

Additional:

  • Engineering Leads
  • Developers
  • TPM / Delivery Managers
  • QA (indirect, downstream)
Slide 6: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop (and Testing)
Slide 7: Build | Development Consulting:EYWP
Type: columnar

Process Steps:

  • Input:
  • Sprint backlog committed in ADO
  • User Stories with acceptance criteria (variable quality)
  • Task breakdowns created during planning
  • Dependencies and assumptions (often informal)
  • Output:
  • Open Pull Request in GitHub
  • Code ready for review (quality varies by developer)
  • Tests partially present or missing

Repetitive Tasks:

  • Developer picks up assigned story (ENGG)
  • Local development & implementation (ENGG)
  • Local testing & fixes (ENGG)
  • Pull Request creation (ENGG)
  • PR clarification loop (ENGG ↔ Reviewer / BA)
  • ENG ↔ BA clarifications during development
  • ENG ↔ Reviewer discussions on intent vs implementation
  • Re‑work when acceptance criteria are interpreted differently

Pain Points:

  • Waiting: PRs stalled awaiting review or clarification
  • Defects: logic issues found during review or later testing
  • Over‑processing: repeated fixes due to unclear requirements
  • Motion: developers switching between code, chat, ADO, PRs
  • Non‑utilized talent: engineers resolving requirement ambiguity instead of building
Slide 8: Build | Development Consulting:EYWP
Type: columnar

Root Causes:

  • Development starts with incomplete context
  • PR‑centric workflow concentrates risk
  • Inconsistent use of automation
  • Knowledge is embedded in individuals
  • Late discovery of defects

Opportunities:

  • Traditional automation opportunities:
  • Improve Definition of Ready before development starts
  • Enforce minimum testing expectations per story
  • Standardize PR templates to capture intent and context
  • Reduce reliance on review stage to catch basic issues
  • AI/agentic opportunities:
  • Repetitive code generation and refactoring patterns
  • Recurrent PR review comments across services
  • Manual quality and consistency checks during review
  • Opportunity to shift issue detection earlier than PR review
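
To make "shift issue detection earlier than PR review" concrete, a sketch of a local pre-PR gate that runs the checks reviewers most often repeat. The command list is an assumption to swap for the repo's actual toolchain:

```python
# Sketch of a pre-PR gate: run lint and tests locally and block PR creation
# on failure, so basic issues never reach the review stage. Commands are
# illustrative; substitute the repository's real linters and test runner.

import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],           # lint: catches the recurring style comments
    ["pytest", "-q", "--maxfail=1"],  # minimum testing expectation per story
]


def run_checks() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print(f"blocked: fix `{cmd[0]}` findings before opening the PR")
            return 1
    print("all pre-PR checks passed")
    return 0


if __name__ == "__main__":
    sys.exit(run_checks())
```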

Additional:

  • Developers
  • PR Reviewers (Senior Devs / Leads)
  • Business Analysts (clarifications)
  • QA (downstream)
  • DevOps (downstream)
Slide 9: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop Testing
Slide 10: Build | Testing Consulting EYWP
Type: columnar

Process Steps:

  • Input:
  • Merged code from GitHub (post‑PR approval)
  • Deployed build in Dev‑Int
  • Sprint scope as implemented (often with deviations from original intent)
  • Limited test documentation from development phase
  • Output:
  • Tested build in Dev‑Int
  • Defects logged and partially resolved
  • Test coverage varies by feature and sprint

Repetitive Tasks:

  • Manual execution of similar test cases every sprint
  • Manual regression testing across environments
  • Manual verification after each fix

Pain Points:

  • Waiting: QA blocked waiting for fixes
  • Defects: issues discovered late in lifecycle
  • Over‑processing: repeated regression testing
  • Motion: switching between ADO, test tools, builds, chat
  • Extra‑processing: re‑testing unchanged areas
Slide 11: Build | Testing Consulting : EYWP
Type: columnar

Root Causes:

  • Testing is primarily manual
  • Defects discovered late
  • Weak linkage between requirements and tests
  • High fix‑verification churn
  • Environment‑driven constraints

Opportunities:

  • Traditional automation opportunities:
  • Improve alignment between acceptance criteria and test cases
  • Reduce manual regression through better test prioritization
  • Shift defect detection earlier in the lifecycle
  • Standardize defect severity and triage practices
  • AI/agentic opportunities:
  • Repetitive test execution patterns
  • Predictable regression scenarios across sprints
  • Manual defect classification and triage effort
  • Opportunity to reduce late‑stage defect discovery

Additional:

  • QA Engineers
  • Developers
  • Business Analysts (clarifications)
  • TPM / Delivery Leads
  • DevOps (environment support)
Slide 12: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop Testing
Slide 13: Build | Release Consulting : EYWP
Type: columnar

Process Steps:

  • Input:
  • QA‑validated build from UAT
  • Open defects resolved or deferred
  • Release notes (manually prepared)
  • Environment availability and release window constraints
  • Output:
  • Code deployed to Production
  • Business sign‑off recorded
  • Release completed within sprint boundary

Repetitive Tasks:

  • QA ↔ Dev for late defect fixes
  • DevOps ↔ QA for environment readiness
  • Business ↔ QA for UAT clarifications
  • Re‑promotion cycles across environments
  • Manual approval requests at each environment
  • Manual coordination of release timing
  • Manual release notes and status updates

Pain Points:

  • Waiting: releases blocked by approval queues
  • Over‑processing: repeated testing across environments
  • Motion: heavy coordination overhead
  • Inventory: completed work waiting for release window
  • Defects: late issues discovered during UAT or Stg
Slide 14: Build | Release Consulting:EYWP
Type: columnar

Root Causes:

  • Release process is approval‑heavy and manual
  • Testing and validation repeated across environments
  • Late discovery of issues
  • Production release constrained by cadence
  • High coordination overhead

Opportunities:

  • Traditional automation opportunities:
  • Reduce redundant testing between environments
  • Improve confidence carry‑over from earlier stages
  • Simplify approval flows where risk is low
  • Reduce dependency on fixed release windows
  • AI/agentic opportunities:
  • Repetitive promotion and validation patterns
  • Manual release readiness checks
  • Manual defect risk assessment during release decisions
  • Opportunity to reduce late‑stage surprises

Additional:

  • Developers
  • DevOps
  • Business / Product Owners
  • Delivery / TPM
  • Release stage amplifies all upstream inefficiencies
  • Conservative release model is a response to late risk discovery
  • Most “slow delivery” perception is visible here, but created earlier
Slide 15: 2.1
Type: generic
  • Templates for ‘Delivery on a Page’
Slide 16: About
Type: generic
  • Template slides
  • Following the work to capture the current process mapping across the PDLC, there is significant value in being able to view E2E delivery on a page.
  • Delivery on a page provides a single view of all processes and complexity across the entire delivery process.
  • The ‘current view’ will also be overlaid against the ‘end view’ for comparison
  • Manual work is needed for this next phase, but it is a simple copy-paste exercise.
Slide 17: Delivery Processes on a Page Consulting : EYWP
Type: generic
  • Requirements Gathering
  • Feature / Epic defined in Aha
  • Manual interpretation and elaboration by BA
  • Requirements rewritten across PPT / Confluence → ADO
  • Handoff to delivery with partial context
  • Stories reviewed for readiness during planning
  • Sprint scope negotiated based on capacity
  • Manual task breakdown and estimation in ADO
  • Dependencies identified late or verbally
  • Developers implement stories with partial context
  • Code written with limited AI assist (~25%)
  • PRs created partially through droid in GitHub
  • Requirement gaps surface during PR review
  • Auto‑deploy to Dev‑Int
  • Manual integration & regression testing by QA
  • Defects logged in ADO
  • Fix → redeploy → retest loops
  • Human in the loop for the review and approval before code release across environments:
  • QA → Perf/Stg → UAT → Prod
  • Multiple approval gates
  • Business sign‑off in UAT
  • Production deploy once per sprint
Slide 18: Delivery Processes on a Page Consulting : EYWP
Type: columnar

Repetitive Tasks:

  • Aha → ADO manual translation
  • Requirements rewritten multiple times
  • Manual estimation and task breakdown every sprint
  • Manual test execution and regression
  • Manual environment promotions and approvals

Pain Points:

  • Waiting on clarifications, reviews, and approvals
  • Context loss across BA → Eng → QA → Business
  • Late discovery of defects
  • Rework and repeated validation
  • Inventory of “done but unreleased” work

Root Causes:

  • Fragmented tooling and sources of truth
  • Sequential role‑based handoffs
  • Late technical and business validation
  • Heavy reliance on individuals (BA, QA, reviewers)
  • Control‑driven release model compensating for late risk discovery

Opportunities:

  • Reduce duplicate documentation and re‑entry
  • Improve confidence earlier in the lifecycle
  • Reduce reliance on late‑stage testing and approvals
  • Smooth flow between stages instead of gated transitions

Additional:

  • Most delays visible in Release originate upstream
  • QA and Release absorb quality gaps created earlier
  • Current PDLC optimizes for control over flow

EY Parthenon

12
Pain Points
13
Root Causes
41
Opportunities
16
Repetitive Tasks

Products in Scope: Competitive Edge

Pain Points (12)
  • Unclear or unstable requirements → rework loops
  • Architecture response time → waiting time bottleneck – amplifies loop
  • Missing acceptance criteria → creates rework → defects & slow QA
  • UX late involvement → late cycle churn
  • Manual creation of Epics/Features/Stories → administrative overhead – increased context loss
  • No unified intake → context lost between handoffs
  • BA + Dev + QA ask repeated clarification → wasteful loops
  • Story/feature cycle time longer than expected
  • Bug density higher due to upfront ambiguity
  • Primary constraint metric: number of rework loops per story.
  • Supporting indicators: feature churn and lead time.
  • Baseline captured before AI support introduced.
Root Causes (13)
  • Business alignment gaps → priorities shift, unclear problem statements
  • BA and Scrum teams not empowered to push back → incomplete requirements progress downstream
  • Architecture bandwidth constraint → availability and specialist skill constraint
  • Weak Definition of Ready understanding → inconsistent story quality
  • Lack of business context inside the team → unclear AC, weak tests
  • QA capability gap → poor test case writing, missing regression coverage
  • Tools missing early in flow (e.g., InfoSec tests not in Dev pipeline)
  • Manual, multi‑system working (Email/Wiki/Sharepoint → Figma → Aha! → ADO → Git)
  • No structured intake → repeated clarification loops and context loss
  • RESULT OF ROOT CAUSE
  • Root causes manifest as repeated rework loops.
  • Loops hide real progress and inflate effort.
  • Fixing loops reduces multiple symptoms at once.
Opportunities & AI Ideas (41)
  • AI‑Native Automation Opportunities
  • Auto‑generate problem statements, requirements drafts, epics, features, stories, and acceptance criteria from business input
  • Auto‑summarise business conversations/emails into structured intake forms
  • AI‑assisted UX wireframe extraction from requirements
  • AI agent to recommend architectural patterns for early feasibility
  • Agent‑driven Definition of Ready checks before stories enter planning
  • Automated test case generation from AC + UX flows
  • AI assistant for CAB readiness (pre-filling forms, attaching evidence)
  • Process & Workflow Improvements
  • Introduce a structured intake template for Business → BA
  • Integrate Figma → ADO auto‑sync for UX artefacts
  • Automated “requirements completeness” scoring
  • Agent‑based orchestration to remove repetitive handoffs
  • Real-time story/feature validation for business clarity
  • Structured intake summarization.
  • Requirement and acceptance criteria generation.
  • Definition of Ready validation.
  • Early feasibility guidance.
  • Standard intake format.
  • Single source of requirement truth.
  • Explicit readiness gates.
  • Quick Wins
  • AC & Story Generator: derive acceptance criteria and stories from business input/UX → ADO
  • Requirements Intake Summarizer: turn emails/meetings into structured intake + epics/features
  • DOR Gatekeeper Agent: automated readiness checks before planning (fields, AC quality, links); a sketch of these checks follows this list
  • Strategic Bets
  • Architecture Pattern Advisor: propose patterns/constraints early;
  • Compliance/CAB Pre‑Assembler: prefill forms, attach scans/tests, build evidence pack
  • Risk/Impact Forecaster: predict requirement volatility, downstream bug risk, cycle‑time impact
  • GUIDING PRINCIPLES
  • AI operates between PDLC gates.
  • Humans approve at intake, readiness, and planning.
  • AI prepares, validates, and flags.
  • Humans decide and commit.
  • Incremental Improvement
  • ADO Admin Automation: bulk create/update epics/features/stories; labels, links, templates
  • Meeting → Action Extractor: auto‑create follow‑ups/tasks, tag owners, set due dates
  • Evaluate Later
  • Auto‑Negotiation Agent with Business: fully autonomous clarification with stakeholders
  • End‑to‑End Requirements Simulation: synthetic user flows & cost/benefit simulation at intake
  • UX→ADO Linker: sync Figma frames to stories (IDs, flows, assets) to reduce context loss
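
A minimal sketch of the deterministic checks behind the DOR Gatekeeper Agent listed under Quick Wins above. Field names loosely mirror ADO conventions, and the rules themselves are assumptions for the pilot team to agree:

```python
# Hedged sketch of Definition-of-Ready gate checks before a story enters
# planning: required fields, a cheap AC-quality proxy, and link checks.
# The "has_ui_change"/"figma_link" fields are hypothetical.

REQUIRED_FIELDS = ["title", "description", "acceptance_criteria", "parent_feature"]


def dor_violations(story: dict) -> list[str]:
    """Return Definition-of-Ready violations; an empty list means ready."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not story.get(f)]
    ac = story.get("acceptance_criteria", "")
    # Cheap quality proxy: scenario-based AC uses Given/When/Then phrasing
    if ac and "given" not in ac.lower():
        problems.append("acceptance criteria not scenario-based (no Given/When/Then)")
    if story.get("has_ui_change") and not story.get("figma_link"):
        problems.append("UI change without a linked Figma frame")
    return problems


if __name__ == "__main__":
    story = {
        "title": "Export competitor matrix",
        "description": "Allow export of the matrix to Excel",
        "acceptance_criteria": "Export works",  # too thin, gets flagged
        "parent_feature": "CE-142",
        "has_ui_change": True,
    }
    for p in dor_violations(story):
        print("BLOCKED:", p)
```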
Repetitive Tasks (16)
  • BA ↔ Business for clarity (multiple cycles)
  • BA ↔ Architecture for feasibility (slow cycles)
  • BA ↔ UX for aligning flows and screens
  • Dev ↔ BA for missing acceptance criteria
  • Dev ↔ QA for incomplete test details
  • Manual updates of Epics, Features, Stories in ADO
  • Manual syncing between Figma ↔ ADO ↔ BA notes
  • Manual CAB evidence preparation downstream
  • User Story Lead Time - 34 days
  • Percentage of Feature Churn – 54%
  • User Story Cycle Time – 7 days
  • Story Points Planned vs. Done – 340%
  • Clarification loops drive time loss.
  • Manual updates drive inconsistency.
  • Late validation drives churn.
  • Each loop increases lead time and cost.
Process Steps (35)
  • Step 1 — Business Idea / Problem Surfaces (BPM)
  • Input: informal business request, problem statement, email, conversation
  • Output: initial idea statement
  • Tools: Email, Teams meetings, ad‑hoc docs, Wiki
  • Participants: BPM, BA, TPdM
  • Issues: no structured intake, variable clarity (see the intake-template sketch after this list)
  • Step 2 — BA Intake & Clarification (BA)
  • Input: problem statement from business
  • Output: clarified requirement, draft scope
  • Tools: Email, Notes, Teams, Aha!, ADO
  • Participants: BA, BPM
  • Issues: waiting time for business clarity; repeated loops
  • Step 3 — BA + UX Early Shaping (BA)
  • Input: clarified requirement
  • Output: early UX sketches / Figma concepts
  • Tools: Figma
  • Participants: BA, UX
  • Issues: UX not always involved early → late rework
  • Step 4 — Architecture Review & Early Feasibility (Architecture)
  • Input: draft requirement + early UX
  • Output: initial architectural guidance / constraints
  • Tools: Architecture templates, Teams
  • Participants: Architecture, TPdM, BPM, Engineering leads
  • Issues: architecture overloaded → slow response cycles
  • Step 5 — Epic / Feature Definition (BA)
  • Input: requirement + architecture notes + UX sketches
  • Output: ADO Epic / Feature created
  • Tools: ADO Boards
  • Participants: BA, Engineering lead, architecture, TPdM, BPM
  • Issues: incomplete AC; inconsistent DOR
  • Step 6 — User Story Breakdown (Architecture / Engineering)
  • Input: Epic/Feature
  • Output: Stories + Acceptance Criteria
  • Participants: BA + Dev + Architecture
  • Issues: repeated clarification → rework; missing AC
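
Since Step 1 and Step 2 hinge on unstructured intake, here is a sketch of what the proposed Business → BA intake template could look like as a single record. The field names are a starting assumption to workshop with BPMs:

```python
# Sketch of a structured intake record replacing the e-mail/Teams/Wiki scatter
# in Step 1. Field names are assumptions; validation is deliberately minimal.

from dataclasses import dataclass, field


@dataclass
class IntakeRequest:
    requestor: str                 # BPM raising the idea
    problem_statement: str         # what hurts today, in business terms
    desired_outcome: str           # how success will be measured
    affected_users: str            # personas / user groups impacted
    known_constraints: list[str] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Gate before BA intake: every narrative field must be non-empty."""
        return all([self.requestor, self.problem_statement,
                    self.desired_outcome, self.affected_users])


if __name__ == "__main__":
    req = IntakeRequest(
        requestor="BPM - Competitive Edge",
        problem_statement="Analysts rebuild competitor matrices by hand each quarter",
        desired_outcome="Matrix refresh turnaround under one day",
        affected_users="Strategy analysts",
    )
    print("ready for BA intake:", req.is_complete())
```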
Additional Notes (44)
  • AI Does Not Decrease Workload—It Often Increases It
  • While AI is often touted to lessen employee workloads, allowing them to concentrate on more valuable and engaging activities, recent research suggests that AI tools actually tend to amplify work demands. (source: Harvard Business Review: AI Doesn’t Reduce Work—It Intensifies It, by Aruna Ranganathan)
  • HBR’s study showed that this creates creep, cognitive fatigue, burnout, and weakened decision-making.
  • My vision and goal for this pilot is to
  • Improve our employee experience and well being – in line with our values.
  • Implement an AI practice that establishes PDLC guidelines and standards, incorporates deliberate pauses, sequences tasks effectively, and strengthens human oversight.
  • Competitive Edge builds on strong foundations, ADO workflows and CT standard methods.
  • The primary opportunity sits upstream in intake and requirements.
  • Manual friction and unclear inputs drive repeated rework loops.
  • These loops create predictable delay, churn, and downstream quality issues.
  • The pilot targets intake and requirements as a narrow, deep experiment.
  • Primary constraint addressed, repeated rework loops caused by unclear inputs.
  • Proof of value focus, reduce rework loops to lower feature churn and shorten lead time.
  • AI supports analysis, generation, and validation across intake and refinement.
  • Humans retain decision rights and accountability at all approval points.
  • Scope intentionally excludes development and code automation.
  • Product: Competitive Edge
  • Processes Covered:
  • Requirements intake → Refinement → Planning → Development → Code Review → QA → UAT → CAB → Prod
  • Phase 1 focus narrowed to ideation and requirements
  • Focus Area: Requirements → Deployment, beginning with intake & refinement
  • Feature Churn: 54% → strong evidence that upstream clarity is the constraint
  • Focus area selected due to highest concentration of rework loops.
  • Objective is to stabilise inputs before downstream work begins.
  • Phase 1 targets the point where delay, rework, and quality loss originate.
  • Front end stabilisation creates the highest value impact.
  • Clean inputs establish a foundation for later AI enabled PDLC scale.
  • This pilot is a targeted experiment on requirements and intake.
  • AI supports analysis, generation, and validation.
  • Humans own decisions, prioritisation, and approvals.
  • No autonomous decision making.
  • No AI generated production code.
  • No PDLC redesign beyond intake and requirements.
  • AI reduces manual effort and ambiguity.
  • Decision authority remains unchanged.
  • Accountability stays with existing roles.
  • Business / Product Owner: Source of requirements; needs structured intake
  • Business Analysts: Primary owners of requirements; biggest beneficiaries of automation
  • Architecture: Overloaded; critical to early feasibility; candidates for AI pattern assistance
  • UX: Needs earlier involvement; Figma → ADO integration recommended
  • Developers: Need complete AC and clarity; benefit from automated AC + test generation
  • QA: Major uplift potential via AI test suite generation
  • Scrum Master: Gatekeeper for DOR and planning readiness
  • CT Governance / CAB: Target for downstream automation
All Slide Details (Raw)

PDLC Transformation.pptx

Slide 1: Vision for Pilot
Type: narrative

Executive

  • AI Does Not Decrease Workload—It Often Increases It
  • While AI is often touted to lessen employee workloads, allowing them to concentrate on more valuable and engaging activities, recent research suggests that AI tools actually tend to amplify work demands. (source: Harvard Business Review: AI Doesn’t Reduce Work—It Intensifies It, by Aruna Ranganathan)
  • HBR’s study showed that this creates creep, cognitive fatigue, burnout, and weakened decision-making.
  • My vision and goal for this pilot is to
  • Improve our employee experience and well being – in line with our values.
  • Implement an AI practice that establishes PDLC guidelines and standards, incorporates deliberate pauses, sequences tasks effectively, and strengthens human oversight.
Slide 2: Executive Summary
Type: narrative

Executive

  • Competitive Edge builds on strong foundations, ADO workflows and CT standard methods.
  • The primary opportunity sits upstream in intake and requirements.
  • Manual friction and unclear inputs drive repeated rework loops.
  • These loops create predictable delay, churn, and downstream quality issues.
  • The pilot targets intake and requirements as a narrow, deep experiment.
  • Primary constraint addressed, repeated rework loops caused by unclear inputs.
  • Proof of value focus, reduce rework loops to lower feature churn and shorten lead time.
  • AI supports analysis, generation, and validation across intake and refinement.
  • Humans retain decision rights and accountability at all approval points.
  • Scope intentionally excludes development and code automation.
Slide 3: Scope of Phase 1
Type: narrative

Executive

  • Product: Competitive Edge
  • Processes Covered:
  • Requirements intake → Refinement → Planning → Development → Code Review → QA → UAT → CAB → Prod
  • Phase 1 focus narrowed to ideation and requirements
  • Focus Area: Requirements → Deployment, beginning with intake & refinement
  • Feature Churn: 54% → strong evidence that upstream clarity is the constraint
  • Focus area selected due to highest concentration of rework loops.
  • Objective is to stabilise inputs before downstream work begins.
  • Phase 1 targets the point where delay, rework, and quality loss originate.
  • Front end stabilisation creates the highest value impact.
  • Clean inputs establish a foundation for later AI enabled PDLC scale.
Slide 4: What Pilot is NOT
Type: narrative

Executive

  • This pilot is a targeted experiment on requirements and intake.
  • AI supports analysis, generation, and validation.
  • Humans own decisions, prioritisation, and approvals.
  • No autonomous decision making.
  • No AI generated production code.
  • No PDLC redesign beyond intake and requirements.
Slide 5: Product Development Lifecycle (PDLC)
Type: generic
  • Rework loops originate upstream and propagate downstream
  • USER PERSONAS
  • Business and Product remain accountable for problem definition.
  • BA owns requirement quality.
  • Architecture owns feasibility decisions.
  • QA owns quality gates.
  • AI assists each role, does not replace ownership.
Slide 6: Ideation | Requirements Gathering SL/Portfolio: Product Competitive Edge
Type: generic
  • Process Steps Include inputs/outputs + systems/tools
  • Repetitive Tasks / Manual Loops / Manual entry
  • Pain Points / Waste in the process
  • Root Cause of Pain Points
  • Opportunities / Ideas (Automation & AI)
  • Stakeholders and Additional Notes
Slide 7: Process Steps
Type: narrative

Process Steps

  • Step 1 — Business Idea / Problem Surfaces (BPM)
  • Input: informal business request, problem statement, email, conversation
  • Output: initial idea statement
  • Tools: Email, Teams meetings, ad‑hoc docs, Wiki
  • Participants: BPM, BA, TPdM
  • Issues: no structured intake, variable clarity
  • Step 2 — BA Intake & Clarification (BA)
  • Input: problem statement from business
  • Output: clarified requirement, draft scope
  • Tools: Email, Notes, Teams, Aha!, ADO
  • Participants: BA, BPM
  • Issues: waiting time for business clarity; repeated loops
  • Step 3 — BA + UX Early Shaping (BA)
  • Input: clarified requirement
  • Output: early UX sketches / Figma concepts
  • Tools: Figma
  • Participants: BA, UX
  • Issues: UX not always involved early → late rework
  • Step 4 — Architecture Review & Early Feasibility (Architecture)
  • Input: draft requirement + early UX
  • Output: initial architectural guidance / constraints
  • Tools: Architecture templates, Teams
  • Participants: Architecture, TPdM, BPM, Engineering leads
  • Issues: architecture overloaded → slow response cycles
  • Step 5 — Epic / Feature Definition (BA)
  • Input: requirement + architecture notes + UX sketches
  • Output: ADO Epic / Feature created
  • Tools: ADO Boards
  • Participants: BA, Engineering lead, architecture, TPdM, BPM
  • Issues: incomplete AC; inconsistent DOR
  • Step 6 — User Story Breakdown (Architecture / Engineering)
  • Input: Epic/Feature
  • Output: Stories + Acceptance Criteria
  • Tools: ADO Boards
  • Participants: BA + Dev + Architecture
  • Issues: repeated clarification → rework; missing AC
Slide 8: Repetitive Tasks / Manual Loops / Manual Handoffs
Type: narrative

Repetitive Tasks

  • BA ↔ Business for clarity (multiple cycles)
  • BA ↔ Architecture for feasibility (slow cycles)
  • BA ↔ UX for aligning flows and screens
  • Dev ↔ BA for missing acceptance criteria
  • Dev ↔ QA for incomplete test details
  • Manual updates of Epics, Features, Stories in ADO
  • Manual syncing between Figma ↔ ADO ↔ BA notes
  • Manual CAB evidence preparation downstream
  • User Story Lead Time - 34 days
  • Percentage of Feature Churn – 54%
  • User Story Cycle Time – 7 days
  • Story Points Planned vs. Done – 340%
  • Clarification loops drive time loss.
  • Manual updates drive inconsistency.
  • Late validation drives churn.
  • Each loop increases lead time and cost.
Slide 9: Pain Points / Waste / Bottlenecks
Type: narrative

Pain Points

  • Unclear or unstable requirements → rework loops
  • Architecture response time → waiting time bottleneck – amplifies loop
  • Missing acceptance criteria → creates rework → defects & slow QA
  • UX late involvement → late cycle churn
  • Manual creation of Epics/Features/Stories → administrative overhead – increased context loss
  • No unified intake → context lost between handoffs
  • BA + Dev + QA ask repeated clarification → wasteful loops
  • Story/feature cycle time longer than expected
  • Bug density higher due to upfront ambiguity
Slide 10: Root Cause of Pain Points
Type: narrative

Root Causes

  • Business alignment gaps → priorities shift, unclear problem statements
  • BA and Scrum teams not empowered to push back → incomplete requirements progress downstream
  • Architecture bandwidth constraint → availability and specialist skill constraint
  • Weak Definition of Ready understanding → inconsistent story quality
  • Lack of business context inside the team → unclear AC, weak tests
  • QA capability gap → poor test case writing, missing regression coverage
  • Tools missing early in flow (e.g., InfoSec tests not in Dev pipeline)
  • Manual, multi‑system working (Email/Wiki/Sharepoint → Figma → Aha! → ADO → Git)
  • No structured intake → repeated clarification loops and context loss
  • RESULT OF ROOT CAUSES
  • Root causes manifest as repeated rework loops.
  • Loops hide real progress and inflate effort.
  • Fixing loops reduces multiple symptoms at once.
Slide 11: Opportunities / Ideas (Automation & AI)
Type: narrative

Opportunities

  • AI‑Native Automation Opportunities
  • Auto‑generate problem statements, requirements drafts, epics, features, stories, and acceptance criteria from business input
  • Auto‑summarise business conversations/emails into structured intake forms
  • AI‑assisted UX wireframe extraction from requirements
  • AI agent to recommend architectural patterns for early feasibility
  • Agent‑driven Definition of Ready checks before stories enter planning
  • Automated test case generation from AC + UX flows
  • AI assistant for CAB readiness (pre-filling forms, attaching evidence)
  • Process & Workflow Improvements
  • Introduce a structured intake template for Business → BA
  • Integrate Figma → ADO auto‑sync for UX artefacts
  • Automated “requirements completeness” scoring (a minimal scoring and DOR-gate sketch follows this list)
  • Agent‑based orchestration to remove repetitive handoffs
  • Real-time story/feature validation for business clarity
  • Structured intake summarization.
  • Requirement and acceptance criteria generation.
  • Definition of Ready validation.
  • Early feasibility guidance.
  • Standard intake format.
  • Single source of requirement truth.
  • Explicit readiness gates.
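
Several of these ideas — the DOR gatekeeper agent and the completeness scoring in particular — reduce to one mechanism: score a story on the presence of DOR-relevant content and keep it out of planning below a threshold. The sketch below is a minimal rule-based illustration; the field names, equal weighting, and the 0.8 threshold are assumptions for illustration, and an AI reviewer could later replace these presence checks with semantic ones.

```python
def completeness_score(story: dict) -> float:
    """Score a user story 0..1 on the presence of DOR-relevant content.
    The checks and their equal weighting are illustrative assumptions."""
    checks = [
        bool(story.get("title", "").strip()),
        bool(story.get("description", "").strip()),
        len(story.get("acceptance_criteria", [])) >= 2,  # at least two testable ACs
        bool(story.get("ux_link")),                      # Figma frame attached
        bool(story.get("architecture_note")),            # early feasibility captured
        not story.get("open_questions"),                 # no unresolved clarifications
    ]
    return sum(checks) / len(checks)

def dor_gate(story: dict, threshold: float = 0.8) -> tuple[bool, float]:
    """Return (ready, score); stories below the threshold stay out of planning."""
    score = completeness_score(story)
    return score >= threshold, score

story = {
    "title": "Export filing summary to PDF",
    "description": "As a preparer I need a PDF export of the filing summary...",
    "acceptance_criteria": ["PDF totals match on-screen totals",
                            "Export completes in under 5 seconds"],
    "ux_link": "",
    "architecture_note": "Reuse the existing reporting service",
    "open_questions": ["Which locales need localized number formats?"],
}
print(dor_gate(story))  # (False, 0.666...) -> blocked until the open question is resolved
```

The value of the gate is less the score than the conversation it forces: the story above is bounced for a missing UX link and an open question before a developer ever picks it up.
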
Slide 12: Heatmap — Pain Points by Step (Current State)
Type: narrative

Pain Points

  • Primary constraint metric: number of rework loops per story.
  • Supporting indicators: feature churn and lead time.
  • Baseline captured before AI support introduced.
Slide 13: Stakeholders & Additional Notes
Type: narrative

Additional

  • AI reduces manual effort and ambiguity.
  • Decision authority remains unchanged.
  • Accountability stays with existing roles.
  • Business / Product Owner: Source of requirements; needs structured intake
  • Business Analysts: Primary owners of requirements; biggest beneficiaries of automation
  • Architecture: Overloaded; critical to early feasibility; candidates for AI pattern assistance
  • UX: Needs earlier involvement; Figma → ADO integration recommended
  • Developers: Need complete AC and clarity; benefit from automated AC + test generation
  • QA: Major uplift potential via AI test suite generation
  • Scrum Master: Gatekeeper for DOR and planning readiness
  • CT Governance / CAB: Target for downstream automation
Slide 14: AI Opportunity Matrix
Type: narrative

Opportunities

  • Quick Wins
  • AC & Story Generator: derive acceptance criteria and stories from business input/UX → ADO
  • Requirements Intake Summarizer: turn emails/meetings into structured intake + epics/features
  • DOR Gatekeeper Agent: automated readiness checks before planning (fields, AC quality, links)
  • Strategic Bets
  • Architecture Pattern Advisor: propose patterns/constraints early
  • Compliance/CAB Pre‑Assembler: prefill forms, attach scans/tests, build evidence pack
  • Risk/Impact Forecaster: predict requirement volatility, downstream bug risk, cycle‑time impact
  • GUIDING PRINCIPLES
  • AI operates between PDLC gates.
  • Humans approve at intake, readiness, and planning.
  • AI prepares, validates, and flags.
  • Humans decide and commit.
  • Incremental Improvement
  • ADO Admin Automation: bulk create/update epics/features/stories; labels, links, templates (a minimal REST sketch follows this list)
  • Meeting → Action Extractor: auto‑create follow‑ups/tasks, tag owners, set due dates
  • Evaluate Later
  • Auto‑Negotiation Agent with Business: fully autonomous clarification with stakeholders
  • End‑to‑End Requirements Simulation: synthetic user flows & cost/benefit simulation at intake
  • UX→ADO Linker: sync Figma frames to stories (IDs, flows, assets) to reduce context loss
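
The ADO Admin Automation quick win is the most mechanical of these: Azure DevOps exposes a work-item REST API that accepts JSON Patch documents, so AI-drafted stories can be created in bulk once a human has approved them. The sketch below is minimal and hedged: ORG, PROJECT, and PAT are placeholders, the field values are illustrative, and error handling beyond raise_for_status is omitted.

```python
import json
import requests

ORG, PROJECT, PAT = "my-org", "my-project", "personal-access-token"  # placeholders

def create_story(title: str, description: str, tags: str = "") -> int:
    """Create one User Story via the Azure DevOps work-item REST API."""
    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems"
           "/$User%20Story?api-version=7.1")
    # Work items are created/updated with a JSON Patch document.
    patch = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
    ]
    if tags:
        patch.append({"op": "add", "path": "/fields/System.Tags", "value": tags})
    resp = requests.post(
        url,
        data=json.dumps(patch),
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),  # basic auth: blank username + personal access token
    )
    resp.raise_for_status()
    return resp.json()["id"]

# Illustrative bulk run over drafts a human has already approved:
backlog = [
    ("Export filing summary to PDF", "Drafted from intake record; BA approved"),
    ("Surface validation errors inline", "Drafted from intake record; BA approved"),
]
for title, desc in backlog:
    print("Created work item", create_story(title, desc, tags="ai-drafted"))
```

Keeping the human approval step before this script runs is consistent with the guiding principles above: AI prepares and validates, humans decide and commit.
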

Fabric

41
Pain Points
23
Root Causes
27
Opportunities
22
Repetitive Tasks

Products in Scope: Fabric - Portal, Fabric - Next

Pain Points (41)
  • Release cycles are slow
  • Quality of code is poor due to low test coverage
  • CI/CD processes are not consistent
  • Support challenges
  • Lots of technical debt from years of poorly designed components
  • Fragmented user experience, although fabric.ey.com is starting to fix a number of those issues
  • Development environments are not consistent
  • DEV/QA/UAT/PROD are all hand built, so very inconsistent
  • There is no clear identification of what code is deployed where based on the completed work
  • There is no audit of releases
  • Some code cannot be rolled back as releases are not tagged
  • Manual Testing
  • Sign-off by InfoSec is always manual, which slows down release frequency
  • Changes in requirements or conflicting requirements
  • Communication gaps between business and tech teams
  • Prioritization – all urgent
  • Underestimation
  • Scope changes in between sprints
  • Dependency – waiting for other teams
  • Tech debt
  • Integration issues
  • Environment inconsistency
  • Knowledge silos
  • Automation test failures
  • Deployment failure / rollback
  • Environment difference
  • Lack of automation
  • Last minute bugs
  • Changing requirements
  • Manual testing burden
  • Poor communication
  • Technical debt
  • Deployment complexity
  • Unclear requirements
  • Estimation errors
  • Bug backlogs
  • Pressure for speed
  • Inadequate planning
  • No feedback loops
  • Knowledge management
  • Tool overload
Root Causes (23)
  • Low skill level of product management
  • Low skill level of engineering leadership and general engineering talent
  • Limited awareness of cloud native solutions
  • Limited knowledge of modern CI/CD approaches (this has recently been addressed with the new Fabric Developer Workflow)
  • PI process is waterfall in nature, slow and cumbersome to execute, and inconsistent between teams
  • Platform needs defined automation in the form of Landing Zones
  • Define all UX consistently with Motif and XDA team
  • Manual testing
  • Overly manual process for release into production
  • Time-consuming: a release can take a full day
  • Lots of engineers required to coordinate with DevOps engineers to complete production deployment
  • Software not built to a mature standard
  • Resistance to change
  • Quality issues
  • Tech debt
  • Lack of testing automation
  • Ownership and accountability
  • Security, performance, and scalability ignored
  • Long release cycles
  • Not taking proper retros and actions
  • Lack of DevOps automation
  • Fear of touching working code / code quality
  • Communication gap
Opportunities & AI Ideas (27)
  • Traditional automation opportunities:
  • All infrastructure should be defined as standard Landing Zones for common compute types, from Kubernetes and Container Apps to Databricks
  • All code should be built and deployed using KaaS Single and Multi Tenant solutions
  • All code should have a full range of automated tests
  • AI/agentic opportunities:
  • Figma designs should use Motif MCP Server
  • All projects should be analysed with Factory.ai DROID
  • All Unit Testing and integration testing generated with AI tooling
  • All APIs generated with AI tooling
  • Build automation test suites for code, APIs and User Experiences using modern frameworks
  • Automate all testing activities and remove the separate QA team; developers should hand over full automated tests as part of the Definition of Done
  • Consistent environments
  • Fully integrated CI/CD release process with semantically tagged versions of code (a tagging sketch follows this list)
  • Automate all InfoSec evidence requirements gathering
  • Automate all of the InfoSec sign-off using AI tools
  • Run set of agents to validate release and key features
  • Market analysis at scale
  • Competitor analysis, trend prediction, innovation suggestions
  • Stakeholder conversations to requirements
  • Innovation suggestions
  • Ambiguity detection, auto-generation of user stories, smart prioritization
  • Risk prediction, resource optimization, planning assistant
  • Architecture recommendation
  • Code generation, auto code review, refactor suggestions
  • Auto documentation
  • Auto-generated test cases, bug prediction, root cause analysis, test data generation
  • Safer releases, reduced downtime, faster incident response, auto release notes
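
The "no semantic versions" root cause and the tagging opportunity above are straightforward to automate in CI. Below is a minimal sketch that derives the next version from existing git tags and commit messages; the vX.Y.Z tag convention and the conventional-commit keywords (feat, BREAKING CHANGE) are assumptions for illustration, not an existing Fabric standard.

```python
import re
import subprocess

def latest_version() -> tuple[int, int, int]:
    """Find the highest existing vX.Y.Z tag, or (0, 0, 0) if none exist."""
    tags = subprocess.run(["git", "tag", "--list", "v*"],
                          capture_output=True, text=True, check=True).stdout.split()
    versions = [tuple(map(int, m.groups()))
                for t in tags if (m := re.fullmatch(r"v(\d+)\.(\d+)\.(\d+)", t))]
    return max(versions, default=(0, 0, 0))

def next_version() -> str:
    """Bump major/minor/patch based on commit messages since the last tag."""
    major, minor, patch = latest_version()
    rng = f"v{major}.{minor}.{patch}..HEAD" if (major, minor, patch) != (0, 0, 0) else "HEAD"
    log = subprocess.run(["git", "log", "--format=%s%n%b", rng],
                         capture_output=True, text=True, check=True).stdout
    if "BREAKING CHANGE" in log:
        return f"v{major + 1}.0.0"
    if re.search(r"^feat", log, re.MULTILINE):
        return f"v{major}.{minor + 1}.0"
    return f"v{major}.{minor}.{patch + 1}"

if __name__ == "__main__":
    tag = next_version()
    subprocess.run(["git", "tag", "-a", tag, "-m", f"Release {tag}"], check=True)
    print(f"Tagged {tag}: this build is now identifiable and rollback-able.")
```

Once every build is tagged this way, the "cannot roll back, releases not tagged" pain point largely disappears: any prior tag can identify the corresponding artefact in Artifactory.
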
Repetitive Tasks (22)
  • Too many meetings
  • Lack of firm requirements defined by product teams
  • Engineering making engineering decisions
  • Architects previously were just for sign-off and had no influence on the design of the Epic
  • Fabric was made up of fragmented pieces
  • QA process is mostly manual for UX
  • QA for APIs is generally automated with Postman
  • InfoSec sign-off is manual
  • GIS is manual
  • Reporting is pulling information from ADO
  • Manual Testing of the product is performed by separate QA testing team
  • Code deployments are often problematic, requiring a bridge call with 20 engineers to help identify issues, often related to inconsistent testing environments
  • Testing
  • Status meeting
  • Bug fixing and retesting
  • Document update
  • Environment maintenance
  • Code reviews
  • Conflict resolution, if any
  • Backlog creation
  • Backlog grooming
  • Demo
Process Steps (28)
  • Input:
  • Products defined in Aha
  • Hand off to Architect and Design
  • PI Process is run once a month to cover 2 sprints
  • Output:
  • ADO Stories
  • Word, Miro documents
  • Requirements Gathering
  • Develop and Testing
  • Hand-off to engineering to define Features, Stories and Tasks in ADO
  • Development is managed in GHE
  • CI & CD process is fragmented and not consistent
  • No semantic versions of code built or tagged
  • CI builds code into ZIP for UX and Docker Image for backend and stored in Artifactory
  • SonarQube, Checkmarx, Mend used to scan code
  • Source Code, Running product
  • Developers work from ADO issues
  • Develop code in 2 week sprints
  • Code is deployed into the Dev environment and moved to the QA environment for QA testing
  • Code
  • Running Solution
  • QA builds test plan from Epics and User Stories
  • Manual tests are executed against the code
  • Test report on quality
  • Bug issues are created in ADO
  • Code is tracked in a release branch
  • It is promoted from UAT to Production once a month
  • Fabric.ey.com
Additional Notes (31)
  • Platform Experience Team for EY Fabric
  • DevOps competency
  • Delivery leads, Engineering leads
  • Speed without sacrificing quality - Automate repetitive tasks
  • Proactive problem detection - Find issues before they become crises
  • Data-driven decisions - Replace gut feelings with insights
  • Continuous learning - Gets better over time from your team's patterns
  • 24/7 assistance - Always available, never tired
  • Consistency - No human error in repetitive tasks
  • Scale expertise - Make junior developers more productive
  • Reduced cognitive load - Let AI handle routine decisions
  • Better resource utilization - Optimize what you already have
  • Faster time-to-market - Ship quality software faster
  • Requirements Gathering
  • Collaborative Involvement
  • Cross-Functional Communication
  • All relevant stakeholders participate in discussions to gather comprehensive requirements
  • Regular meetings facilitate effective communication between front-end, back-end, DevOps, and testing teams
  • Plan
  • Structured Planning Meetings
  • Clear Roadmap
  • Regular planning meetings help prioritize features and align on goals
  • A well-defined roadmap guides the development process and sets expectations for all teams
  • Build
  • Use of GitHub Actions workflows streamlines the build process and reduces manual errors.
  • Integrated Code Quality Checks
  • Automated code scanning during the build process identifies potential issues early
  • Test
  • Continuous Feedback Loop: Regular feedback from testing teams helps refine features and improve quality
  • Release
  • Approval Processes: Clear approval workflows ensure that only validated features are released to production.
All Slide Details (Raw)

Phase 1 KO Slides and Templates - Fabric.pptx

Slide 1: Ideation | Requirements Gathering SL/Portfolio: Product EY Fabric
Type: columnar

Process Steps:

  • Input:
  • Products defined in Aha
  • Hand off to Architect and Design
  • PI Process is run once a month to cover 2 sprints
  • Output:
  • ADO Stories
  • Word, Miro documents

Repetitive Tasks:

  • Too many meetings
  • Lack of firm requirements defined by product teams
  • Engineering making engineering decisions
  • Architects previously were just for sign-off and had no influence on the design of the Epic
  • Fabric was made up of fragmented pieces

Pain Points:

  • Release cycles are slow
  • Quality of code is poor due to low test coverage
  • CI/CD processes are not consistent
  • Support challenges
  • Lots of technical debt from years of poorly designed components
  • Fragmented user experience, although fabric.ey.com is starting to fix a number of those issues
Slide 2: Ideation | Requirements Gathering SL/Portfolio: Product EY Fabric
Type: columnar

Root Causes:

  • Low skill level of product management
  • Low skill level of engineering leadership and general engineering talent
  • Limited awareness of cloud native solutions
  • Limited knowledge of modern CI/CD approaches (this has recently been addressed with the new Fabric Developer Workflow)
  • PI process is waterfall in nature, slow and cumbersome to execute, and inconsistent between teams

Opportunities:

  • Traditional automation opportunities:
  • All infrastructure should be defined as standard Landing Zones for common compute types, from Kubernetes and Container Apps to Databricks
  • All code should be built and deployed using KaaS Single and Multi Tenant solutions
  • All code should have a full range of automated tests
  • AI/agentic opportunities:
  • Figma designs should use Motif MCP Server
  • All projects should be analysed with Factory.ai DROID
  • All Unit Testing and integration testing generated with AI tooling
  • All APIs generated with AI tooling

Additional:

  • Platform Experience Team for EY Fabric
  • DevOps competency
Slide 3: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop (and Testing)
Slide 4: Build | Plan SL/Portfolio: Product EY Fabric
Type: columnar

Process Steps:

  • Input:
  • Hand-off to engineering to define Features, Stories and Tasks in ADO
  • Development is managed in GHE
  • CI & CD process is fragmented and not consistent
  • No semantic versions of code built or tagged
  • CI builds code into ZIP for UX and Docker Image for backend and stored in Artifactory
  • SonarQube, Checkmarx, Mend used to scan code
  • Output:
  • Source Code, Running product

Repetitive Tasks:

  • QA process is mostly manual for UX
  • QA for APIs is generally automated with Postman
  • InfoSec sign-off is manual
  • GIS is manual

Pain Points:

  • Development environments are not consistent
  • DEV/QA/UAT/PROD are all hand built, so very inconsistent
Slide 5: Build | Plan SL/Portfolio: Product EY Fabric
Type: columnar

Root Causes:

  • Platform needs defined automation in the form of Landing Zones
  • Define all UX consistently with Motif and XDA team

Opportunities:

  • Traditional automation opportunities:
  • All infrastructure should be defined as standard Landing Zones for common compute types, from Kubernetes and Container Apps to Databricks
  • All code should be built and deployed using KaaS Single and Multi Tenant solutions
  • All code should have a full range of automated tests
  • AI/agentic opportunities:
  • Figma designs should use Motif MCP Server
  • All projects should be analysed with Factory.ai DROID
  • All Unit Testing and integration testing generated with AI tooling
  • All APIs generated with AI tooling
Slide 6: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop (and Testing)
Slide 7: Build | Development SL/Portfolio: Product X
Type: columnar

Process Steps:

  • Input:
  • Developers work from ADO issues
  • Develop code in 2 week sprints
  • Code is deployed into the Dev environment and moved to the QA environment for QA testing
  • Output:
  • Code
  • Running Solution

Repetitive Tasks:

  • Reporting is pulling information from ADO

Pain Points:

  • There is no clear identification of what code is deployed where based on the completed work
  • There is no audit of releases
  • Some code cannot be rolled back as releases are not tagged
Slide 8: Build | Development SL/Portfolio: Product X
Type: columnar

Opportunities:

  • Traditional automation opportunities:
  • All infrastructure should be defined as standard Landing Zones for common compute types, from Kubernetes and Container Apps to Databricks
  • All code should be built and deployed using KaaS Single and Multi Tenant solutions
  • All code should have a full range of automated tests
  • AI/agentic opportunities:
  • Figma designs should use Motif MCP Server
  • All projects should be analysed with Factory.ai DROID
  • All Unit Testing and integration testing generated with AI tooling
  • All APIs generated with AI tooling
Slide 9: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop Testing
Slide 10: Build | Testing SL/Portfolio: Product EY Fabric
Type: columnar

Process Steps:

  • Input:
  • QA builds test plan from Epics and User Stories
  • Manual tests are executed against the code
  • Output:
  • Test report on quality
  • Bug issues are created in ADO

Repetitive Tasks:

  • Manual Testing of the product is performed by separate QA testing team

Pain Points:

  • Manual Testing
Slide 11: Build | Testing SL/Portfolio: Product EY Fabric
Type: columnar

Root Causes:

  • Manual testing

Opportunities:

  • Traditional automation opportunities:
  • Build automation test suites for code, APIs and User Experiences using modern frameworks
  • AI/agentic opportunities:
  • Automate all testing activities and remove the separate QA team; developers should hand over full automated tests as part of the Definition of Done
Slide 12: Process Mapping & Analysis
Type: narrative

Process Steps

  • Requirements Gathering
  • Develop Testing
Slide 13: Build | Release SL/Portfolio: Product EY Fabric
Type: columnar

Process Steps:

  • Input:
  • Code is tracked in a release branch
  • It is promoted from UAT to Production once a month
  • Output:
  • Running solution
  • Fabric.ey.com

Repetitive Tasks:

  • Code deployments are often problematic, requiring a bridge call with 20 engineers to help identify issues, often related to inconsistent testing environments

Pain Points:

  • Sign-off by InfoSec is always manual, which slows down release frequency
Slide 14: Build | Release SL/Portfolio: Product EY Fabric
Type: columnar

Root Causes:

  • Overly manual process for release into production
  • Time-consuming: a release can take a full day
  • Lots of engineers required to coordinate with DevOps engineers to complete production deployment

Opportunities:

  • Traditional automation opportunities:
  • Consistent environments
  • Full integrated CI/CD release process with semantically tagged version of code
  • Automate all InfoSec evidence requirements gathering
  • AI/agentic opportunities:
  • Automate all of the InfoSec sign-off using AI tools
  • Run set of agents to validate release and key features

Additional:

  • Delivery leads, Engineering leads
Slide 15: 2.1
Type: generic
  • Templates for ‘Delivery on a Page’
Slide 16: About
Type: generic
  • Template slides
  • Following the work to capture the current process mapping across the PDLC, there is significant value in being able to view E2E delivery on a page.
  • Delivery on a page provides a perspective of all processes and complexity, of the entire delivery process.
  • The ‘current view’ will also be overlaid against the ‘end view’ for comparison
  • Manual work is needed for this next phase, but it is a simple copy-paste exercise.
Slide 17: Amend page as necessary
Type: generic
  • Requirements Gathering
  • Discuss on business problem and opportunities
  • Generate product/application ideas and solutions
  • Evaluate feasibility and value
  • Define high level vision & goals
  • Identify the stakeholders and users
  • Stakeholder interviews
  • Document Functional and Non Functional requirements
  • Create features, stories with acceptance criteria
  • Prioritize the requirements
  • Document business rules and constraints
  • Create Project timeline and milestones
  • Break down work into Sprints/Iterations
  • Estimate effort & resource allocation
  • Define architecture and design
  • Identify risks and mitigation strategies
  • Set up development environment
  • Define tools
  • Define testing strategy
  • Writing code by following standards
  • Implement features
  • Code review, Unit testing, manual/automation testing
  • CI/CD
  • Documentation
  • Tracking daily status
  • Deploy to QA
  • Execution of unit testing, Integration testing and system testing
  • Perform functional testing
  • Regression testing
  • Performance testing
  • User acceptance testing
  • Infosec review & security testing
  • Bug fixing & re-testing
  • Automate testing as much as possible
  • Sign off from QA
  • Tracking daily status
  • Deploy to UAT
  • Smoke testing , regression testing, validation
  • Sign off from QA & Infosec & product
  • Documentation – user guide and training
  • PROD deployment
  • POST release support
  • Gather feedback from the user for future iterations
Slide 18: Delivery Processes on a Page SL/Portfolio: Product X
Type: columnar

Pain Points:

  • Changes in requirements or conflicting requirements
  • Communication gaps between business and tech teams
  • Prioritization – all urgent
  • Underestimation
  • Scope changes in between sprints
  • Dependency – waiting for other teams
  • Tech debt
  • Integration issues
  • Environment inconsistency
  • Knowledge silos
  • Manual testing
  • Automation test failures
  • Deployment failure / rollback
  • Environment difference
  • Lack of automation
  • Last minute bugs

Repetitive Tasks:

  • Testing
  • Status meeting
  • Bug fixing and retesting
  • Document update
  • Environment maintenance
  • Code reviews
  • Conflicts resolve if any
  • Backlog creation
  • Backlog grooming
  • Demo

Root Causes:

  • Software not built to a mature standard
  • Resistance to change
  • Quality issues
  • Tech debt
  • Lack of testing automation
  • Ownership and accountability
  • Security, performance, and scalability ignored
  • Long release cycles
  • Not taking proper retros and actions
  • Lack of DevOps automation
  • Fear of touching working code / code quality
  • Communication gap

Opportunities:

  • Market analysis at scale
  • Competitor analysis, trend prediction, innovation suggestions
  • Stakeholder conversations to requirements
  • Innovation suggestions
  • Ambiguity detection, auto-generation of user stories, smart prioritization
  • Risk prediction, resource optimization, planning assistant
  • Architecture recommendation
  • Code generation, auto code review, refactor suggestions
  • Auto documentation
  • Auto-generated test cases, bug prediction, root cause analysis, test data generation
  • Safer releases, reduced downtime, faster incident response, auto release notes

Additional:

  • Speed without sacrificing quality - Automate repetitive tasks
  • Proactive problem detection - Find issues before they become crises
  • Data-driven decisions - Replace gut feelings with insights
  • Continuous learning - Gets better over time from your team's patterns
  • 24/7 assistance - Always available, never tired
  • Consistency - No human error in repetitive tasks
  • Scale expertise - Make junior developers more productive
  • Reduced cognitive load - Let AI handle routine decisions
  • Better resource utilization - Optimize what you already have
  • Faster time-to-market - Ship quality software faster
Slide 19: Delivery Processes on a Page SL/Portfolio: Product X
Type: table

Pain Points:

  • Changing requirements
  • Manual testing burden
  • Poor communication
  • Technical debt
  • Deployment complexity
  • Unclear requirements
  • Knowledge silos
  • Estimation errors
  • Integration issues
  • Bug backlogs
Slide 20: Delivery Processes on a Page SL/Portfolio: Product X
Type: table

Pain Points:

  • Pressure for speed
  • Lack of automation
  • Inadequate planning
  • No feedback loops
  • Technical debt
  • Knowledge management
  • Tool overload
Slide 21: 3
Type: generic
  • Current Strengths & Effective Practices
Slide 22: About
Type: generic
  • Template slides
  • Capturing what works well is essential because it ensures the transformation builds on the strong elements of the current PDLC, not just the gaps.
  • While Phase 1 focuses heavily on identifying bottlenecks, waste, and root causes, a balanced process analysis also highlights the strengths, proven practices, and value add steps that already support good delivery outcomes.
Slide 23: Current Strengths & Effective Practices
Type: table

Additional:

  • Requirements Gathering
  • Collaborative Involvement
  • Cross-Functional Communication
  • All relevant stakeholders participate in discussions to gather comprehensive requirements
  • Regular meetings facilitate effective communication between front-end, back-end, DevOps, and testing teams
  • Plan
  • Structured Planning Meetings
  • Clear Roadmap
  • Regular planning meetings help prioritize features and align on goals
  • A well-defined roadmap guides the development process and sets expectations for all teams
  • Build
  • Use of GitHub Actions workflows streamlines the build process and reduces manual errors.
  • Integrated Code Quality Checks
  • Automated code scanning during the build process identifies potential issues early
  • Test
  • Continuous Feedback Loop: Regular feedback from testing teams helps refine features and improve quality
  • Release
  • Approval Processes: Clear approval workflows ensure that only validated features are released to production.

Tax

24
Pain Points
18
Root Causes
33
Opportunities
24
Repetitive Tasks

Products in Scope: Payroll, GTP EYXP, TTA Suite (ITTS), EYMP

Pain Points (24)
  • Late injects cause QA disruption and risk install integrity… need stricter definition of done
  • Lack of test and infosec impact awareness on many bugs after feature DOD; little reaction time and difficult to remediate before install
  • CAB deliverables and process can be more streamlined
  • Lengthy initial performance test setup time for major installs
  • Environment build transitions are prone to misconfiguration and errors that burn valuable test time
  • Lengthy production install procedures with auto and manual steps
  • Non-vertical story structure (FE/BE split)
  • Lack of feature flagging
  • Misaligned planning & dependencies (e.g. Fabric & Motif)
  • Story-scoped vs scenario-scoped testing
  • Time taken to create Figma designs
  • Repeating manual tasks across different regions increases the risk of human error and inconsistency.
  • Coordinating configurations, environments, and dependencies across geographic areas often leads to failures.
  • Manual provisioning is unreliable and prone to mistakes, lacking a standardized approach.
  • Conducting manual testing in Production, UAT, and QA/DEV environments results in inconsistencies.
  • Lack of documentation and knowledge transfer across teams
  • Lack of recording/ transcription capabilities while gathering the requirements
  • Long feedback loops
  • Release process takes too long
  • Lack of clear, complete business requirements upfront. Business teams often provide high‑level or vague requirements without essential details such as what the functionality should achieve or how it should work
  • Frequent shifts in priorities even with a carefully planned PI – typically caused by emerging client demands following RFP for critical features to seal the deal
  • Requirements are not tracked from start to end, so user stories miss details. This causes rework and many clarification cycles with PM, engineers, QA, and SMEs.
  • Feature docs (specs/requirements/history) are spread out, so delivery and support are slower and it’s hard to reuse info for guides/training materials.
  • PI Plannings: Because planning needs to “fill” the quarter with actionable items, late-arriving requirements get added close to planning cutoffs. Estimates and the plan change during the quarter.
Root Causes (18)
  • Ambiguous or incomplete business requirements
  • Acceptance criteria need work
  • No feature flagging meant unfinished code could ship
  • Absence of Automated Infrastructure – Recurring tasks are not streamlined or automated
  • Configuration Management Disorder – Configuration controls are minimal or nonexistent
  • Dependency Challenges – Provisioning is complex, and visibility is limited
  • Environment Drift – Configurations lack version control, causing inconsistencies
  • Complex Data Movement – ETL processes are manual and lack a supporting framework
  • Database Maintenance Shortcomings – Automated processes are missing
  • Reliance on Manual Methods – Processes are not standardized or codified
  • EY restrictions / focus on metrics that reduce teamwork flexibility
  • Inconsistencies between architecture documents and reality
  • Lack of test automation
  • Not following Shift Left guidelines
  • Detailed discussions for requirements happen in meetings and sometimes clarified in email. This leads to details missed in user stories.
  • Weak “ready” checks (Definition of Ready) and no early tech check before we commit.
  • Planning is driven by dates that have been promised to clients during sales or RFP discussions. This puts pressure on the time needed for requirements detail and creates a healthy conflict between client promise and delivery reality.
  • Process issues that prevent people from using online tools (Qualys/Mend) directly.
Opportunities & AI Ideas (33)
  • Architecture and InfoSec involved early in feature creation; explicit gate
  • Update ADO user story (and bug) template to identify infosec and testing impact
  • Automated user story code review and PR approval
  • Infosec automated code inspection and certification
  • Earlier release scope freeze
  • Expanded data recovery and synthetic data creation to reduce test data setup time
  • Continuous unit test automation framework across complete api/microservice tier
  • Utilize AI w/Figma to build out initial business requirements and refinements
  • Leverage SLA tool to support user story/requirements development
  • Leverage DROID & Factory.ai tools for development of QA Automation testing
  • Enforce feature flag usage across EYXP
  • Improve dependency linking in Azure DevOps
  • Shift to scenario-based end-to-end testing
  • Enhance BUAT accountability & criteria
  • Develop clearer, scenario-driven business requirements
  • Enable recording/transcription capabilities for BAs (BA)
  • Use AI agents to help with requirement analysis, resolve functional dependencies (auto recognition) (BA)
  • Use AI agents to create Epics, Features, and User Stories in ADO (BA)
  • Use AI to reduce technical debt (based on analysis)
  • Use AI tools to automate the flow using QA AI agents
  • Use AI to provide proposals of Fixes for code analysis outcomes
  • Auto-generate release notes and run books from natural language (a draft-generation sketch follows this list)
  • Use AI agent to assist with delivery process especially pipelines
  • Use AI to analyze current architecture and identify gaps
  • Use AI tools to increase unit test coverage
  • Use AI tools to assist with different type of documentation
  • Use AI tools to analyse usage of infrastructure and propose cost savings
  • Strengthen the “Definition of Ready” and clarification phase with a DOR checklist plus a short discovery checklist; add a clear clarification step before writing user stories. This can enable early prototyping using AI tools and validation of the result with business owners (critical for code generation)
  • Add an AI reviewer to flag requirements that are ready for PI plannings and flag the items with pending clarifications.
  • AI check of specs/requirements to find missing details or conflicts and create questions before user stories are created.
  • Alternative to PI planning: plan per feature and deliver using feature flags.
  • AI help to draft a phased delivery plan.
  • A searchable feature knowledge base to give context for code generation and to help create training/user guides/support info (with human review).
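
Of these, the release-notes idea flagged above has a simple deterministic core that an LLM pass can then polish: collect the commit subjects since the previous release tag and group them into a draft. The sketch below assumes conventional-commit-style prefixes (feat/fix/chore), which is an illustrative convention rather than a current Tax standard.

```python
import subprocess
from collections import defaultdict

# Prefix -> section mapping is an illustrative assumption.
SECTIONS = {"feat": "Features", "fix": "Fixes", "chore": "Maintenance"}

def draft_release_notes(since_tag: str) -> str:
    """Group commit subjects since the previous release tag into a draft."""
    subjects = subprocess.run(
        ["git", "log", "--format=%s", f"{since_tag}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    grouped = defaultdict(list)
    for subject in subjects:
        prefix = subject.split(":", 1)[0].strip().lower()
        grouped[SECTIONS.get(prefix, "Other changes")].append(subject)
    lines = [f"Draft release notes (since {since_tag})", ""]
    for section, items in grouped.items():
        lines.append(section)
        lines.extend(f"  - {item}" for item in items)
        lines.append("")
    return "\n".join(lines)

# Review and polish the draft (optionally with an LLM pass), then attach
# it to the release/CAB evidence pack.
print(draft_release_notes("v1.4.0"))
```
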
Repetitive Tasks (24)
  • Feature churn (although it’s gotten better)... Clarifying requirements, refactoring to smaller acceptance criteria, and estimation refinement
  • InfoSec testing lacks sufficient automation
  • Low Automation & Overreliance on Manual Testing
  • Decouple from GTP deployment process
  • Accessibility compliance
  • Manual process of gathering release documents from different parties/systems
  • Manual updates for each process (BIA, PIA, TTAR, etc.)
  • Submitting forecasting files manually
  • Recurring meetings to capture needs
  • Manual sorting, prioritization, and refinement of backlog items
  • Recurring story point estimation and capacity planning
  • Manual configuration of Prod, UAT, QA/DEV environments
  • Repetitive manual test execution (regression, smoke, security testing)
  • Manual verification across environments
  • Manual review of logs for errors and performance issues
  • Manual notes and general guidelines
  • Manual provisioning of Infrastructure
  • Manual creation and maintenance of test data (a synthetic-data sketch follows this list)
  • Manual rollbacks
  • Manual setup of dev environments
  • Manually checking logs
  • Manual security/compliance reports still happen (Qualys/Mend reports are exported and emailed) even though people can get the reports from the tools.
  • Infosec review for every release – although we have an efficient process, it is quite repetitive
  • TTAR and PIA processes are not efficient
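
The test-data item flagged above is a good example of a repetitive task that is cheap to automate deterministically. The sketch below generates reproducible synthetic fixtures with the Python standard library only; the column names and value pools are invented for illustration and are not a real Payroll/GTP schema, and no client data is involved.

```python
import csv
import random
import uuid

random.seed(42)  # reproducible fixtures across QA/DEV environments

# Hypothetical columns and value pools for illustration only.
COUNTRIES = ["UK", "DE", "PL", "IN", "US"]
STATUSES = ["draft", "submitted", "approved", "rejected"]

def synthetic_rows(n: int) -> list[dict]:
    """Generate n synthetic records; no client data is touched."""
    return [
        {
            "engagement_id": str(uuid.uuid4()),
            "country": random.choice(COUNTRIES),
            "status": random.choice(STATUSES),
            "gross_amount": round(random.uniform(1_000, 250_000), 2),
        }
        for _ in range(n)
    ]

rows = synthetic_rows(500)
with open("test_fixtures.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
print("Wrote 500 synthetic records to test_fixtures.csv")
```

Because the generator is seeded, every environment can rebuild identical fixtures on demand instead of maintaining hand-crafted test data.
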
Additional Notes (4)
  • Leverage retrospectives to iteratively improve the process as required
  • Reduced rework via clear requirements & aligned planning
  • Stronger governance across the product lifecycle
  • Increased stakeholder confidence
All Slide Details (Raw)

Delivery Processes On a Page - EYMP.pptx

Slide 1: Delivery Processes on a Page (current state) SL/Portfolio: Product EYMP
Type: generic
  • QA (Feature teams)
  • Final Regression
  • Performance
  • Infosec
  • Service-line requirement workshops
  • Identify features
  • Prototype discussions
  • Identify and prioritize tech debt and InfoSec features
  • Finalize the product features and AC
  • Prioritize & assign features
  • Internal sign off on requirements
  • PI Planning activities
  • 2-week sprints for product development work (sprint planning)
  • Product Owner accepts sprint commitment
  • ADR (Architectural Decision Record) for the new feature implementation
  • Definition of Done (DOD)
  • Closing all planned stories by the feature-complete DOD date
  • Product Owners – sign off on features and hold a system demo
  • KB updates
  • KT sessions
  • Support sign off
  • QA sign off
  • Business Sign off
  • Infosec Sign off
  • xDLC compliance (for major releases)
  • Deployment plan/T-minus
  • CAB review and acceptance
  • Deployment/ Go live
  • Hypercare period
  • Monitor product
Slide 2: Delivery Processes on a Page SL/Portfolio: Product EYMP
Type: columnar

Repetitive Tasks:

  • Feature churn (although it’s gotten better)... Clarifying requirements, refactoring to smaller acceptance criteria, and estimation refinement
  • InfoSec testing lacks sufficient automation

Pain Points:

  • Late injects cause QA disruption and risk install integrity… need stricter definition of done
  • Lack of test and infosec impact awareness on many bugs after feature DOD; little reaction time and difficult to remediate before install
  • CAB deliverables and process can be more streamlined
  • Lengthy initial performance test setup time for major installs
  • Environment build transitions are prone to misconfiguration and errors that burn valuable test time
  • Lengthy production install procedures with auto and manual steps

Opportunities:

  • Architecture and InfoSec involved early in feature creation; explicit gate
  • Update ADO user story (and bug) template to identify infosec and testing impact
  • Automated user story code review and PR approval
  • Infosec automated code inspection and certification
  • Earlier release scope freeze
  • Expanded data recovery and synthetic data creation to reduce test data setup time
  • Continuous unit test automation framework across complete api/microservice tier

Delivery Processes On a Page - EYXP.pptx

Slide 1: Delivery Processes on a Page SL/Portfolio: Product EYXP
Type: generic
  • Requirements Gathering
  • Engagement team collaboration with GBPM and BPMs
  • UX designs created from GBPM and BPM feedback
  • TPO creates stories from UX designs
  • TPO organizes product backlog with GBPM and BPMs
  • TPO creates features for J2PP
  • Cross-train feature readouts
  • User story creation by BA
  • SAFE PI planning to organize, prioritize, and plan releases
  • Iterative user story estimation and refinement each sprint
  • Spike stories created for architecture
  • Intake items created for DevOps/infrastructure
  • IAC pipelines created
  • CI/CD pipeline changes approved
  • Development in local feature branches
  • Unit testing
  • Code review request gated by quality scans
  • SCA / SAST
  • Deployment to DEV/QA/UAT/Prod envs
  • Testing support throughout various phases
  • Bug fixes
  • Test plan created for multiphase release
  • Test cases created iteratively each sprint
  • User story testing each sprint
  • Handoff from Dev to QA
  • Defects created and either resolved or converted to bugs
  • Regression testing prior to release
  • UAT testing prior to release
  • Daily triage of bugs from all environments
  • Release-specific pipelines copied from root release pipelines
  • Initial releases created and deployed to QA
  • Release branch created
  • Daily release check in calls
  • QA regression
  • UAT and InfoSec testing
  • Release branch locked
  • Deployments gated
  • Release notes created
  • Release go/no go call
  • Stage deploy
  • Production deploy
Slide 2: Delivery Processes on a Page SL/Portfolio: Product EYXP
Type: columnar

Repetitive Tasks:

  • Low Automation & Overreliance on Manual Testing
  • Decouple from GTP deployment process
  • Accessibility compliance

Pain Points:

  • Non-vertical story structure (FE/BE split)
  • Lack of feature flagging
  • Misaligned planning & dependencies (e.g. Fabric & Motif)
  • Story-scoped vs scenario-scoped testing
  • Time taken to create Figma designs

Root Causes:

  • Ambiguous or incomplete business requirements
  • Acceptance criteria need work
  • No feature flagging meant unfinished code could ship

Opportunities:

  • Utilize AI w/Figma to build out initial business requirements and refinements
  • Leverage SLA tool to support user story/requirements development
  • Leverage DROID & Factory.ai tools for development of QA Automation testing
  • Enforce feature flag usage across EYXP
  • Improve dependency linking in Azure DevOps
  • Shift to scenario-based end-to-end testing
  • Enhance BUAT accountability & criteria
  • Develop clearer, scenario-driven business requirements

Additional:

  • Leverage retrospectives to iteratively improve the process as required
  • Reduced rework via clear requirements & aligned planning
  • Stronger governance across the product lifecycle
  • Increased stakeholder confidence

Delivery Processes on a Page -TTA.pptx

Slide 1: Delivery Processes on a Page SL/Portfolio: Product TTA Suite
Type: generic
  • Requirements Gathering
  • BA & PM organizes product backlog with input from Business and Tech Lead
  • Dev Team & BA estimate SPs for each Feature/US
  • PI Planning held quarterly to prioritize backlog items
  • Features/User Stories planned each sprint by BA with input from Business, Tech Lead & Dev Team; weekly backlog refinement conducted with Business (covers the gap between PI Planning and Sprint Planning)
  • Solution Architecture Document and clarifications for major features handled by Solution Architect and Tech Lead with BA
  • User Story assignments for items that don’t require major design decided by the team; spikes created for unclear tasks by Developers, Tech Lead, or QA
  • PM works on the project’s budget, forecast, and team structure
  • PM works on licensing for the project and ensures the release process is consistent with CT standards
  • Internal Business Product Owner Team(s) demand
  • Internal Business Product Owners communicate requirements to Business Analyst & UI/UX
  • Business Analyst leads requirements gathering meetings
  • Business Analyst collaborates with Tech Lead and UI/UX Team for functional & technical requirements
  • Features and User Stories are drafted by the Business Analyst with input from Business, UI/UX team and Tech Lead
  • Project Manager coordinates delivery and release planning
  • Devs and DevOps follow existing design; changes made only if needed
  • Infrastructure Architecture Document created by Infrastructure Architect
  • Coding starts using GitFlow; work items assigned per sprint
  • Unit Testing performed by Devs (not full coverage)
  • Code scans with AquaSec, Qualys, Mend, Checkmarx; InfoSec review before release
  • Unplanned items (bugs/tickets) prioritized by PM with team based on severity
  • Pull Requests for each commit; 2 approvals required for Code Review
  • Deployment to higher environments via CAB approval; L3 Support deploys to production
  • Configuration updates, SQL scripts handled by L3 Support
  • Release Notes documented; iteration occurs as needed
  • Test Plans created by QA for the user stories
  • User story assigned to QA via ADO for testing
  • QA works directly with user story; questions clarified with BA or Dev team
  • Test preparation
  • QA tests and logs defects in ADO
  • Triage of unplanned items (e.g., production bugs) performed by whole team with Business before assigning to Developers
  • Pre-release to UAT environment after team agreement; handled via DevOps pipeline
  • QA performs pre-testing; occasionally Business tests (UAT) as well
  • Devs/DevOps resolve missing configuration or setup issues
  • Code freeze enforced; InfoSec review performed post-freeze
  • Known issues communicated to Business; decisions made on fix timing or release postponement
  • Change Requests / Transition management handled as needed
  • Release executed via DevOps pipeline; Pull Requests merged into main
  • Compliance handled by PM (PIA, BIA, TTAR, etc.)
Slide 2: Delivery Processes on a Page SL/Portfolio: Product TTA Suite
Type: columnar

Repetitive Tasks:

  • Manual process of gathering release documents from different parties/systems
  • Manual updates for each process (BIA, PIA, TTAR, etc.)
  • Submitting forecasting files manually
  • Recurring meetings to capture needs
  • Manual sorting, prioritization, and refinement of backlog items
  • Recurring story point estimation and capacity planning
  • Manual configuration of Prod, UAT, QA/DEV environments
  • Repetitive manual test execution (regression, smoke, security testing)
  • Manual verification across environments
  • Manual review of logs for errors and performance issues
  • Manual notes and general guidelines
  • Manual provisioning of Infrastructure
  • Manual creation and maintenance of test data.
  • Manual rollbacks
  • Manual setup of dev environments
  • Manually checking logs

Pain Points:

  • Repeating manual tasks across different regions increases the risk of human error and inconsistency.
  • Coordinating configurations, environments, and dependencies across geographic areas often leads to failures.
  • Manual provisioning is unreliable and prone to mistakes, lacking a standardized approach.
  • Conducting manual testing in Production, UAT, and QA/DEV environments results in inconsistencies.
  • Lack of documentation and knowledge transfer across teams
  • Lack of recording/ transcription capabilities while gathering the requirements
  • Long feedback loops
  • Release process takes too long

Root Causes:

  • Absence of Automated Infrastructure – Recurring tasks are not streamlined or automated
  • Configuration Management Disorder – Configuration controls are minimal or nonexistent
  • Dependency Challenges – Provisioning is complex, and visibility is limited
  • Environment Drift – Configurations lack version control, causing inconsistencies
  • Complex Data Movement – ETL processes are manual and lack a supporting framework
  • Database Maintenance Shortcomings – Automated processes are missing
  • Reliance on Manual Methods – Processes are not standardized or codified
  • EY restrictions/ Focus on metrics that reduce teamwork flexibility
  • Inconsistencies between architecture documents and reality
  • Lack of test automation
  • Not following Shift Left guidelines

Opportunities:

  • Enable recording/transcription capabilities for BAs (BA)
  • Use AI agents to help with requirement analysis, resolve functional dependencies (auto recognition) (BA)
  • Use AI agents to create Epics, Features, and User Stories in ADO (BA)
  • Use AI to reduce technical debt (based on analysis)
  • Use AI tools to automate the flow using QA AI agents
  • Use AI to provide proposals of Fixes for code analysis outcomes
  • Auto-generate release notes and run books from natural language
  • Use AI agent to assist with delivery process especially pipelines
  • Use AI to analyze current architecture and identify gaps
  • Use AI tools to increase unit test coverage
  • Use AI tools to assist with different type of documentation
  • Use AI tools to analyse usage of infrastructure and propose cost savings

Delivery Processes On a Page - Payroll.pptx

Slide 1: Delivery Processes on a Page SL/Portfolio: Product Payroll
Type: generic
  • Requirements Gathering
  • We know our applications very well and we’re driving some requirements on our own. These requirements are based on our own observations and are focused for example on improving user experience, improving system performance, improving code logic etc.
  • Global request template (anybody from the Business who has access can raise a requirement and explain the need which is later evaluated by Business PdM
  • Dedicated weekly sessions to discuss requirements between the POs and BAs (CT & Business)
  • Incidents / PROD issues and UAT for every release
  • Meetings and discussions with the wider Business community – as we cooperate with the Business and clients closely, we attend various sessions devoted to our products where we provide guidelines and discuss how things work.
  • Product Owners run requirement sessions and present the requirement.
  • Team discusses the technical aspects.
  • Spike user stories are proposed to investigate the technical approach and document how the feature will be built.
  • Architect updates the design and starts reviewing security aspects.
  • More discussions with Product and BA: how to split the work and what technical user stories we need.
  • Product Owners, technical managers, and developer leads complete missing technical details in the user stories.
  • After discovery is complete, the team estimates.
  • User stories are assigned to iterations based on the plan and dependencies.
  • Developer analyses the user story and check with BA in case of clarifications required.
  • Developer uses AI tools to generate code structure/components, then reviews and adjusts the generated code if it does not align with the requirements.
  • Developer pushes code to the repository and creates a pull request (PR) using a PR template (can be populated with AI tools).
  • Automatic agents (GitHub Copilot) and workflow gates verify quality and alert in case of invalid code.
  • Team reviews the PR and merges into the develop branch.
  • Automatic gates run (unit tests and quality checks).
  • Release candidate version is tested in QA.
  • Release branch is deployed to UAT.
  • Further bug fixing is done based on QA/UAT results.
  • InfoSec is involved during the UAT phase.
  • Final release candidate is prepared.
  • Deployment pipelines are prepared based on the release branch.
  • We run extra quality checks to confirm the version has safe references to packages.
  • Release candidate is deployed on the date defined in the change request raised earlier by project managers.
Slide 2: Delivery Processes on a Page SL/Portfolio: Product Payroll
Type: columnar

Repetitive Tasks:

  • Manual security/compliance reports still happen (Qualys/Mend reports are exported and emailed) even though people can get the reports from the tools.
  • Infosec review for every release – although we have an efficient process, it is quite repetitive
  • TTAR and PIA processes are not efficient

Pain Points:

  • Lack of clear, complete business requirements upfront. Business teams often provide high‑level or vague requirements without essential details such as what the functionality should achieve or how it should work
  • Frequent shifts in priorities even with a carefully planned PI – typically caused by emerging client demands following RFP for critical features to seal the deal
  • Requirements are not tracked from start to end, so user stories miss details. This causes rework and many clarification cycles with PM, engineers, QA, and SMEs.

Root Causes:

  • Detailed discussions for requirements happen in meetings and sometimes clarified in email. This leads to details missed in user stories.
  • Weak “ready” checks (Definition of Ready) and no early tech check before we commit.
  • Planning is driven by dates that have been promised to clients during sales or RFP discussions. This puts pressure on the time needed for requirements detail and creates a healthy conflict between client promise and delivery reality.
  • Process issues that prevent people from using online tools (Qualys/Mend) directly.

Opportunities:

  • Strengthen the “Definition of Ready” and clarification phase with a DOR checklist plus a short discovery checklist; add a clear clarification step before writing user stories. This can enable early prototyping using AI tools and validation of the result with business owners (critical for code generation)
  • Add an AI reviewer to flag requirements that are ready for PI plannings and flag the items with pending clarifications.
  • AI check of specs /requirements to find missing details or conflicts and create questions before user stories are created.
  • Alternative to PI planning: plan per feature and deliver using feature flags.
Slide 3: Delivery Processes on a Page SL/Portfolio: Product Payroll
Type: columnar

Repetitive Tasks:

  • Manual security/compliance reports still happen (Qualys/Mend reports are exported and emailed) even though people can get the reports from the tools.

Pain Points:

  • Feature docs (specs/requirements/history) are spread out, so delivery and support are slower and it’s hard to reuse info for guides/training materials.
  • PI Plannings: Because planning needs to “fill” the quarter with actionable items, late-arriving requirements get added close to planning cutoffs. Estimates and the plan change during the quarter.

Opportunities:

  • AI help to draft a phased delivery plan.
  • A searchable feature knowledge base to give context for code generation and to help create training/user guides/support info (with human review).
