
AI Analytics Engineer, AVS Digital Transformation

GE HealthCare

On-site · Full-time · Senior · $114k–$172k/yr · Posted yesterday

About the role

Below is a complete, ready‑to‑customize application package for the **AI Analytics Engineer, AVS Digital Transformation** role at GE HealthCare.

It includes:

  1. Targeted Cover Letter – a concise, achievement‑focused narrative that mirrors the language of the posting and showcases the exact experience GE is looking for.
  2. Resume (PDF‑style layout) – a results‑oriented, keyword‑rich resume that will pass both recruiter screening and ATS parsing.
  3. Quick “Interview‑Ready” Cheat Sheet – talking‑point prompts and sample answers for the three core responsibility areas (AI‑Ready Data Foundation, Analytics & Self‑Service, AI & Advanced Analytics Adoption).

Feel free to copy‑paste, edit the personal details, and adjust any numbers to reflect your own experience.


1️⃣ Cover Letter (PDF‑ready)

[Your Name]
[Street Address] • [City, State ZIP] • [Phone] • [Email] • LinkedIn: linkedin.com/in/[your‑handle]

April 5, 2026

Hiring Committee – Advanced Visualization Solutions
GE HealthCare
[Office Address – if known]

Dear Hiring Committee,

I am excited to apply for the **AI Analytics Engineer, AVS Digital Transformation** position (Req # ____). With **8+ years** designing and operating enterprise‑grade data platforms in Azure and AWS, and a proven record of delivering **high‑performance Power BI semantic models** that enable self‑service analytics and AI‑ready data pipelines, I am well positioned to help GE HealthCare’s AVS segment accelerate its next‑generation analytics and generative‑AI initiatives.

### Why I’m a strong fit

| GE HealthCare Requirement | My Relevant Experience & Impact |
|---------------------------|---------------------------------|
| **Curated semantic layers & AI‑ready data foundation** | • Designed a **dimensional model** for a global imaging device portfolio (≈ 2 B rows) on Azure Synapse, delivering **99.9 % data accuracy** and **lineage captured in Azure Purview**. <br>• Built **SQL‑based ELT pipelines** in Azure Data Factory and AWS Glue that reduced raw‑to‑analytics latency from 48 h to < 2 h. |
| **Power BI enterprise‑grade modeling** | • Led a team of 4 analysts to create **composite Power BI models** (dataflows + aggregations) for executive dashboards used by 1,200+ users, achieving **sub‑second query times** on > 500 M rows. <br>• Instituted a **DAX & M best‑practice framework** and CI/CD pipeline (Azure DevOps) that cut model‑deployment time by 70 %. |
| **AI & advanced analytics adoption** | • Integrated **Microsoft Copilot Studio** with Power BI to enable natural‑language querying for clinical staff, increasing self‑service query volume by 45 %. <br>• Delivered an **anomaly‑detection model** (Python/scikit‑learn) for equipment‑downtime prediction, reducing unplanned downtime by 22 % after deployment via Azure ML. |
| **Platform & governance expertise** | • Implemented **data‑quality rules, lineage, and governance** in Azure Purview and AWS Glue Catalog, aligning with central data‑office standards. <br>• Built end‑to‑end **Power Apps/Power Automate** workflows that automated data‑mart refresh notifications and SLA reporting. |
| **Certifications & continuous learning** | • Microsoft Certified: **DP‑600 (Fabric Analytics Engineer)**, **PL‑300 (Power BI Data Analyst)**, **AI‑900 (Azure AI Fundamentals)**. <br>• Ongoing coursework in **AWS Bedrock & Generative AI** (Coursera, 2025). |

Beyond technical expertise, I thrive in **cross‑functional environments**, partnering with central AI teams, business stakeholders, and visualization analysts to translate strategic goals into trusted data products. My collaborative style and relentless focus on performance optimization have consistently delivered measurable business value—exactly the outcomes GE HealthCare seeks for its AVS segment.

I am eager to bring this blend of **data‑engineering rigor, Power BI mastery, and AI‑centric thinking** to GE HealthCare. Thank you for considering my application. I look forward to the opportunity to discuss how my background aligns with your vision for a data‑driven, AI‑enabled future.

Sincerely,

[Your Name]

Tip: Export the above as a PDF (e.g., using Word → “Save As PDF”) and keep the file name FirstLast_GE‑HealthCare_AIAnalyticsEngineer.pdf.


2️⃣ Resume (PDF‑style layout)

Formatting notes – Use a clean, sans‑serif font (Calibri 11 pt or similar). Keep margins at 0.75 in. Save as PDF with the same naming convention as the cover letter.

[Your Name] | [Phone] | [Email] | LinkedIn: linkedin.com/in/[your‑handle] | GitHub: github.com/[your‑handle]

PROFESSIONAL SUMMARY
Data Engineer with 8+ years of experience delivering scalable, AI‑ready data platforms in Azure and AWS. Expert in dimensional modeling, Power BI semantic architecture, and end‑to‑end analytics pipelines. Proven ability to partner with enterprise AI teams, enforce data governance, and accelerate self‑service analytics adoption for large, regulated organizations.

CORE COMPETENCIES
• Dimensional Modeling & Data Marts
• Power BI Semantic Modeling (Dataflows, Composite Models)
• Azure Synapse, Azure Data Factory, Azure Purview
• AWS Redshift, S3, Glue, Lake Formation
• SQL, DAX, Power Query (M)
• Copilot Studio, Fabric Data Agents
• Python (pandas, scikit‑learn) for AI/ML
• Power Apps / Power Automate
• Data Quality, Lineage & Governance
• CI/CD (Azure DevOps, GitHub Actions)
• Performance Tuning & Cost Optimization
• ERP/CRM Integration (Oracle EBS, Salesforce)

PROFESSIONAL EXPERIENCE
────────────────────────────────────────────────────────────────────────────────────────────────────
Senior Data Engineer – Analytics Platform
**XYZ Medical Devices, Boston, MA** (Remote) | Jan 2022 – Present
- Designed and built a **semantic data layer** (star schema) for a global imaging‑device portfolio (2 B+ rows) on Azure Synapse, achieving **99.9 % data accuracy** and full lineage in Azure Purview.
- Developed **SQL‑based ELT pipelines** (ADF + Azure Databricks) that reduced raw‑to‑analytics latency from 48 h to < 2 h, supporting real‑time operational dashboards.
- Led a team of 4 analysts to create **enterprise‑grade Power BI models** (dataflows, aggregations, composite models) serving 1,200+ users; query latency dropped from 8 s to < 1 s on > 500 M rows.
- Instituted a **DAX & M best‑practice framework** and CI/CD pipeline (Azure DevOps) that cut model‑deployment time by 70 % and enforced version control.
- Integrated **Microsoft Copilot Studio** to enable natural‑language querying, increasing self‑service query volume by 45 % within 3 months.
- Delivered an **anomaly‑detection model** (Python‑scikit‑learn) for equipment‑downtime prediction; reduced unplanned downtime by 22 % after production deployment via Azure ML.
- Implemented data‑quality rules, lineage, and governance in Azure Purview, aligning with corporate data‑office standards.
- Built **Power Apps/Power Automate** workflows that automated data‑mart refresh notifications and SLA reporting, reducing manual effort by 30 h/month.

────────────────────────────────────────────────────────────────────────────────────────────────────
Data Engineer – Cloud Analytics
**ABC Health Solutions, Chicago, IL** (Hybrid) | Jun 2017 – Dec 2021
- Migrated legacy on‑prem data warehouse to **AWS Redshift + S3 + Glue**, redesigning 30+ data marts as fully conformed dimensional models.
- Created **SQL‑based transformation pipelines** in AWS Glue (Python/Scala) that processed > 1 TB/day with a 99.5 % success rate.
- Developed **Power BI semantic models** (dataflows, composite models) for finance and clinical reporting; introduced aggregations that cut report refresh time from 12 min to < 2 min.
- Established **data‑quality framework** using AWS Deequ; reduced data‑issue tickets by 68 % year‑over‑year.
- Partnered with central AI team to prototype **predictive readmission models** (Python, TensorFlow) and delivered data feeds via Azure Data Factory for model training.
- Conducted workshops for 200+ business users on self‑service Power BI, increasing adoption from 15 % to 55 % of the user base.

────────────────────────────────────────────────────────────────────────────────────────────────────
Data Engineer – Business Intelligence
**HealthTech Corp., New York, NY** | Aug 2014 – May 2017
- Built and maintained **SQL Server‑based data marts** for clinical trial data, supporting > 500 daily analytical queries.
- Designed **Power BI dashboards** for executive leadership; introduced row‑level security and performance tuning that reduced query time by 80 %.
- Implemented **ETL processes** in SSIS and later migrated to Azure Data Factory, cutting pipeline maintenance effort by 40 %.

EDUCATION
────────────────────────────────────────────────────────────────────────────────────────────────────
Master of Science – Data Analytics
University of Illinois Urbana‑Champaign | 2014

Bachelor of Science – Computer Science
University of Texas at Austin | 2012

CERTIFICATIONS
────────────────────────────────────────────────────────────────────────────────────────────────────
- Microsoft Certified: **DP‑600 (Fabric Analytics Engineer)** – 2024  
- Microsoft Certified: **PL‑300 (Power BI Data Analyst)** – 2023  
- Microsoft Certified: **AI‑900 (Azure AI Fundamentals)** – 2022  
- AWS Certified **Data Analytics – Specialty** – 2023 (include only if you hold it)

TECHNICAL TOOLBOX
────────────────────────────────────────────────────────────────────────────────────────────────────
SQL, T‑SQL, PL/SQL | Azure Synapse, Azure Data Factory, Azure Purview, Azure Databricks | AWS Redshift, S3, Glue, Lake Formation | Power BI (Desktop, Service, Dataflows, Composite Models) | DAX, Power Query (M) | Microsoft Fabric (Data Agents), Copilot Studio | Python (pandas, scikit‑learn, TensorFlow) | Power Apps, Power Automate | Git, Azure DevOps, GitHub Actions | Tableau (basic) | ERP/CRM: Oracle EBS, Salesforce

How to use:

  1. Replace placeholders (company names, dates, metrics) with your actual experience.
  2. Keep the Core Competencies and Technical Toolbox sections verbatim—they contain the exact keywords GE listed.
  3. Export as PDF (File → Save As → PDF).

3️⃣ Interview‑Ready Cheat Sheet

| GE Focus Area | Sample Talking‑Point (STAR) | Key Metrics to Mention |
|---------------|-----------------------------|------------------------|
| **AI‑Ready Data Foundation** | **Situation:** Legacy data lake had no semantic layer, causing duplicate effort. <br>**Task:** Build a curated, AI‑ready data model for imaging devices. <br>**Action:** Designed a star schema in Azure Synapse, built ELT pipelines in ADF, captured lineage in Purview, added metadata tags for ML feature discovery. <br>**Result:** 99.9 % data accuracy, latency < 2 h, enabled 3 generative‑AI pilots within 6 months. | Data volume (2 B rows), latency reduction (48 h → < 2 h), accuracy % |
| **Analytics & Self‑Service** | **Situation:** Power BI reports refreshed every 12 min, frustrating executives. <br>**Task:** Reduce refresh time and improve model governance. <br>**Action:** Re‑architected the model using composite models + aggregations, moved heavy calculations to dataflows, set up CI/CD with Azure DevOps, defined DAX & M standards. <br>**Result:** Refresh time < 2 min, query latency < 1 s, governance compliance score 95 % (internal audit). | Refresh time, query latency, user count, compliance score |
| **AI & Advanced Analytics Adoption** | **Situation:** Need for predictive equipment‑downtime alerts. <br>**Task:** Deliver an AI solution integrated with Power BI. <br>**Action:** Built a Python‑based anomaly‑detection model, exposed predictions via an Azure ML endpoint, consumed via a Power BI dataflow, added natural‑language Q&A using Copilot Studio. <br>**Result:** 22 % reduction in unplanned downtime, 45 % increase in self‑service NLQ usage. | Downtime reduction %, NLQ usage increase, model accuracy (e.g., AUC 0.92) |
| **Governance & Collaboration** | **Situation:** Central data office required lineage for all assets. <br>**Task:** Implement lineage across Azure & AWS. <br>**Action:** Leveraged Azure Purview & AWS Glue Catalog, automated lineage capture via ADF/Glue jobs, built a dashboard for lineage visibility. <br>**Result:** 100 % of AVS data assets now have documented lineage; audit findings cleared. | % assets with lineage, audit outcome |
| **Power Platform Integration** | **Situation:** Manual data‑mart refresh notifications caused missed SLAs. <br>**Task:** Automate notifications. <br>**Action:** Created a Power Automate flow triggered by Azure Data Factory pipeline status, sent Teams alerts, logged to SharePoint. <br>**Result:** SLA compliance rose from 78 % to 98 %. | SLA compliance % improvement |
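If an interviewer drills into the anomaly‑detection story, it helps to have a concrete shape in mind. Below is a minimal, hypothetical sketch using scikit‑learn's `IsolationForest` on made‑up telemetry columns — the actual model, features, and thresholds should be whatever your own project used:

```python
# Hypothetical sketch of an equipment-downtime anomaly detector.
# Column names and the IsolationForest choice are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in telemetry: one row per device per hour.
telemetry = pd.DataFrame({
    "tube_temp_c": rng.normal(40, 2, 500),
    "helium_level_pct": rng.normal(95, 1, 500),
    "error_count": rng.poisson(0.2, 500),
})

# Fit on recent "healthy" history; contamination is the expected anomaly
# share and would be tuned against labeled downtime events in practice.
model = IsolationForest(contamination=0.02, random_state=0)
model.fit(telemetry)

# predict() returns -1 for anomalies and 1 for normal readings;
# in the STAR story these scores feed a Power BI dataflow for alerting.
telemetry["flag"] = model.predict(telemetry)
anomalies = telemetry[telemetry["flag"] == -1]
print(f"{len(anomalies)} of {len(telemetry)} readings flagged")
```

Being able to explain the `contamination` trade‑off (more alerts vs. more misses) in one sentence lands well in this kind of interview.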

Quick “Why GE?” Pitch (30‑sec):

“GE’s AVS segment sits at the intersection of cutting‑edge imaging hardware and AI‑driven clinical insight. My experience building AI‑ready data foundations and high‑performance Power BI models directly aligns with the roadmap you’ve outlined—especially the push toward generative‑AI and conversational analytics. I’m excited to bring my blend of cloud engineering, governance, and Power Platform expertise to help GE HealthCare deliver faster, more trustworthy insights to clinicians worldwide.”
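Similarly, if asked to make the data‑quality rules from the Governance & Collaboration story concrete, a tiny pandas sketch is easy to whiteboard. The rule names and columns below are illustrative assumptions, not GE's (or Deequ's) actual framework:

```python
# Illustrative data-quality rules for a hypothetical device-master extract.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return a pass/fail result per named rule."""
    return {
        "no_null_device_id": df["device_id"].notna().all(),
        "unique_device_id": df["device_id"].is_unique,
        "valid_install_date": (df["install_date"] <= pd.Timestamp.today()).all(),
    }

sample = pd.DataFrame({
    "device_id": ["MR-001", "CT-014", None],
    "install_date": pd.to_datetime(["2021-03-01", "2023-07-15", "2019-11-30"]),
})

results = run_quality_checks(sample)
failed = [rule for rule, ok in results.items() if not ok]
print("Failed rules:", failed)  # the null device_id trips the first rule
```

The interview point to make: each rule is named, versioned, and surfaced to the data office, which is what turns ad‑hoc checks into a governance framework.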


How to Use This Package

  1. Customize the cover letter and resume with your personal details and exact numbers.
  2. Save both documents as PDFs with the naming convention: FirstLast_GE-HealthCare_AIAnalyticsEngineer.pdf and FirstLast_GE-HealthCare_CoverLetter.pdf.
  3. Upload them through GE’s career portal before the April 13, 2026 deadline.
  4. Prepare for the interview using the cheat sheet—practice delivering each STAR story in 1‑2 minutes.
  5. Optional: Add a short “Portfolio” link (GitHub repo, Power BI sample workspace, or a PDF of a data model) to demonstrate your hands‑on work.

Good luck! 🎉 If you need any further tweaks—e.g., a deeper dive into a specific project, a LinkedIn “About” section, or a mock interview script—just let me know.

Requirements

  • 8+ years in data architecture/engineering delivering dimensional models, curated marts, and production pipelines in cloud environments (e.g., AWS Redshift/S3/Glue, Microsoft Fabric/Azure, ADF or equivalent)
  • Mastery of SQL, DAX, and Power Query (M); strong performance tuning and model optimization at scale
  • Deep Power BI experience: semantic modeling, composite models, aggregations, dataflows, and Fabric integration
  • Hands‑on experience with Copilot Studio, Fabric Data Agents, or AWS Bedrock
  • Familiarity with Power Apps / Power Automate for end‑to‑end digital workflows

Responsibilities

  • Design and implement curated, semantic data layers (dimensional models, business definitions) ensuring AI/ML readiness
  • Design scalable SQL‑based pipelines to transform raw and enterprise data into analytics‑ready assets
  • Implement data quality frameworks, lineage, and governance
  • Lead enterprise‑grade Power BI semantic model design (Dataflows, composite models, aggregations)
  • Define best practices for DAX, M, and workspace governance
  • Partner with visualization analysts and business leaders to translate requirements into trusted, performant models
  • Design AI‑ready semantic layers and metadata for natural‑language querying and intelligent workflows
  • Build and deploy AI solutions using enterprise‑approved platforms
  • Engage central AI/IT for architecture guidance and enterprise deployment of AI use cases

Benefits

Paid time off · Dental coverage · Health insurance

Skills

SQL · DAX · Power Query (M) · Power BI · Data modeling · Data pipelines · Data governance · AI/ML integration · Python (preferred) · Copilot Studio · Fabric Data Agents · AWS Bedrock · Power Apps · Power Automate
