Data Engineer (Trovo Health)
About Trovo
Trovo Health is building the AI-powered care team platform for infinitely scalable clinical capacity. We radically increase access and improve quality of care by combining AI agents with clinical experts to take on high-impact clinical operations and care management activities for healthcare organizations.
We’re growing rapidly and are backed by Oak HC/FT, investors in leading healthcare and technology companies such as Ambience Healthcare, Devoted Health, VillageMD, CareBridge, Main Street Health, Maven Clinic, and more.
About the role
We are looking for a highly driven Data Engineer to help design, build, and scale the data foundations that power Trovo Health’s AI agents, including owning critical client integrations. You will work closely with the product, engineering, and AI/ML teams to ensure Trovo’s data architecture is scalable, reliable, secure, and actionable. This is a broad-scope, high-impact role for a technical builder who thrives in ambiguity and wants meaningful ownership over modern data infrastructure in an early-stage healthcare startup.
Responsibilities
- Build core data infrastructure: Design, implement, and maintain scalable data pipelines and architectures across ingestion, transformation, and storage layers.
- Own client integrations and data ingestion: Structure and maintain durable integrations and ETL workflows to reliably onboard and normalize client data.
- Design robust data models: Build and maintain clean, well-documented data models that support reporting, analytics, and downstream automation.
- Enable product and analytics outcomes: Ensure data flows accurately from source systems to products, analytics, and AI/ML use cases.
- Drive data quality and reliability: Implement monitoring, testing, and observability to ensure high data quality and system uptime.
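In miniature, the ingestion, normalization, and data-quality work described above might look like the following sketch. This is purely illustrative (the record shapes, field names, and quality rule are hypothetical, not Trovo's actual stack):

```python
from dataclasses import dataclass
from datetime import date, datetime
from typing import Optional

# Hypothetical raw records as they might arrive from two client feeds,
# with inconsistent whitespace and date formats.
RAW_ROWS = [
    {"patient_id": " 001 ", "dob": "1984-03-02", "visit_date": "02/14/2024"},
    {"patient_id": "002", "dob": "", "visit_date": "2024-02-15"},
]

@dataclass
class Visit:
    patient_id: str
    dob: Optional[date]
    visit_date: date

def parse_date(value: str) -> Optional[date]:
    """Normalize the date formats seen across client feeds."""
    value = value.strip()
    if not value:
        return None
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            return datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value!r}")

def transform(rows: list) -> list:
    """Ingest -> normalize -> validate, dropping rows that fail quality checks."""
    visits = []
    for row in rows:
        visit_date = parse_date(row["visit_date"])
        if visit_date is None:
            continue  # example data-quality rule: every visit needs a date
        visits.append(Visit(
            patient_id=row["patient_id"].strip(),
            dob=parse_date(row["dob"]),
            visit_date=visit_date,
        ))
    return visits

visits = transform(RAW_ROWS)
```

In practice this kind of normalization would live inside orchestrated pipeline steps (e.g., an Airflow task feeding dbt models) rather than a single script, but the shape of the problem is the same: messy inputs in, typed and validated records out.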
Requirements
- Strong technical foundation: 4-6+ years of experience in Data Engineering or Software Engineering with a data focus; strong proficiency in Python and SQL.
- Modern data stack experience: Hands‑on experience with cloud data platforms (AWS and/or GCP), data warehouses (Snowflake, Redshift, or BigQuery), orchestration tools (Airflow or similar), and transformation tools (dbt preferred).
- Healthcare data chops: Familiarity integrating with healthcare data standards and systems (e.g., FHIR/HL7, EHR APIs), including working with messy clinical/claims data.
- Data modeling expertise: Experience designing data models for different sources, workflows, and business processes.
- Clear communication: Ability to explain complex technical concepts to both technical and non‑technical stakeholders.
- NYC‑based: You are based in New York and excited to be in‑office 3+ days per week.
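To make the healthcare-data requirement concrete: work with FHIR often means flattening nested resources into warehouse-friendly rows. The following is a minimal sketch against a trimmed FHIR R4 Patient resource (the resource content and output column names here are illustrative assumptions):

```python
import json

# A tiny, hypothetical FHIR R4 Patient resource (fields trimmed for brevity).
FHIR_PATIENT = json.loads("""
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "gender": "male",
  "birthDate": "1974-12-25"
}
""")

def flatten_patient(resource: dict) -> dict:
    """Flatten a FHIR Patient resource into a single tabular row."""
    if resource.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    # FHIR allows multiple names per patient; take the first if present.
    name = (resource.get("name") or [{}])[0]
    return {
        "patient_id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "gender": resource.get("gender"),
        "birth_date": resource.get("birthDate"),
    }

row = flatten_patient(FHIR_PATIENT)
```

Real clinical and claims feeds are far messier than this (repeated fields, extensions, partial dates), which is why the role emphasizes familiarity with those standards rather than any one parsing library.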
Compensation
Target compensation for this role is $200k-$250k, plus equity and a generous benefits package.
Equal Opportunity
Trovo Health is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Employment Type
Full-time