
Data Engineer - AWS / Databricks

Acuity, Inc.

Reston · On-site · Full-time

Overview

Acuity, Inc. is seeking a highly skilled Data Engineer to join our Engineering Team, helping drive the design and delivery of AWS cloud-scale data platforms for federal clients. This role requires hands-on experience with Spark, Delta Lake, and distributed data pipelines on Databricks. The ideal candidate brings both engineering depth and strategic insight to enterprise data modernization.

Are you ready to use your expertise in the areas of IT Modernization, Data Enablement, and Hyperautomation to make a real difference? Join Acuity, Inc., a technology consulting firm that supports federal agencies. We combine industry partnerships and long-term federal experience with innovative technical leadership to support our customers’ critical missions.

Responsibilities

  • Build and maintain scalable PySpark-based data pipelines in Databricks notebooks to support ingestion, transformation, and enrichment of structured and semi-structured data.
  • Design and implement Delta Lake tables optimized for ACID compliance, partition pruning, schema enforcement, and query performance across large datasets.
  • Develop ETL and ELT workflows that integrate multiple source systems into a centralized, query-optimized data warehouse architecture.
  • Leverage Spark SQL and DataFrame APIs to implement business rules, dimensional joins, and aggregation logic aligned to warehouse modeling best practices.
  • Collaborate with data architects and engineers to implement cloud-native data solutions on AWS using S3, Glue, RDS, and IAM for secure, scalable storage and access control.
  • Optimize pipeline performance through intelligent partitioning, caching, broadcast joins, and adaptive query execution (AQE) tuning.
  • Deploy and version data engineering assets using Git-integrated development workflows and automate deployment with CI/CD tools such as GitLab or Jenkins.
  • Monitor pipeline health, job execution, and cluster utilization using native Databricks tools and AWS CloudWatch, identifying bottlenecks and optimizing cost-performance tradeoffs.
  • Conduct technical discovery and mapping of legacy source systems, identifying required transformations and designing end-to-end data flows.
  • Implement governance practices including metadata tagging, data quality validation, audit logging, and lineage tracking using platform-native features and custom logic.
  • Support ad hoc data access requests, develop reusable data assets, and maintain shared notebooks that meet operational reporting and analytics needs across teams.

Qualifications

  • 2+ years of experience in data engineering and Agile analytics
  • 2+ years of experience creating software for retrieving, parsing and processing structured and unstructured data
  • 1 to 2 years of experience building scalable ETL and ELT workflows for reporting and analytics
  • 1+ years of experience building enterprise data engineering solutions in the cloud; experience with cloud-native technologies from AWS and Databricks preferred
  • Experience with data quality, validation frameworks, and storage optimization strategies
  • BA or BS degree

Clearance Requirement

  • Must be a US Citizen with the ability to obtain and maintain a US government suitability determination

About Acuity

Acuity, Inc. is a leading management and technology consulting firm that specializes in serving the federal government. Our innovative, collaborative and rewarding work environment has earned repeat honors from the Washington Business Journal’s Best Places to Work and SmartCEO Corporate Culture awards.

Why Choose Acuity?

  • Innovative Excellence: Recognized by The Washington Post's Top Workplaces and a nine-time recipient of the Washington Business Journal's Best Places to Work.
  • Competitive Compensation: We value our employees and offer highly competitive compensation and benefits packages.
  • Personal Growth: Tailored training, mentorship, and cutting‑edge resources to help you thrive.
  • Recognition and Visibility: Opportunities to stand out with strong customer feedback and growth channels.
  • Collaborative Culture: A culture where teamwork and every voice matter.
  • Diversity and Inclusion: A workforce that celebrates diversity and treats every employee with dignity and respect.

Join Acuity, where your talents are valued, your growth is nurtured, and your impact is amplified. Together, let's shape the future of digital strategy and technology consulting.

Equal Employment Opportunity

We are an equal employment opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, national origin, disability status, protected veteran status or any other characteristic protected by law.

Benefits

  • Health insurance
  • Dental insurance
  • Vision insurance

Skills

AWS · Databricks · Delta Lake · ETL · Git · Glue · IAM · Jenkins · Lambda · RDS · S3 · Spark · Spark SQL
