AWS Developer With Python Dev Experience
Jobs via Dice
Reston · Hybrid · Full-time · Senior · Posted today
About the role
Location: Reston (hybrid model – 2‑3 days per week onsite)
Employment Type: W2 with benefits (no C2C)
Responsibilities
- Build a big data pipeline in Python for data processing and analysis.
- Work with Hadoop EMR, Spark, PySpark, Hive, AWS, Terraform (end‑to‑end automation), GitHub, and CI/CD.
- Collaborate within a team of developers to assist with development.
- Advise on best possible solutions and tools to use.
Nice to Have
- AWS AI background (specific tools not specified).
- Vulnerability management experience (Tenable, Wiz, Kenna).
- Financial / mortgage industry background.
Qualifications
- 10+ years hands‑on Python development experience for big data applications.
- Extensive experience implementing scalable data processing pipelines using Hadoop/EMR, Spark, and Hive.
- Strong development experience with AWS services such as Lambda, EventBridge, Step Functions, Redshift, S3, and Glue.
- Hands‑on experience making API calls, writing SQL queries, and performing data manipulation/transformation, analysis, and visualization.
- Strong data analysis/mining skills, with hands‑on experience analyzing large datasets and extracting meaningful insights.
- Solid understanding and implementation experience with CI/CD pipelines; familiar with GitLab/Terraform.
- Critical thinking and strong analytical skills to drive solutions and alternative plans.
- Ability to collaborate effectively across a matrixed organization and form positive partnerships.
- Strong analytical and problem‑solving skills.
Skills Required
- 10+ years Python development for big data applications.
- 5+ years development experience with AWS platforms/services (EMR, Lambda, EventBridge, Step Functions, Redshift, S3, Glue).
- 5+ years experience making API calls, writing SQL queries, working with Spark, and handling data manipulation/transformation, analysis, and visualization.
- Hands‑on experience with CI/CD pipeline tools such as GitLab/Terraform.
Benefits
- Group medical, dental, vision, life, short‑term and long‑term disability insurance.
- 21 days accrued paid time off.
- 401(k) plan.
- Tuition reimbursement.
- Performance bonuses.
- Paid overtime and other W2 consultant benefits.
Requirements
- Python (Big Data Pipeline)
- AWS
- Hadoop, Spark, Hive
- EMR
Skills
AWS, CI/CD, EMR, Git, GitHub, GitLab, Hadoop, Hive, Lambda, PySpark, Python, Redshift, S3, Spark, Step Functions, Terraform