
Senior Big Data Software Engineer

Fashionunited

Washington · On-site · Full-time · Senior · 2d ago

About the role

As a Senior Software Engineer, you will collaborate closely with design, product, and engineering experts to tackle real-world challenges and deliver innovative solutions that elevate Kohl's retail offerings.

What You'll Do

  • Lead the development of high-quality applications that are robust, observable and measurable using extreme programming (XP) practices and a user-centric approach.
  • Participate in the entire application lifecycle in collaboration with designers, product managers, and other engineers on the product team.
  • Leverage critical thinking, experimentation, data, and industry best practices to implement desired business outcomes.
  • Facilitate group discussions and team ceremonies, and develop a shared context.
  • Give and receive feedback that's empathetic, actionable and specific.
  • Practice emergent architecture with sane defaults and build software that is easy to use and easy to modify.
  • Establish and lead product engineering and software standards.
  • Ideate a new product from a user perspective, starting with one or more problem spaces and ending with a stack-ranked list of feasible solutions to test.
  • Research and stay up to date on tech market trends and practices.
  • Lead technical initiatives not only on the team but also across the department.
  • Additional tasks may be assigned.

Senior Big Data Software Engineer

  • Develop, automate, and maintain batch and streaming ETL pipelines using Apache Airflow, Apache Spark, Python, and Scala (see the Airflow sketch after this list).
  • Build and manage cloud-based data ecosystems on GCP (BigQuery, Bigtable, Dataproc, Pub/Sub, Cloud Storage, IAM, VPC).
  • Design and optimize SQL and NoSQL data models for data lakes and warehouses (BigQuery, MongoDB, Snowflake).
  • Write complex SQL queries for advanced data transformation, aggregation, and analytics optimization within BigQuery or equivalent platforms (see the BigQuery sketch after this list).
  • Apply modern Test-Driven Development (TDD) methodologies for big data pipelines, ensuring test automation across Airflow workflows, Spark jobs, and transformation logic (see the test-first sketch after this list).
  • Apply data mesh and data-as-a-product principles to enable reusable and domain-driven datasets.
  • Implement real-time ingestion with Kafka Connect and process streaming data using Spark Streaming, Apache Flink, or similar technologies (see the streaming sketch after this list).
  • Optimize data performance, scalability, and cost efficiency across GCP components.
  • Ensure compliant handling of PCI and PII data under standards such as GDPR, PCI DSS, SOX, and CCPA.
  • Integrate GenAI tools such as OpenAI, Gemini, and Anthropic LLMs for intelligent data quality and analytics enhancement.
  • Collaborate with stakeholders, data scientists, and full stack engineers to deliver trusted, documented, and reusable data products.
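
For illustration, a batch pipeline of the kind described above might be orchestrated by a minimal Airflow DAG that hands the heavy lifting to Spark. This is only a sketch, not the team's actual code: the DAG id, schedule, script path, and connection id are all hypothetical, and it assumes Airflow 2.4+ with the apache-spark provider installed.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="daily_sales_etl",        # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # one batch run per day
        catchup=False,
    ) as dag:
        # Submit a Spark batch job that turns raw events into a curated table.
        transform = SparkSubmitOperator(
            task_id="transform_sales",
            application="jobs/transform_sales.py",  # hypothetical PySpark script
            conn_id="spark_default",
        )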
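
Likewise, "complex SQL within BigQuery" can be submitted through the official Python client. The dataset, table, and column names below are invented for illustration; the client picks up application-default credentials.

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    # Hypothetical orders table; computes daily revenue per store.
    sql = """
        SELECT store_id,
               DATE(order_ts) AS order_date,
               SUM(amount)    AS daily_revenue
        FROM `analytics.orders`
        GROUP BY store_id, order_date
    """
    for row in client.query(sql).result():
        print(row.store_id, row.order_date, row.daily_revenue)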
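
The TDD bullet means the test for a piece of transformation logic is written before the logic itself. A minimal pytest-style sketch, with a hypothetical transforms module:

    # test_transforms.py -- written first, failing until the code exists
    from transforms import normalize_amount

    def test_normalize_amount_converts_cents_to_dollars():
        assert normalize_amount({"amount_cents": 1250}) == {"amount": 12.5}


    # transforms.py -- the minimal implementation that makes the test pass
    def normalize_amount(record: dict) -> dict:
        return {"amount": record["amount_cents"] / 100}

The same pattern scales up: Spark jobs are commonly tested against a local SparkSession, and Airflow DAGs with DagBag integrity tests.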
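
And on the streaming side, one concrete instance is Spark's Structured Streaming API reading a Kafka topic. The broker address and topic name are hypothetical, and the spark-sql-kafka package must be on the classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("orders_stream").getOrCreate()

    # Read a Kafka topic as an unbounded DataFrame.
    orders = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "orders")                     # hypothetical topic
        .load()
    )

    # Kafka delivers raw bytes; cast the payload before parsing downstream.
    parsed = orders.select(col("value").cast("string").alias("payload"))

    # Console sink for the sketch; a real pipeline would write to BigQuery or GCS.
    query = parsed.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()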

What Skills You Have

Required

  • 4+ years of experience in software development
  • Understanding of application design patterns, event-driven architecture, database schemas, and testing strategies
  • In-depth knowledge of and experience with continuous integration, continuous deployment, and test-driven development

Preferred

  • Bachelor's degree or equivalent in MIS, Computer Science, or a related field
  • Experience with large-scale application troubleshooting and performance tuning
  • Exposure to major cloud platforms (GCP, AWS, or Azure)
  • Familiarity and experience with XP (Extreme Programming)

Skills

Apache Airflow · Apache Spark · BigQuery · Cloud Storage · Dataproc · Flink · GCP · Gemini · GenAI · IAM · Kafka Connect · LLMs · MongoDB · OpenAI · PCI DSS · Python · Pub/Sub · Scala · Snowflake · Spark Streaming · SQL · TDD · VPC · XP
