
Senior Software Engineer

AppFolio

Washington (flexible) · Full-time · Senior · $138k – $173k/yr · Posted today

About the role

We are hiring a Senior Software Engineer on our Platform Data Query team to operate, maintain, scale, and enhance AppFolio's data streaming and data access systems. You must have experience with modern data lake architectures, as you will work directly with Iceberg data lakes, Trino, and real-time streaming using Apache Flink and Kafka. Our data powers customer-facing dashboards, reports, BI integrations, and AI-powered agents.

AppFolio supports a significant share of the U.S. real estate market, and our data unlocks insights for customers and serves as the basis for new tools and capabilities that deliver value. The Platform Data Query system provides uniform, robust, and flexible access to data across AppFolio, powering a variety of applications that enhance the lives and businesses of property managers. This role is pivotal to the ongoing operation, scaling, and enhancement of that system, ultimately unlocking tremendous potential for the real estate industry.

Responsibilities

  • Build a deep understanding of our data structure and systems to maintain, scale, and extend the existing architecture.
  • Maintain, optimize, and scale our robust data access layer on top of our Iceberg data lake, owning under‑the‑hood optimizations such as data compaction for performance and storage efficiency.
  • Design, build, and operate a robust API on top of our data tech stack, ensuring secure data access and seamless integration for downstream applications and platform services.
  • Collaborate with Product to understand operational needs, troubleshoot issues, and design technical add‑ons or enhancements to existing solutions.
  • Work in an agile fashion to turn scaling challenges and feature enhancements into thinly sliced deliverables and execute quickly while limiting work in progress.
  • Hold a high bar of engineering excellence, adopt best practices, provide and receive in‑depth code reviews, and participate in healthy debate. Evangelize expertise among teammates and the organization.
  • Ensure data flowing through pipelines is tested with appropriate unit and integration tests to guarantee correct data reaches customers.
  • Deliver well‑instrumented solutions; queries and dashboards are easily accessible and regularly used to drive decisions and measure progress.
  • Participate enthusiastically in a high‑performing, empowered team with mutual trust and respect, taking ownership of problem spaces, learning from failures, and celebrating successes.
  • Operate, optimize, and scale systems responsible for high‑concurrency access to large data sets, requiring hands‑on execution and deep knowledge of data access and query optimization with distributed engines like Trino and AWS Athena. Identify gaps, deficiencies, and inefficiencies, and propose and implement solutions.

Requirements

  • Experience operating, scaling, and enhancing data pipelines at a company with large data sets using Apache Flink and Kafka, especially with multi‑tenant data in an agile SaaS environment.
  • Foundational experience operating, tuning, and maintaining Iceberg data lakes, including deep knowledge of table maintenance and data compaction strategies.
  • Experience working on platform teams or maintaining platform services whose customers are other internal teams.
  • Proven experience working across all levels of the development stack.
  • Proficiency with object‑oriented languages (Python, Ruby, JavaScript, Java, C#, etc.).
  • Strong SQL proficiency and deep knowledge of data access/query optimization, able to optimize performance and cost efficiency at scale using distributed engines like Trino and AWS Athena.
  • Familiarity with core architecture principles of at‑scale systems.
  • Strong familiarity with public cloud infrastructure, particularly AWS (including native tools like AWS Glue, AWS S3, and AWS Athena).
  • Strong familiarity with Agile software development processes: Scrum or Kanban.
  • Creativity and proactivity in solving complex scaling and operational problems; eagerness to learn new technology while continuing to leverage and optimize existing technology when it gets the job done.
  • Commitment to long‑term maintainability of the codebase; advocate for refactoring and code cleanliness, identifying and resolving code‑smells through sensible refactoring.

Additional Skills and Knowledge

  • 5+ years of experience working in software engineering teams.
  • Comfortable working with remote team members.
  • Ability to think pragmatically and balance business outcomes with technical goals.
  • Ability to establish strong working relationships with peers across other platform development teams.

Compensation & Benefits

  • Base salary range: $138,400 – $173,000 (actual salary determined by skills, education, experience, etc.).
  • Base pay is one component of a Total Rewards package; additional benefits and bonuses may apply based on role and employment type.
  • Regular full-time employees are eligible for benefits.

Benefits

  • Health insurance

Skills

AWS Athena · AWS Glue · AWS S3 · Apache Flink · Iceberg · Java · JavaScript · Kafka · Python · Ruby · SQL · Trino
