Principal Consultant – Data Engineer
Genpact
About the role
About Genpact
Ready to shape the future of work?
At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry‑first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large‑scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges.
If you thrive in a fast‑moving, tech‑driven environment, love solving real‑world problems, and want to be part of a team that’s shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting‑edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Role
Principal Consultant – Data Engineer
This role supports business enablement, which includes understanding business trends and providing data-driven solutions at scale. The hire will be responsible for developing, expanding, and optimizing our data pipeline architecture, as well as optimizing data flow and collaboration with cross-functional teams.
The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up, whether on-premises or in the cloud (AWS/Azure). The data engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure an optimal data delivery architecture.
A minimum of 8 years of core data engineering experience in Life Sciences, Healthcare, or CPG.
Work location: Bangalore
Responsibilities
- Several years of professional experience creating and maintaining optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
- Experience working on data warehousing systems, and the ability to contribute to implementing end-to-end, loosely coupled/decoupled technology solutions for data ingestion and processing, data storage, data access, and integration with business-user-centric analytics/business intelligence frameworks.
- Advanced working SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Design, develop, and maintain scalable and resilient ETL/ELT pipelines for handling large volumes of complex data.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/Azure big data toolsets.
- Architect and implement data governance and security for data platforms in the cloud.
- Cloud certification is an advantage but not mandatory for this role.
- Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with data pipeline and workflow management tools such as Airflow or Luigi (see the illustrative sketch after this list).
- Experience with AWS Cloud services or Azure cloud services.
- Experience with programming or scripting languages such as Python or Java.
- Familiarity with stream-processing systems such as Spark Streaming.
- Strong project management and organizational skills.
- Ability to comprehend business needs, convert them into a BRD and TRD (business and technical requirement documents), develop an implementation roadmap, and execute it on time.
- Effectively respond to requests for ad‑hoc analyses.
- Good verbal and written communication skills.
- Take ownership of assigned tasks without the need for supervisory follow-up.
- A proactive planner who can work independently to manage their own responsibilities.
- Personal drive and positive work ethic to deliver results within tight deadlines and in demanding situations.
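To make the pipeline responsibilities above concrete, here is a minimal sketch of a daily ETL job expressed as an Airflow DAG, one of the workflow tools this role names. It is purely illustrative: the DAG id, task ids, and print-based task bodies are hypothetical placeholders rather than part of this role's actual stack, and it assumes Airflow 2.4+ for the `schedule` argument.

```python
# Minimal, illustrative Airflow DAG for a daily ETL job.
# All identifiers (dag_id, task ids, function bodies) are hypothetical
# placeholders, not taken from this job description.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull one day's records from a source system.
    print(f"extracting data for {context['ds']}")


def transform(**context):
    # Placeholder: clean and conform the extracted records.
    print("transforming data")


def load(**context):
    # Placeholder: load conformed records into the warehouse.
    print("loading data")


with DAG(
    dag_id="daily_etl_example",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # run once per day (Airflow 2.4+)
    catchup=False,                   # skip backfilling past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```

In a real deployment, each placeholder task body would call out to the source systems, transformation logic, and warehouse loaders described in the responsibilities above.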
Qualifications
Minimum qualifications
- Bachelor’s or Master’s degree in Engineering (BE/B.Tech), BCA, MCA, or BSc/MSc.
- Master’s degree in a science discipline or a related field.
Why join Genpact?
- Be a transformation leader – Work at the cutting edge of AI and digital innovation.