Sr. Consultant, Data Engineer
IBM
About the role
**Introduction**

At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

**Your role and responsibilities**

We are looking for a Sr. Consultant, Data Engineer to join our growing team of experts. This position will work on the design and development of Snowflake Data Cloud solutions. The work includes data ingestion pipelines, data architecture, data governance, and security. You are an experienced builder of data pipelines and migrations who enjoys optimizing data systems and building them from the ground up. You will develop database architectures and data warehouses, and you will ensure that an optimal data delivery architecture is consistent throughout ongoing customer projects. In this role you will be leading technical teams. The right candidate will be excited by the prospect of working for a start-up company to support our customers' next generation of data initiatives.

As of April 2025, Hakkoda has been acquired by IBM and will be integrated into the IBM organization. Your recruitment process will be managed by IBM, and IBM will be the hiring entity. This role can be performed from anywhere in the US.

**Required technical and professional expertise**

- Bachelor's degree in engineering, computer science, or an equivalent area
- Expertise in evaluating, selecting, and integrating ingestion technologies to solve complex data challenges
- Leadership in architectural decisions for high-throughput data ingestion frameworks, including real-time data processing and analytics
- Mentorship of junior engineers in best practices for data ingestion, performance tuning, and troubleshooting
- 5+ years in related technical roles: data management, database development, ETL, data warehouses, and pipelines
- Experience designing and developing data warehouses (Teradata, Oracle Exadata, Netezza, SQL Server, Spark)
- Experience building ETL / ELT ingestion pipelines with tools like DataStage, Informatica, Matillion
- SQL scripting
- Cloud experience on AWS (Azure and GCP are nice to have as well)
- Python scripting and Scala are required
- Ability to prepare reports and present to internal and customer stakeholders
- Track record of sound problem-solving skills and an action-oriented mindset
- Strong interpersonal skills, including assertiveness and the ability to build strong client relationships
- Ability to work in Agile teams
- Experience hiring, developing, and managing a technical team

**Preferred technical and professional experience**

- Advanced Snowflake Platform Knowledge: Experience with advanced Snowflake features, including data sharing, data pipelines, and data security. Ability to design and implement complex data and AI use cases on Snowflake platforms.
- Cloud Architecture Expertise: Experience with designing and implementing scalable and secure cloud architectures for data and AI applications. Knowledge of cloud migration, deployment, and management best practices.
- Data Engineering Best Practices: Experience with implementing data engineering best practices, including data modeling, data warehousing, and data governance. Ability to optimize data and AI solutions for performance and scalability.

IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics.
IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.