Vice President, Responsible AI Data Scientist
Accordion India
About the role
We are seeking a Vice President, Responsible AI Data Scientist who combines technical expertise in Data Science and Machine Learning with a strong advisory lens to help drive trustworthy AI solution development across Accordion's practices. In this role you will serve as an internal subject matter expert, embedding responsible AI principles and controls across the AI development lifecycle to drive validated, explainable, compliant, and auditable AI solutions. You will translate governance requirements into actionable and testable playbooks that teams across the business can apply consistently. This position is critical to co‑designing AI solutions with embedded Responsible AI practices aligned to Accordion's AI Governance principles. The role is based in New York City or Chicago, hybrid with two remote days per week, and is not eligible for immigration sponsorship.
What You'll Do
Governance & Technical Standards
- Translate AI governance, regulatory, and compliance requirements into testable operational Responsible AI playbooks with quantifiable standards for transparency, explainability, accuracy, fairness, privacy, and accountability — tailored to Accordion's AI solutions and business model.
- Develop frameworks and guidance supporting systematic evaluation, testing, and risk mitigation that enable auditability, lineage, and transparent decision records.
- Build evaluation frameworks that guide data scientists and engineering teams in developing, testing, and monitoring AI systems; define mitigation strategies, detection methodologies, and acceptability thresholds.
- Develop and implement testing principles for bias detection, model robustness evaluation, privacy preservation, and AI/privacy regulation alignment across different AI architectures and data types.
- Establish benchmarks and governance processes aligned with industry standards (e.g., NIST AI RMF, EU AI Act, ISO 42001) that enable auditable AI transparency and explainability against applicable regulations and Accordion standards.
Testing & Validation
- Collaborate with data science and engineering teams across AI solution practices to embed responsible AI controls throughout the development lifecycle.
- Develop repeatable testing and validation playbooks and evaluation frameworks for use across practice areas.
- Pressure test AI solutions for accuracy, reliability, and trustworthiness, including output anomaly detection, logging, and observability mechanisms.
- Develop methods to produce model cards and support audit trails for outputs across Accordion's business practice pillars and client solutions.
Cross‑Functional Advisory
- Act as the firm's internal subject matter expert on Responsible AI topics including algorithmic fairness, transparency, privacy preservation, safety protocols, and risk mitigation strategies across the AI lifecycle.
- Partner with Legal/Privacy/Risk, Technology, and D&A teams to translate regulatory requirements into actionable and measurable controls and governance structures.
- Collaborate with teams developing AI solutions on risk‑informed design decisions — advising on model selection, testing approaches, and appropriate guardrails.
- Establish Responsible AI frameworks and operational playbooks that demonstrate alignment with evolving AI regulations (e.g., EU AI Act, GDPR, CCPA) and sector‑specific requirements.
- Monitor emerging AI regulations, industry standards, and responsible AI methodologies — translating insights into actionable internal design guidance.
- Where applicable, support higher‑risk or higher‑profile engagements and AI product development requiring rigorous testing and evaluation.
You Have
- 5+ years of hands‑on experience in data and ML, including developing, deploying, and evaluating solutions using statistics, machine learning, NLP, and data visualization.
- Bachelor’s degree in Computer Science, Data Science, Information Systems, Mathematics, Engineering, Statistics, or a related quantitative field.
- Proven technical experience in regulated or complex industries with demonstrated collaboration across Legal/Risk/Compliance, Security, and AI/ML engineering teams.
- Track record of operationalizing responsible AI through technical development frameworks, testing protocols, and quantitative evaluation that embed fairness, transparency, and accountability into production AI systems.
- Strong programming proficiency in Python, R, or similar languages, with experience in ML frameworks and responsible AI tooling.
- Understanding of AI and privacy regulations and standards — including EU AI Act, NIST AI RMF, GDPR, CCPA, and ISO 42001 — and the ability to translate regulatory mandates into technical controls.
- Exceptional communication skills with the ability to convey complex technical and regulatory considerations to non‑technical stakeholders, including leadership and cross‑functional teams.
Preferred Qualifications
- Advanced degree (Master’s) in Computer Science, Data Science, Statistics, Information Systems, Engineering, or a related quantitative discipline.
- Consulting experience at a leading management or professional services firm.
- Direct experience working with Legal/Privacy/Compliance teams on AI, data governance, or emerging technology matters.
- Familiarity with responsible AI tooling, red‑team platforms, or model evaluation frameworks.
You Are
- Energized by ensuring AI systems that solve complex business challenges are built responsibly — with fairness, accuracy, transparency, explainability, and privacy by design embedded from the start.
- A cross‑functional collaborator who navigates matrixed organizational dynamics, working seamlessly with Legal/Risk/Compliance, Technology/Engineering, and business teams to establish AI principles, governance playbooks, and technical roadmaps.
- A clear communicator who can translate AI governance concepts and technical risk findings into language that resonates with non‑technical stakeholders and leadership.
- A proactive problem‑solver who thrives on staying ahead of AI governance expectations and emerging regulations — and can turn frameworks into practical controls.
- An adaptive contributor who embraces the pace of consulting, working across diverse sectors and practice areas.
Compensation & Benefits
- Annual salary range: $160,000 – $175,000 USD plus benefits.
- Eligible for bonuses based on individual and company performance.
- Actual compensation packages are determined by evaluating factors such as geographic location, skill set, years and depth of experience, education, certifications, cost of labor, and internal equity.
Equal Opportunity Employer
Accordion is an Equal Opportunity Employer. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.
Please note: Accordion does not accept unsolicited resumes from third‑party recruiters unless the recruiter has been engaged for a specified opening, and all such engagements must align with our inclusive diversity values.