- Design, Development, Testing, and Deployment: Drive the development of scalable Data & AI warehouse applications by leveraging software engineering best practices such as automation, version control, and CI/CD. Implement comprehensive testing strategies to ensure reliability and optimal performance. Manage deployment processes end-to-end, effectively addressing configuration, environment, and security concerns.
- Engineering and Analytics: Transform Data Warehouse and AI use case requirements into robust data models and efficient pipelines, ensuring data integrity by applying statistical quality controls and advanced AI methodologies.
- API & Microservice Development: Design and build secure, scalable APIs and microservices for seamless data integration across warehouse platforms, with attention to usability, strong security, and adherence to best practices (see the API sketch after this list).
- Platform Scalability & Optimization: Assess and select the most suitable technologies for cloud and on-premises data warehouse deployments, implementing strategies to ensure scalability, robust performance monitoring, and cost-effective operations.
- Lead: Lead the execution of complex data engineering and AI projects to solve critical business problems and deliver impactful results.
- Technologies: Leverage deep expertise in Data & AI technologies (such as Spark, Kafka, Databricks, and Snowflake), programming (including Java, Scala, Python, and SQL), API integration patterns (like HTTP/REST and GraphQL), and leading cloud platforms (Azure, AWS, GCP) to design and deliver data warehousing solutions (see the streaming sketch after this list).
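To make the API & Microservice Development responsibility concrete, here is a minimal sketch of a token-protected data-access endpoint. It assumes FastAPI as the framework; the `/datasets/{name}` route, the token check, and the empty result are all illustrative placeholders, not this role's actual service.

```python
# A minimal sketch of a secure data-access microservice, assuming FastAPI.
# The route, token check, and response shape are hypothetical examples.
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
auth_scheme = HTTPBearer()

def verify_token(
    credentials: HTTPAuthorizationCredentials = Depends(auth_scheme),
) -> str:
    # Placeholder check: a real service would validate the bearer token
    # against an identity provider rather than a hard-coded value.
    if credentials.credentials != "expected-token":
        raise HTTPException(status_code=401, detail="Invalid token")
    return credentials.credentials

@app.get("/datasets/{name}")
def read_dataset(name: str, token: str = Depends(verify_token)) -> dict:
    # Placeholder body: in practice this would query the warehouse
    # (e.g., Snowflake or Databricks) and return the requested rows.
    return {"dataset": name, "rows": []}
```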
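The Technologies bullet pairs Spark with Kafka; the sketch below shows one representative combination, a PySpark Structured Streaming job reading a hypothetical Kafka topic named "events" from a local broker. It assumes the spark-sql-kafka connector package is on the classpath and uses the console sink purely for demonstration.

```python
# A minimal sketch, assuming a Kafka broker at localhost:9092, a
# hypothetical topic "events", and the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Read the Kafka topic as a streaming DataFrame; key/value arrive as bytes,
# so cast the value to a string for downstream parsing.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select(col("value").cast("string").alias("payload"))
)

# Console sink for demonstration only; a production pipeline would land
# the data in a governed table (e.g., Delta on Databricks).
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```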
Shift timing (if any): 12:30 PM to 9:30 PM IST
Location / Additional Location (if any): Bangalore, Hyderabad
Overall Experience:
- Typically requires a minimum of 15 years of progressive experience in data engineering, data architecture, or related fields.
- Demonstrated experience leading complex data projects, managing teams, and delivering end-to-end data solutions in large or matrixed organizations is highly valued.
- Experience with cloud data platforms, big data technologies, and implementing best practices in data governance and DevOps is strongly preferred.
Primary / Mandatory skills:
- Delivery: Proven experience in managing and delivering complex data engineering and AI solutions for major business challenges.
- Data Architecture & Modeling: Expertise in designing scalable, high-performance data architectures (e.g., data warehouses, data lakes, data marts) and creating robust data models.
- ETL/ELT Development: Advanced skills in building, optimizing, and maintaining data pipelines using modern ETL/ELT tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
- Cloud Platforms: Proficiency with cloud data services and platforms such as AWS, Azure, or Google Cloud (e.g., Redshift, Snowflake, Databricks, BigQuery).
- Programming Languages: Strong coding ability in SQL and at least one general-purpose language (e.g., Python, Scala, Java).
- Big Data Technologies: Experience with distributed data processing frameworks (e.g., Spark, Hadoop) and real-time streaming tools (e.g., Kafka).
- Data Governance & Quality: Knowledge of data governance practices, data lineage, data cataloging, and implementing data quality checks (a quality-gate sketch follows this list).
- CI/CD & Automation: Experience in automating data workflows, version control (e.g., Git), and deploying CI/CD pipelines for data applications.
- Analytics & AI/ML Integration: Ability to support advanced analytics and integrate machine learning pipelines with core data platforms.
- Leadership & Collaboration: Proven track record in leading teams, mentoring engineers, and collaborating with business, analytics, and IT stakeholders.
- Problem Solving & Communication: Strong analytical, troubleshooting, and communication skills to translate business needs into technical solutions.
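As a concrete illustration of the Data Governance & Quality skill above, here is a minimal sketch of an automated quality gate. It assumes pandas, a hypothetical `check_quality` helper, and an illustrative 1% null-rate threshold; teams often implement the same idea with tools such as Great Expectations or dbt tests.

```python
# A minimal sketch of a data quality gate, assuming pandas; the function
# name, key column, and threshold are hypothetical examples.
import pandas as pd

def check_quality(df: pd.DataFrame, key: str, max_null_rate: float = 0.01) -> None:
    # Reject empty loads outright.
    if df.empty:
        raise ValueError("Load produced zero rows")
    # Enforce uniqueness on the declared business key.
    if df[key].duplicated().any():
        raise ValueError(f"Duplicate values found in key column '{key}'")
    # Cap the tolerated fraction of missing keys.
    null_rate = df[key].isna().mean()
    if null_rate > max_null_rate:
        raise ValueError(f"Null rate {null_rate:.2%} exceeds {max_null_rate:.2%}")

# Example usage on a toy frame:
check_quality(pd.DataFrame({"id": [1, 2, 3]}), key="id")
```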