
64 Unity Catalog Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Scientist focused on Predictive Analytics with expertise in Databricks, you will design and implement predictive models for applications such as forecasting, churn analysis, and fraud detection, deploying them with Python, SQL, Spark MLlib, and Databricks ML. You will build end-to-end machine learning pipelines on the Databricks Lakehouse platform, covering data ingestion, feature engineering, model training, and deployment. You will optimize model performance through hyperparameter tuning, AutoML, and MLflow experiment tracking, and collaborate with engineering teams to operationalize models in both batch and real-time scenarios using Databricks Jobs or REST APIs. You will also implement Delta Lake to support scalable, ACID-compliant data workflows, enable CI/CD for machine learning pipelines using Databricks Repos and GitHub Actions, and troubleshoot Spark jobs in the Databricks environment.

To excel in this role, you should have 3 to 5 years of experience in predictive analytics, with a strong background in regression, classification, and time-series modeling. Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark is essential, and familiarity with MLflow, Feature Store, and Unity Catalog for governance is advantageous. Industry experience in Life Insurance or Property & Casualty (P&C) is preferred, and a Databricks Certified ML Practitioner certification is a plus. Your technical skill set should include Python, PySpark, MLflow, and Databricks AutoML, with expertise in predictive modeling techniques such as classification, clustering, regression, time-series analysis, and NLP. Familiarity with cloud platforms (Azure or AWS), Delta Lake, and Unity Catalog is also beneficial.
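For illustration, the hyperparameter-tuning responsibility mentioned above can be sketched as a simple grid search that records every run, MLflow-style. This is a minimal, self-contained sketch with an invented scoring function; real work would train Spark MLlib models and log runs via MLflow.

```python
from itertools import product

def train_and_score(learning_rate, reg_strength):
    # Stand-in for a real model fit; this toy score surface peaks at
    # lr=0.1, reg=0.01 purely for illustration.
    return 1.0 - abs(learning_rate - 0.1) - abs(reg_strength - 0.01)

def grid_search(param_grid):
    """Evaluate every combination and track each run, MLflow-style."""
    runs = []
    for lr, reg in product(param_grid["learning_rate"],
                           param_grid["reg_strength"]):
        score = train_and_score(lr, reg)
        # With MLflow this would be mlflow.log_param / mlflow.log_metric.
        runs.append({"params": {"learning_rate": lr, "reg_strength": reg},
                     "score": score})
    best = max(runs, key=lambda r: r["score"])
    return best, runs

best, runs = grid_search({"learning_rate": [0.01, 0.1, 0.5],
                          "reg_strength": [0.01, 0.1]})
print(best["params"])  # -> {'learning_rate': 0.1, 'reg_strength': 0.01}
```

Databricks AutoML and MLflow automate exactly this loop (search, tracking, best-run selection) at cluster scale.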

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

karnataka

On-site

As a Senior Data Modeller, you will lead the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial for the role. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts.

Your key responsibilities will include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements.

To be successful in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential, along with a background in data normalization, denormalization, dimensional modeling, and schema design, and hands-on experience with data modeling tools like ERwin. Experience in Agile or Scrum environments, proficiency in integration, databases, data warehouses, and data processing, and a track record of successfully selling data and analytics software to enterprise customers are also key requirements. Your technical expertise should cover Big Data, streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. The ability to design and scale data pipelines and architectures in complex environments, along with excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets in this role.
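The denormalization and dimensional-modeling skills listed above can be illustrated with a tiny star-schema join: resolving a fact table's surrogate keys against its dimension tables to produce a flat reporting row. All tables and columns here are invented for the sketch.

```python
# Dimension tables keyed by surrogate key (invented sample data).
dim_customer = {10: {"customer_name": "Acme", "segment": "Enterprise"}}
dim_product = {7: {"product_name": "Widget", "category": "Hardware"}}

# Fact table referencing the dimensions by key.
fact_sales = [
    {"customer_id": 10, "product_id": 7, "qty": 3, "revenue": 300.0},
]

def denormalize(facts, customers, products):
    """Resolve surrogate keys into attributes (a star-schema join)."""
    flat = []
    for f in facts:
        row = dict(f)
        row.update(customers[f["customer_id"]])
        row.update(products[f["product_id"]])
        flat.append(row)
    return flat

flat = denormalize(fact_sales, dim_customer, dim_product)
print(flat[0]["customer_name"], flat[0]["category"])  # -> Acme Hardware
```

In a warehouse this is the normalization/denormalization trade-off in miniature: the normalized form avoids duplicated attributes, while the denormalized row is what BI tools consume.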

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Remote

We are seeking a skilled Azure Data Engineer with strong Power BI capabilities to design, build, and maintain enterprise data lakes on Azure, ingest data from diverse sources, and develop insightful reports and dashboards. This role requires hands-on experience in Azure data services, ETL processes, and BI visualization to support data-driven decision-making.

Key Responsibilities
- Design and implement end-to-end data pipelines using Azure Data Factory (ADF) for batch ingestion from various enterprise sources.
- Build and maintain a multi-zone Medallion Architecture data lake in Azure Data Lake Storage Gen2 (ADLS Gen2), including raw staging with metadata tracking, silver layer transformations (cleansing, enrichment, schema standardization), and gold layer curation (joins, aggregations).
- Perform data processing and transformations using Azure Databricks (PySpark/SQL) and ADF, ensuring data lineage, traceability, and compliance.
- Integrate data governance and security using Databricks Unity Catalog, Azure Active Directory (Azure AD), Role-Based Access Control (RBAC), and Access Control Lists (ACLs) for fine-grained access.
- Develop and optimize analytical reports and dashboards in Power BI, including KPI identification, custom visuals, responsive designs, and export functionality to Excel/Word.
- Conduct data modeling, mapping, and extraction during discovery phases, aligning with functional requirements for enterprise analytics.
- Collaborate with cross-functional teams to define schemas, handle API-based ingestion (REST/OData), and implement audit trails, logging, and compliance with data protection policies.
- Participate in testing (unit, integration, performance), UAT support, and production deployment, ensuring high availability and scalability.
- Create training content and provide knowledge transfer on data lake implementation and Power BI usage.
- Monitor and troubleshoot pipelines, optimizing for batch processing efficiency and data quality.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of experience in data engineering, with at least 3 years focused on Azure cloud services.
- Proven expertise in Azure Data Factory (ADF) for ETL/orchestration, Azure Data Lake Storage Gen2 (ADLS Gen2) for data lake management, and Azure Databricks for Spark-based transformations.
- Strong proficiency in Power BI for report and dashboard development, including DAX, custom visuals, data modeling, and integration with Azure data sources (e.g., DirectQuery or Import modes).
- Hands-on experience with Medallion Architecture (raw/silver/gold layers), data wrangling, and multi-source joins.
- Familiarity with API ingestion (REST, OData) from enterprise systems.
- Solid understanding of data governance tools like Databricks Unity Catalog, Azure AD for authentication, and RBAC/ACLs for security.
- Proficiency in SQL, PySpark, and data modeling techniques for dimensional and analytical schemas.
- Experience in agile methodologies, with the ability to deliver phased outcomes.

Preferred Skills
- Certifications such as Microsoft Certified: Azure Data Engineer Associate (DP-203) or Power BI Data Analyst Associate (PL-300).
- Knowledge of Azure Synapse Analytics, Azure Monitor for logging, and integration with hybrid/on-premises sources.
- Experience in domains like energy, mobility, or enterprise analytics, with exposure to moderate data volumes.
- Strong problem-solving skills, with the ability to handle rate limits, pagination, and dynamic data in APIs.
- Familiarity with tools like Azure DevOps for CI/CD and version control of pipelines/notebooks.

What We Offer
- Opportunity to work on cutting-edge data transformation projects.
- Competitive salary and benefits package.
- Collaborative environment with access to advanced Azure tools and training.
- Flexible work arrangements and professional growth opportunities.
If you are a proactive engineer passionate about building scalable data solutions and delivering actionable insights, apply now.
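The raw/silver/gold Medallion flow described above can be sketched with plain Python structures standing in for Delta tables. This is an illustrative sketch only (the rows and cleansing rules are invented); a real pipeline would run as PySpark on Databricks with ADF orchestration.

```python
# Bronze: data as ingested, including duplicates and bad rows.
raw = [
    {"order_id": "1", "amount": "100.5", "region": "south"},
    {"order_id": "1", "amount": "100.5", "region": "south"},  # duplicate
    {"order_id": "2", "amount": "n/a", "region": "NORTH"},    # bad amount
    {"order_id": "3", "amount": "40.0", "region": "North"},
]

def to_silver(rows):
    """Cleanse and standardize: drop duplicates/bad rows, normalize schema."""
    seen, silver = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine, not drop
        seen.add(r["order_id"])
        silver.append({"order_id": r["order_id"], "amount": amount,
                       "region": r["region"].title()})
    return silver

def to_gold(rows):
    """Curate: aggregate revenue per region for reporting."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(raw))
print(gold)  # -> {'South': 100.5, 'North': 40.0}
```

Each layer only reads from the one before it, which is what gives the architecture its lineage and traceability properties.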

Posted 2 weeks ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Hyderabad

Remote

Databricks Administrator (Azure/AWS) | Remote | 6+ Years

Job Description: We are looking for an experienced Databricks Administrator to manage and optimize our Databricks environment on AWS. You will be responsible for setting up and maintaining workspaces, clusters, access control, and integrations, while ensuring security, performance, and governance.

Key Responsibilities:
- Databricks Administration: Manage Databricks workspaces, clusters, and jobs across AWS.
- User & Access Management: Control user roles, permissions, and workspace-level security.
- Unity Catalog & Data Governance: Set up and manage Unity Catalog; implement data governance policies.
- Security & Network Configuration: Configure encryption, authentication, VPCs, private links, and networking on AWS.
- Integration & Automation: Integrate with cloud services and BI tools, and automate processes using Python, Terraform, and Git.
- Monitoring & CI/CD: Implement monitoring (CloudWatch, Prometheus, etc.) and manage CI/CD pipelines using GitLab, Jenkins, or similar.
- Collaboration: Work closely with data engineers, analysts, and DevOps teams to support data workflows.

Must-Have Skills:
- Strong experience with Databricks on AWS
- Unity Catalog setup and governance best practices
- AWS network/security configuration (VPC, IAM, KMS)
- Experience with CI/CD tools (Git, Jenkins, etc.)
- Terraform and Infrastructure as Code (IaC)
- Scripting knowledge in Python or Shell

Email: Hrushikesh.akkala@numerictech.com
Phone/WhatsApp: 9700111702
For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
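As a flavor of the jobs-administration work described above, here is a sketch that assembles a Databricks Jobs API 2.1-style job definition as a plain dict. Field names follow the public Jobs API, but treat the whole payload as illustrative and check it against the API reference; the helper function and sample values are invented.

```python
def make_job_definition(name, notebook_path, spark_version, node_type,
                        workers, schedule_cron=None):
    """Build a Jobs API 2.1-style create-job payload (illustrative)."""
    job = {
        "name": name,
        "tasks": [{
            "task_key": "main",
            "notebook_task": {"notebook_path": notebook_path},
            "new_cluster": {
                "spark_version": spark_version,
                "node_type_id": node_type,
                "num_workers": workers,
            },
        }],
    }
    if schedule_cron:
        job["schedule"] = {"quartz_cron_expression": schedule_cron,
                           "timezone_id": "UTC"}
    return job

job = make_job_definition("nightly-etl", "/Repos/etl/main",
                          "13.3.x-scala2.12", "i3.xlarge", 2,
                          schedule_cron="0 0 2 * * ?")
# The dict would then be POSTed to /api/2.1/jobs/create with a bearer token;
# Terraform's databricks_job resource encodes the same structure as IaC.
```

Expressing job definitions as data like this is what makes them automatable from Python, Terraform, and Git, as the posting asks.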

Posted 2 weeks ago

Apply

5.0 - 9.0 years

10 - 20 Lacs

Hyderabad, Ahmedabad, Bengaluru

Work from Office

Sr. Data Analytics Engineer: Power mission-critical decisions with governed insights

Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can't afford to fail. Our 120-engineer team specializes in highly regulated domains—HIPAA, FDA, SOC 2—and delivers production-grade systems that turn data into strategic advantage.

Why You'll Love It
- End-to-end impact: build full-stack analytics from lakehouse pipelines to real-time dashboards.
- Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
- Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
- Mentorship culture: lead code reviews, share best practices, grow as a domain expert.
- Mission-critical context: help enterprises migrate legacy analytics into cloud-native, governed platforms.
- Compliance-first mindset: work in HIPAA-aligned environments where precision matters.

Key Responsibilities
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document everything, from pipeline logic to RLS rules, in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.

Must-Have Skills
- 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
- Advanced SQL (incl. windowing), expert PySpark, Delta Lake, Unity Catalog.
- Power BI mastery: DAX optimization, security rules, paginated reports.
- SSRS-to-Power BI migration experience (RDL logic replication).
- Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
- Communication skills to bridge technical and business audiences.

Nice-to-Have Skills
- Databricks Data Engineer Associate certification.
- Streaming pipeline experience (Kafka, Structured Streaming).
- dbt, Great Expectations, or similar data quality frameworks.
- BI diversity: experience with Tableau, Looker, or similar platforms.
- Cost governance familiarity (Power BI Premium capacity, Databricks chargeback).

Benefits & Call to Action
Ajmera offers competitive compensation, flexible schedules, and a deeply technical culture where engineers lead the narrative. If you're driven by reliable, audit-ready data products and want to own systems from raw ingestion to KPI dashboards, apply now and engineer insights that matter.
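The "SLA-backed retries and alerting" responsibility listed above can be sketched as a small retry wrapper. Everything here (the wrapper, the flaky task, the alert hook) is invented for illustration; in practice the orchestrator, Databricks Workflows or Airflow, provides retries and alerting as configuration.

```python
import time

def run_with_retries(task, max_attempts=3, backoff_s=0.0, alert=print):
    """Run `task`, retrying on failure; alert once all attempts are spent."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_attempts:
                alert(f"SLA breach: failed after {max_attempts} attempts: {exc}")
                raise
            time.sleep(backoff_s * attempt)  # linear backoff between tries

calls = {"n": 0}
def flaky():
    """Fails twice, then succeeds, to exercise the retry path."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky)
print(result)  # -> ok (succeeds on the third attempt)
```

The key design point is that the alert fires only when the SLA is actually breached (all attempts exhausted), not on every transient failure.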

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant/Data Engineer. In this role, you will collaborate closely with cross-functional teams, including developers, business analysts, and stakeholders, to deliver high-quality software solutions that enhance operational efficiency and support strategic business objectives.

Responsibilities
• Provide technical leadership and architectural guidance on data engineering projects.
• Design and implement data pipelines, data lakes, and data warehouse solutions on Databricks.
• Optimize Spark-based data workflows for performance, scalability, and cost-efficiency.
• Ensure robust data governance and security, including the implementation of Unity Catalog.
• Collaborate with data scientists, business users, and engineering teams to align solutions with business goals.
• Stay updated with evolving Databricks features, best practices, and industry trends.
• Proven expertise in data engineering, including Spark, Delta Lake, and Unity Catalog.
• Strong background in data engineering, with hands-on experience in building production-grade data pipelines and lakes.
• Proficient in Python (preferred) or Scala for data transformation and automation.
• Strong command of SQL and Spark SQL for data querying and processing.
• Experience with cloud platforms such as Azure, AWS, or GCP.
• Familiarity with DevOps/DataOps practices in data pipeline development.
• Knowledge of Profisee or other Master Data Management (MDM) tools is a plus.
• Certifications in Data Engineering or Spark.
• Experience with Delta Live Tables, structured streaming, or metadata-driven frameworks.
• Development of new reports and updating of existing reports as requested by customers.
• Automation of the respective reports through the creation of config files.
• Validation of the premium in the reports against the IMS application, via config files, to ensure there are no discrepancies.
• Validation of all reports that run on a monthly basis, with analysis of any report showing a discrepancy.

Qualifications we seek in you!
Minimum Qualifications
• BE/B Tech/MCA

Preferred Qualifications/Skills
• Excellent analytical, problem-solving, communication, and interpersonal skills
• Able to work effectively in a fast-paced, sometimes stressful environment, and deliver production-quality software within tight schedules
• Must be results-oriented, self-motivated, and able to thrive in a fast-paced environment
• Strong Specialty Insurance domain and IT knowledge

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
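The config-file-driven report automation and premium validation described above can be sketched as follows. The config structure, sample policy rows, and the tolerance check are all invented for illustration; a real implementation would read configs from files and reconcile against the IMS system of record.

```python
# Hypothetical sample data standing in for report source rows.
POLICIES = [
    {"policy": "P1", "line": "property", "premium": 1200.0},
    {"policy": "P2", "line": "casualty", "premium": 800.0},
    {"policy": "P3", "line": "property", "premium": 500.0},
]

def run_report(config, rows):
    """Build a report generically from a small config dict."""
    selected = [r for r in rows
                if r[config["filter_field"]] == config["filter_value"]]
    total = sum(r["premium"] for r in selected)
    return {"name": config["name"], "rows": selected, "total_premium": total}

def validate_against_source(report, source_total, tolerance=0.01):
    """Flag discrepancies between the report and the system of record."""
    return abs(report["total_premium"] - source_total) <= tolerance

cfg = {"name": "monthly_property",
       "filter_field": "line", "filter_value": "property"}
report = run_report(cfg, POLICIES)
print(report["total_premium"])                   # -> 1700.0
print(validate_against_source(report, 1700.0))   # -> True
```

Adding a new monthly report then means adding a config entry, not writing new code, which is the point of the config-driven approach.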

Posted 3 weeks ago

Apply

10.0 - 13.0 years

5 - 10 Lacs

Hyderabad, Telangana, India

On-site

Looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.
Mandatory Skills: AWS, Databricks
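Delta Lake is central to the stack above; its MERGE (upsert) semantics can be sketched with plain Python dicts standing in for tables. This is an illustrative sketch only; on Databricks this would be `MERGE INTO target USING updates ON ...` in Spark SQL or `DeltaTable.merge` in PySpark.

```python
def merge_upsert(target, updates, key="id"):
    """Update matching rows by key, insert the rest (Delta MERGE semantics)."""
    merged = {row[key]: dict(row) for row in target}
    for row in updates:
        # Matched rows are updated in place; unmatched rows are inserted.
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
updates = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
merged_rows = merge_upsert(target, updates)
print(merged_rows)
# -> [{'id': 1, 'status': 'open'}, {'id': 2, 'status': 'closed'},
#     {'id': 3, 'status': 'open'}]
```

What Delta Lake adds over this sketch is doing the same operation transactionally (ACID) over large distributed files, which is why it underpins the pipelines this role builds.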

Posted 3 weeks ago

Apply

10.0 - 12.0 years

10 - 12 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities:
- Workspace Management: Create and manage Databricks workspaces, ensuring proper configuration and access control.
- User & Identity Management: Administer user roles, permissions, and authentication mechanisms.
- Cluster Administration: Configure, monitor, and optimize Databricks clusters for efficient resource utilization.
- Security & Compliance: Implement security best practices, including data encryption, access policies, and compliance adherence.
- Performance Optimization: Troubleshoot and resolve performance issues related to Databricks workloads.
- Integration & Automation: Work with cloud platforms (AWS, Azure, GCP) to integrate Databricks with other services.
- Monitoring & Logging: Set up monitoring tools and analyze logs to ensure system health.
- Data Governance: Manage Unity Catalog and other governance tools for structured data access.
- Collaboration: Work closely with data engineers, analysts, and scientists to support their workflows.

Qualifications:
- Proficiency in Python or Scala for scripting and automation.
- Knowledge of cloud platforms (AWS).
- Familiarity with Databricks Delta Lake and MLflow.
- Understanding of ETL processes and data warehousing concepts.
- Strong problem-solving and analytical skills.

Posted 3 weeks ago

Apply

12.0 - 14.0 years

6 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.

Posted 4 weeks ago

Apply

12.0 - 14.0 years

6 - 11 Lacs

Hyderabad, Telangana, India

On-site

Looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Ready to build the future with AI? At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Principal Consultant - Databricks Developer (AWS)! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
• Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
• Work with architects and lead engineers on solutions to meet functional and non-functional requirements.
• Demonstrate knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
• Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
• Excellent coding skills in either Python or Scala, preferably Python.
• Experience in the Data Engineering domain.
• Implemented at least 2 end-to-end projects in Databricks.
• Experience with Databricks components: Delta Lake, dbConnect, DB API 2.0, and Databricks Workflows orchestration.
• Well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Good understanding of how to create complex data pipelines.
• Good knowledge of data structures and algorithms.
• Strong in SQL and Spark SQL.
• Strong performance optimization skills to improve efficiency and reduce cost.
• Worked on both batch and streaming data pipelines.
• Extensive knowledge of the Spark and Hive data processing frameworks.
• Worked on a major cloud (Azure, AWS, GCP) and its most common services: ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
• Strong in writing unit and integration tests.
• Strong communication skills; has worked on teams of size 5 plus.
• Great attitude towards learning new skills and upskilling existing ones.

Preferred qualifications
• Unity Catalog and basic governance knowledge.
• Databricks SQL Endpoint understanding.
• CI/CD experience to build pipelines for Databricks jobs.
• Experience on a migration project to build a unified data platform.
• Knowledge of dbt.
• Knowledge of Docker and Kubernetes.

Why join Genpact?
• Lead AI-first transformation: build and scale AI solutions that redefine industries.
• Make an impact: drive change for global enterprises and solve business challenges that matter.
• Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
• Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
• Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
• Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
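The unit-testing requirement in the posting above can be sketched with a small, framework-free example: a pipeline transformation plus a test that pins down its behavior on good and bad rows. The transformation and rows are invented for illustration; on a real project this would be a pytest suite over PySpark functions.

```python
def standardize_amounts(rows):
    """Parse amounts to float and drop rows that cannot be parsed."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except (ValueError, TypeError):
            continue  # unparseable amounts are dropped in this sketch
    return out

def test_standardize_amounts():
    rows = [{"id": 1, "amount": "10.5"},
            {"id": 2, "amount": "bad"},
            {"id": 3, "amount": None}]
    # Only the parseable row survives, with its amount coerced to float.
    assert standardize_amounts(rows) == [{"id": 1, "amount": 10.5}]

test_standardize_amounts()
print("ok")
```

Keeping transformations as pure functions like this is what makes both unit tests and later integration tests against real data cheap to write.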

Posted 4 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Ready to build the future with AI? At Genpact, we don't just keep up with technology we set the pace. AI and digital innovation are redefining industries, and were leading the charge. Genpacts AI Gigafactory , our industry-first accelerator, is an example of how were scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI , our breakthrough solutions tackle companies most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of whats possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn , X , YouTube , and Facebook . Inviting applications for the role of Principal Consultant- Databricks Developer AWS! In this role, the Databricks Developer is responsible for solving the real world cutting edge problem to meet both functional and non-functional requirements. Responsibilities • Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. • Work with architect and lead engineers for solutions to meet functional and non-functional requirements. • Demonstrated knowledge of relevant industry trends and standards. • Demonstrate strong analytical and technical problem-solving skills. • Must have experience in Data Engineering domain . Qualifications we seek in you! 
Minimum qualifications • Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. • Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. • Work with architect and lead engineers for solutions to meet functional and non-functional requirements. • Demonstrated knowledge of relevant industry trends and standards. • Demonstrate strong analytical and technical problem-solving skills. • Must have excellent coding skills either Python or Scala, preferably Python. • Must have experience in Data Engineering domain . • Must have implemented at least 2 project end-to-end in Databricks. • Must have at least experience on databricks which consists of various components as below o Delta lake o dbConnect o db API 2.0 o Databricks workflows orchestration • Must be well versed with Databricks Lakehouse concept and its implementation in enterprise environments. • Must have good understanding to create complex data pipeline • Must have good knowledge of Data structure & algorithms. • Must be strong in SQL and sprak-sql. • Must have strong performance optimization skills to improve efficiency and reduce cost. • Must have worked on both Batch and streaming data pipeline. • Must have extensive knowledge of Spark and Hive data processing framework. • Must have worked on any cloud (Azure, AWS, GCP) and most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, Cloud databases. • Must be strong in writing unit test case and integration test • Must have strong communication skills and have worked on the team of size 5 plus • Must have great attitude towards learning new skills and upskilling the existing skills. Preferred Qualifications Good to have Unity catalog and basic governance knowledge. • Good to have Databricks SQL Endpoint understanding. • Good To have CI/CD experience to build the pipeline for Databricks jobs. 
• Good to have worked on a migration project to build a unified data platform. • Good to have knowledge of DBT. • Good to have knowledge of Docker and Kubernetes. Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries. Make an impact – Drive change for global enterprises and solve business challenges that matter. Accelerate your career – Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills. Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace. Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build. Thrive in a values-driven culture – Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 4 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Senior Principal Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements. Responsibilities: Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrated knowledge of relevant industry trends and standards. Strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain. Qualifications we seek in you! Minimum qualifications: Bachelor's Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. 
Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Works with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrated knowledge of relevant industry trends and standards. Strong analytical and technical problem-solving skills. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have implemented at least 4 projects end-to-end in Databricks. Must have experience on Databricks, covering the components below. Must-have skills: Azure Data Factory, Azure Databricks, Python, and PySpark. Expert with database technologies and ETL tools. Hands-on experience designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Delta Lake, Databricks workflows orchestration, Python, PySpark, etc. Good knowledge of the Azure, AWS, and GCP cloud platform service stacks. Good knowledge of Unity Catalog implementation. Good knowledge of integration with other tools, such as DBT and other transformation tools. Good knowledge of Unity Catalog integration with Snowflake. Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of how to create complex data pipelines. Must have good knowledge of data structures & algorithms. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. 
Must have worked on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit and integration tests. Must have strong communication skills and have worked in teams of five or more. Must have a great attitude towards learning new skills and upskilling existing ones. Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL Endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of DBT. Good to have knowledge of Docker and Kubernetes. Why join Genpact? Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation. Make an impact - Drive change for global enterprises and solve business challenges that matter. Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. 
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - Databricks Platform Admin! The Databricks Platform Admin focuses on configuration issues, upgrades, patches, security, Databricks platform integration with AWS, IDM, and other technologies, etc. Responsibilities: Experience as the Databricks account owner, managing workspaces, AWS accounts, audit logs, and high-level usage monitoring. Good understanding of Unity Catalog. DBR workspace and configuration setups. Support administrative duties such as user access provisioning, Unity Catalog, and object creation. Compute administration. Experience optimizing usage for performance and cost. Experience building infrastructure as code (Terraform / CloudFormation templates). Experience as Databricks workspace admin, managing workspace users and groups, including single sign-on, provisioning, access control, and workspace storage. 
Experience managing S3 access across a large user base. Experience managing cluster and jobs configuration options. Experience with Databricks security and privacy setup. Experience troubleshooting end-user and platform-level issues. Experience delivering client presentations and demos. Ability to multitask and reprioritize tasking on the fly according to the needs of a growing platform and its stakeholders. Qualifications Minimum qualifications: Relevant overall experience required. Experience as the Databricks account owner, managing workspaces, AWS accounts, audit logs, and high-level usage monitoring. Good understanding of Unity Catalog. Why join Genpact? Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation. Make an impact - Drive change for global enterprises and solve business challenges that matter. Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. 
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

4.0 - 7.0 years

15 - 20 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Must have excellent coding skills in either Python or Scala, preferably Python. Must have at least 5+ years of experience in the Data Engineering domain, with 7+ years in total. Must have implemented at least 2 projects end-to-end in Databricks. Must have at least 2+ years of experience on Databricks, covering the components below: Delta Lake, dbConnect, db API 2.0, Databricks workflows orchestration.
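The "db API 2.0" component listed above refers to the Databricks REST API. As a rough illustration (not an official client library), a one-time notebook run can be submitted to the Jobs API 2.0 `runs/submit` endpoint with a payload shaped like the sketch below; the workspace URL, token, notebook path, and cluster settings are all placeholders:

```python
import json
from urllib import request

# Hypothetical workspace URL and token -- replace with real values.
HOST = "https://example.cloud.databricks.com"
TOKEN = "dapiXXXXXXXX"

def build_run_submit_payload(notebook_path, spark_version="13.3.x-scala2.12",
                             node_type="i3.xlarge", workers=2):
    """Build a Jobs API 2.0 runs/submit payload for a one-time notebook run."""
    return {
        "run_name": "adhoc-notebook-run",
        "new_cluster": {
            "spark_version": spark_version,
            "node_type_id": node_type,
            "num_workers": workers,
        },
        "notebook_task": {"notebook_path": notebook_path},
    }

def submit_run(payload):
    """Prepare a POST to /api/2.0/jobs/runs/submit (not sent in this sketch)."""
    return request.Request(
        f"{HOST}/api/2.0/jobs/runs/submit",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
    )

payload = build_run_submit_payload("/Repos/team/etl/ingest")
print(json.dumps(payload, indent=2))
```

A caller would pass the prepared request to `urllib.request.urlopen` (or use the `requests` library); the response contains a `run_id` that can be polled for completion.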

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Noida, Pune, Bengaluru

Hybrid

Work Mode: Hybrid (3 days WFO) Locations: Bangalore, Noida, Pune, Mumbai, Hyderabad (Candidates must be in Accion cities to collect assets and attend in-person meetings as required). Key Requirements: Technical Skills: Databricks Expertise: 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS/Azure cloud infrastructure. Proficiency in Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), MLflow, and Databricks SQL. Experience with Databricks CI/CD tools (e.g., BitBucket, GitHub Actions, Databricks CLI). Data Warehousing & Engineering: Strong understanding of data warehousing concepts (Dimensional, SCD2, Data Vault, OBT, etc.). Proven ability to implement highly performant data ingestion pipelines from multiple sources. Experience integrating end-to-end Databricks pipelines to ensure data quality and consistency. Programming: Strong proficiency in Python and SQL. Basic working knowledge of API or stream-based data extraction processes (e.g., Salesforce API, Bulk API). Cloud Technologies: Preferred experience with AWS services (e.g., S3, Athena, Glue, Lambda). Power BI: 3+ years of experience in Power BI and data warehousing for root cause analysis and business improvement opportunities. Additional Skills: Working knowledge of Data Management principles (quality, governance, security, privacy, lifecycle management, cataloging). Nice to have: Databricks certifications and AWS Solution Architect certification. Nice to have: Experience with building data pipelines from business applications like Salesforce, Marketo, NetSuite, Workday, etc. Responsibilities: Develop, implement, and maintain highly efficient ETL pipelines on Databricks. Perform root cause analysis and identify opportunities for data-driven business improvements. Ensure quality, consistency, and governance of all data pipelines and repositories. Work in an Agile/DevOps environment to deliver iterative solutions. 
Collaborate with cross-functional teams to meet business requirements. Stay updated on the latest Databricks and AWS features, tools, and best practices. Work Schedule: Regular: 11:00 AM to 8:00 PM. Flexibility is required for project-based overlap. Interested candidates should share their resumes with the following details: Current CTC Expected CTC Preferred Location: Bangalore, Noida, Pune, Mumbai, Hyderabad Notice Period Contact Information:
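Among the warehousing patterns named in the requirements above, SCD2 (Type-2 slowly changing dimensions) keeps full history by closing out a changed row and inserting a new current version. The plain-Python sketch below shows just the merge logic; on Databricks this would normally be a Delta Lake MERGE statement, and the column names here are invented for illustration:

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, today=None):
    """Apply a Type-2 slowly-changing-dimension merge.

    dim_rows: dicts with keys id, attr, valid_from, valid_to, current.
    incoming: dicts with keys id, attr (the latest source snapshot).
    Changed rows are closed out; a new current version is appended.
    """
    today = today or date.today().isoformat()
    result = [dict(r) for r in dim_rows]          # avoid mutating the input
    by_id = {r["id"]: r for r in result if r["current"]}
    for rec in incoming:
        cur = by_id.get(rec["id"])
        if cur is None:
            # Brand-new key: insert an open-ended current row.
            result.append({"id": rec["id"], "attr": rec["attr"],
                           "valid_from": today, "valid_to": None, "current": True})
        elif cur["attr"] != rec["attr"]:
            # Attribute changed: close the old version, open a new one.
            cur["valid_to"] = today
            cur["current"] = False
            result.append({"id": rec["id"], "attr": rec["attr"],
                           "valid_from": today, "valid_to": None, "current": True})
    return result
```

Unchanged keys pass through untouched, which is what makes the pattern idempotent for repeated snapshots.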

Posted 1 month ago

Apply

4.0 - 6.0 years

7 - 11 Lacs

Hyderabad, Chennai

Work from Office

Job Title: Data Scientist Location State: Tamil Nadu, Telangana Location City: Hyderabad, Chennai Experience Required: 4 to 6 Year(s) CTC Range: 7 to 11 LPA Shift: Day Shift Work Mode: Onsite Position Type: C2H Openings: 2 Company Name: VARITE INDIA PRIVATE LIMITED About The Client: The client is an Indian multinational technology company specializing in information technology services and consulting. Headquartered in Mumbai, it is part of the Tata Group and operates in 150 locations across 46 countries. About The Job: Requirements: 5+ years in predictive analytics, with expertise in regression, classification, and time-series modeling. Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark. Familiarity with MLflow, Feature Store, and Unity Catalog for governance. Industry experience in Life Insurance or P&C. Skills: Python, PySpark, MLflow, Databricks AutoML. Predictive modeling (classification, clustering, regression, time-series, and NLP). Cloud platform (Azure/AWS), Delta Lake, Unity Catalog. Certifications: Databricks Certified ML Practitioner (optional). Essential Job Functions: Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML. Build end-to-end ML pipelines (data ingestion, feature engineering, model training, deployment) on the Databricks Lakehouse. Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking. Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs. Implement Delta Lake for scalable, ACID-compliant data workflows. Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions. Troubleshoot issues in Spark jobs and the Databricks environment. 
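The churn-analysis modeling mentioned in the job functions above usually begins with feature engineering over customer event history. The sketch below computes simple recency/frequency features in plain Python; in practice this would be PySpark over Delta tables, and the 90-day churn threshold is an invented illustration, not the client's rule:

```python
from datetime import date

def churn_features(events, as_of):
    """Compute recency/frequency features per customer.

    events: iterable of (customer_id, event_date) pairs.
    as_of:  reference date for recency.
    """
    feats = {}
    for cust, day in events:
        f = feats.setdefault(cust, {"frequency": 0, "last_seen": day})
        f["frequency"] += 1
        if day > f["last_seen"]:
            f["last_seen"] = day
    for f in feats.values():
        f["recency_days"] = (as_of - f["last_seen"]).days
        # Invented rule of thumb: 90+ days of inactivity counts as churned.
        f["churned"] = f["recency_days"] >= 90
    return feats
```

Features like these would then feed a classifier (Spark MLlib or Databricks AutoML), with the label column replaced by observed churn rather than a heuristic.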
Qualifications: Skill Required: Data Science, Python for Data Science Experience Range in Required Skills: 4-6 Years How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post. About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 Companies in USA, UK, CANADA and INDIA. VARITE is currently a primary and direct vendor to the leading corporations in the verticals of Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services. Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status. Unlock Rewards: Refer Candidates and Earn. If you're not available or interested in this opportunity, please pass this along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus based on the following scale if the referred candidate completes a three-month assignment with VARITE. Exp Req - Referral Bonus 0 - 2 Yrs. - INR 5,000 2 - 6 Yrs. - INR 7,500 6 + Yrs. - INR 10,000

Posted 1 month ago

Apply

4.0 - 9.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Work from Office

Experience - 4 years - 12 years. Location - Pune / Mumbai / Chennai / Bangalore. Notice Period - Immediate to 45 days. Proven experience as a Databricks Developer or similar role, with a strong focus on PySpark programming. Expertise in the Databricks platform, including Databricks Delta, Spark, and Unity Catalog. Proficiency in Python for data manipulation, analysis, and automation. Strong understanding of data engineering concepts, data modeling, ETL processes, and data integration techniques. Excellent communication skills and ability to collaborate effectively with cross-functional teams. Stay updated with the latest advancements in Databricks, Python, and SQL technologies to continuously improve data solutions. Familiarity with Azure cloud platforms and their data services. Ability to work in an agile development environment and adapt to changing requirements.

Posted 1 month ago

Apply

5.0 - 7.0 years

14 - 16 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Title: Data/ML Platform Engineer Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (Work from office) Notice Period: Immediate. iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products, and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats. 5 years of hands-on experience using Python, Spark, and SQL. Experienced in AWS cloud usage and management. Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow). Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch. Experience with orchestrators such as Airflow and Kubeflow. Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes). Fundamental understanding of Parquet, Delta Lake, and other data file formats. Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation. Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders. Location - Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (Work from office)
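Orchestrators such as Airflow and Kubeflow, listed in the requirements above, fundamentally execute tasks in dependency order. The toy scheduler below illustrates that idea with Python's standard-library `graphlib`; it is a conceptual sketch, not Airflow's actual API:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Run callables in an order that respects deps.

    tasks: name -> callable taking the dict of upstream results.
    deps:  name -> set of upstream task names (like DAG edges).
    """
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    for name in order:
        results[name] = tasks[name](results)
    return order, results

# Hypothetical three-step extract/transform/load DAG.
tasks = {
    "extract": lambda r: [1, 2, 3],
    "transform": lambda r: [x * 10 for x in r["extract"]],
    "load": lambda r: sum(r["transform"]),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
order, results = run_pipeline(tasks, deps)
```

Real orchestrators add retries, scheduling, and distributed workers on top of this core ordering idea.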

Posted 1 month ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Hyderabad

Work from Office

Role: We're looking for a skilled Databricks Solution Architect to lead the design and implementation of data migration strategies and cloud-based data and analytics transformation on the Databricks platform. This role involves collaborating with stakeholders, analyzing data, defining architecture, building data pipelines, ensuring security and performance, and implementing Databricks solutions for machine learning and business intelligence. Key Responsibilities: Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks. Design, implement, and optimize scalable, high-performance data architectures using Databricks. Build and manage data pipelines and workflows within Databricks. Ensure that best practices for security, scalability, and performance are followed. Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads. Oversee the technical aspects of the migration process, from planning through to execution. Create documentation of the architecture, migration processes, and solutions. Provide training and support to teams post-migration to ensure they can leverage Databricks. Preferred candidate profile: Experience: 7+ years of experience in data engineering, cloud architecture, or related fields. 3+ years of hands-on experience with Databricks, including the implementation of data engineering solutions, migration projects, and optimizing workloads. Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks. Experience in end-to-end data migration projects involving large-scale data infrastructure. Familiarity with ETL tools, data lakes, and data warehousing solutions. Skills: Expertise in Databricks architecture and best practices for data processing. Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other recent Databricks components. 
Proficiency in Databricks Asset Bundles Expertise in design and development of migration frameworks using Databricks Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks. Familiarity with data governance, security, and compliance in cloud environments. Solid understanding of cloud-native data solutions and services.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Hi, We at HCL are looking for Databricks engineers with experience in AWS. Location: Noida, Bangalore, Pune, Hyderabad, and Chennai. Please share the details below. Total years of exp: Exp in Databricks: Exp in AWS: Exp in Unity Catalog: Exp in Collibra: Current CTC: Expected CTC: Notice Period: Current location: Preferred location: Reach us on srikanth.domala@hcltech.com Primary & Secondary Skills: Databricks, PySpark, Python, and Collibra (primary); Unity Catalog, ETL, AWS (secondary). JD (Detailed): Design of data solutions on Databricks, including Delta Lake, data warehouse, data marts, and other data solutions to support the analytics needs of the organization. Proficiency in using Collibra Data Governance Center, Data Catalog, and Collibra Connect for data management and governance. Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services, especially Python & PySpark. Design, develop, and manage the pipelining (collection, storage, access), data engineering (data quality, ETL, data modelling), and understanding (documentation, exploration) of the data. Interact with stakeholders regarding data landscape understanding, conducting discovery exercises, developing proofs of concept, and demonstrating them to stakeholders. Implement data quality frameworks and standards using Collibra to ensure the integrity and accuracy of data. Excellent collaboration skills to work effectively with cross-functional teams. Strong verbal and written communication skills.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Job Title: Consultant / Senior Consultant - Azure Data Engineering Location: India - Gurgaon preferred Industry: Insurance Analytics & AI Vertical Role Overview: We are seeking a hands-on Consultant / Senior Consultant with strong expertise in Azure-based data engineering to support end-to-end development and delivery of data pipelines for our insurance clients. The ideal candidate will have a deep understanding of Azure Data Factory, ADLS, Databricks (preferably with DLT and Unity Catalog), SQL, and Python and be comfortable working in a dynamic, client-facing environment. This is a key offshore role requiring both technical execution and solution-oriented thinking to support modern data platform initiatives. Collaborate with data scientists, analysts, and stakeholders to gather requirements and define data models that effectively support business requirements Demonstrate decision-making, analytical and problem-solving abilities Strong verbal and written communication skills to manage client discussions Familiar with working on Agile methodologies - daily scrum, sprint planning, backlog refinement Key Responsibilities & Skillsets: o Design and develop scalable and efficient data pipelines using Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS). o Build and maintain Databricks notebooks for data ingestion, transformation, and quality checks, using Python and SQL. o Work with Delta Live Tables (DLT) and Unity Catalog (preferred) to improve pipeline automation, governance, and performance. o Collaborate with data architects, analysts, and onshore teams to translate business requirements into technical specifications. o Troubleshoot data issues, ensure data accuracy, and apply best practices in data engineering and DevOps. o Support the migration of legacy SQL pipelines to modern Python-based frameworks. o Ensure adherence to data security, compliance, and performance standards, especially within insurance domain constraints. 
o Provide documentation, status updates, and technical insights to stakeholders as required. o Excellent communication skills and stakeholder management. Required Skills & Experience: 3-7 years of strong hands-on experience in data engineering with a focus on Azure cloud technologies. Proficient in Azure Data Factory, Databricks, ADLS Gen2, and working knowledge of Unity Catalog. Strong programming skills in both SQL and Python, especially within Databricks notebooks. PySpark expertise is good to have. Experience in Delta Lake / Delta Live Tables (DLT) is a plus. Good understanding of ETL/ELT concepts, data modeling, and performance tuning. Exposure to Insurance or Financial Services data projects is highly preferred. Strong communication and collaboration skills in an offshore delivery model. Additional Skills & Experience: Experience working in Agile/Scrum teams. Familiarity with Azure DevOps, Git, and CI/CD practices. Certifications in Azure Data Engineering (e.g., DP-203) or Databricks.
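Ingestion pipelines like the ADF/Databricks ones described above are typically incremental: each run picks up only rows modified since the last stored watermark, then advances the watermark. A minimal plain-Python sketch of that pattern (the `modified_at` column name is illustrative; in ADF the watermark usually lives in a control table):

```python
def incremental_load(source_rows, watermark):
    """Return rows with modified_at strictly after the watermark,
    plus the new watermark value to persist for the next run."""
    new_rows = [r for r in source_rows if r["modified_at"] > watermark]
    new_watermark = max((r["modified_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark
```

Using a strict `>` comparison and persisting the max seen timestamp makes reruns idempotent: a second run with the same watermark re-selects exactly the same slice.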

Posted 1 month ago

Apply

10.0 - 18.0 years

15 - 30 Lacs

Pune, Bengaluru

Work from Office

Role & responsibilities AWS with Databricks infra lead. Experienced in setting up Unity Catalog. Setting out how the group is to consume model serving processes. Developing MLflow routines. Experienced with ML models. Has used Gen AI features with guardrails, experimentation, and monitoring.

Posted 1 month ago

Apply

3.0 - 8.0 years

4 - 9 Lacs

Ahmedabad

Hybrid

Project Role : Data Engineer Project Role Description : Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems. Must have skills : Data Governance. Minimum 3 year(s) of experience is required. Educational Qualification : 15 years full time education. Job Summary: We are seeking a highly skilled and motivated Governance Tool Specialist with 4 years of experience to join our team. The candidate will be responsible for the implementation, configuration, and management of our governance tools. This role requires a deep understanding of data governance principles, excellent technical skills, and the ability to work collaboratively with various stakeholders. Optional - Experienced Data Quality Specialist with extensive expertise in using Alex Solutions tools to ensure data accuracy, consistency, and reliability. Proficiency in data profiling, cleansing, validation, and governance. Key Responsibilities: Data Governance: • Implement and configure Alex Solutions governance tools to meet client requirements. • Collaborate with clients to understand their data governance needs and provide tailored solutions. • Provide technical support and troubleshooting for governance tool issues. • Conduct training sessions and workshops to educate clients on the use of governance tools. • Develop and maintain documentation for governance tool configurations and processes. • Monitor and report on the performance and usage of governance tools. • Stay up-to-date with the latest developments in data governance and related technologies. • Work closely with the product development team to provide feedback and suggestions for tool enhancements. Data Quality: • Utilize Alex Solutions' data quality tools to develop and implement processes, standards, and guidelines that ensure data accuracy and reliability. 
• Conduct comprehensive data profiling using Alex Solutions, identifying and rectifying data anomalies and inconsistencies. • Monitor data quality metrics through Alex Solutions, providing regular reports on data quality issues and improvements to stakeholders. • Collaborate with clients to understand their data quality needs and provide tailored solutions using Alex Solutions. • Implement data cleansing, validation, and enrichment processes within the Alex Solutions platform to maintain high data quality standards. • Develop and maintain detailed documentation for data quality processes and best practices using Alex Solutions' tools. Preferred Skills: Must Have Skills: Alex Solutions. Good to Have: Unity Catalog, Microsoft Purview, Data Quality tools. Secondary Skills: Informatica, Collibra. Experience with data cataloging, data lineage, data quality, and metadata management. • Knowledge of regulatory requirements related to data governance (e.g., GDPR, CCPA). • Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud). • Certification in data governance or related fields. • Proven experience with data governance and data quality tools and technologies. • Strong understanding of data governance principles and best practices. • Proficiency in SQL, data modeling, and database management. • Excellent problem-solving and analytical skills. • Strong communication and interpersonal skills.

Posted 1 month ago


3.0 - 7.0 years

22 - 25 Lacs

Bengaluru

Hybrid

Role & responsibilities:
• 3-6 years of experience in data engineering pipeline ownership and quality assurance, with hands-on expertise in building, testing, and maintaining data pipelines.
• Proficiency with Azure Data Factory (ADF), Azure Databricks (ADB), and PySpark for data pipeline orchestration and processing large-scale datasets.
• Strong experience in writing SQL queries and performing data validation, data profiling, and schema checks.
• Experience with big data validation, including schema enforcement, data integrity checks, and automated anomaly detection.
• Ability to design, develop, and implement automated test cases to monitor and improve data pipeline efficiency.
• Deep understanding of Medallion Architecture (Raw, Bronze, Silver, Gold) for structured data flow management.
• Hands-on experience with Apache Airflow for scheduling, monitoring, and managing workflows.
• Strong knowledge of Python for developing data quality scripts, test automation, and ETL validations.
• Familiarity with CI/CD pipelines for deploying and automating data engineering workflows.
• Solid grounding in data governance and data security practices within the Azure ecosystem.

Additional Requirements:
• Ownership of data pipelines, ensuring end-to-end execution, monitoring, and proactive troubleshooting of failures.
• Strong stakeholder management skills, including follow-ups with business teams across multiple regions to gather requirements, address issues, and optimize processes.
• Time flexibility to align with global teams for efficient communication and collaboration.
• Excellent problem-solving skills with the ability to simulate and test edge cases in data processing environments.
• Strong communication skills to document and articulate pipeline issues, troubleshooting steps, and solutions effectively.
• Experience with Unity Catalog, or willingness to learn.

Preferred candidate profile: Immediate joiners.
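The schema-enforcement and anomaly-detection skills this role asks for are normally exercised in PySpark on Databricks, but the underlying ideas can be illustrated with a minimal stdlib-only Python sketch. Everything here is hypothetical for illustration: the schema contract, the field names, and the z-score threshold are assumptions, not part of any specific pipeline:

```python
import statistics

# Hypothetical contract for a Silver-layer table: column name -> required type.
EXPECTED_SCHEMA = {"order_id": int, "amount": float, "region": str}

def enforce_schema(row):
    """Reject rows with missing fields or wrong types (schema enforcement)."""
    return all(isinstance(row.get(col), typ) for col, typ in EXPECTED_SCHEMA.items())

def detect_anomalies(amounts, z_threshold=3.0):
    """Flag values more than z_threshold sample standard deviations from the mean."""
    mean, stdev = statistics.mean(amounts), statistics.stdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > z_threshold]

rows = [
    {"order_id": 1, "amount": 10.0, "region": "EU"},
    {"order_id": 2, "amount": "bad", "region": "EU"},  # wrong type: rejected
]
valid = [r for r in rows if enforce_schema(r)]

# A gross outlier stands out against a stable baseline.
anomalies = detect_anomalies([10.0] * 20 + [1000.0])
```

In a real Medallion pipeline the same checks would run as PySpark column expressions or Delta Lake constraints between the Bronze and Silver layers, with rejected rows routed to a quarantine table rather than silently dropped.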

Posted 1 month ago
