
102 Unity Catalog Jobs - Page 3

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

6.0 - 11.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Dear Candidate, we have an urgent opening with a multinational company for the Bengaluru location. Interested candidates can share their resume at Deepaksharma@thehrsolutions.in or via WhatsApp at 8882505093. Experience: 5.5+ years. Profile: MLOps. Notice period: immediate joiners or candidates already serving notice only. Job description: MLOps Senior Engineer (Azure ML + Azure Databricks), Bengaluru. 5.5+ years of experience in the AI domain and 3+ years in MLOps (preferably in a large-scale enterprise). Mandatory skills: experience developing an MLOps framework covering the full ML lifecycle (model development, training, evaluation, deployment, and monitoring, including model governance); expertise in Azure Databricks, Azure ML, and Unity Catalog; hands-on experience with Azure DevOps, MLOps CI/CD pipelines, Python, Git, and Docker; experience developing standards and practices for the MLOps lifecycle. Nice-to-have skills: strong understanding of data privacy, compliance, and responsible AI; Azure Data Factory (ADF).
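As a hedged illustration of the model-lifecycle work this posting describes, the sketch below trains a model, logs a metric, and registers the model in Unity Catalog via MLflow. The experiment data, catalog path, and model name are invented placeholders, not details from the posting.

```python
# Minimal sketch of one MLOps lifecycle step on Databricks: train, evaluate,
# and register a model in Unity Catalog via MLflow. All names are assumptions.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_registry_uri("databricks-uc")  # register models in Unity Catalog

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)  # evaluation tracked per run
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        input_example=X_test[:5],  # lets MLflow infer the signature UC needs
        registered_model_name="main.ml_models.churn_rf",  # hypothetical UC name
    )
```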

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Data Engineer specializing in Databricks, you will play a crucial role in designing, developing, and optimizing our next-generation data platform. You will lead a team of data engineers, offering technical guidance and mentorship while ensuring the scalability and high performance of data solutions. You will lead the design, development, and implementation of scalable, reliable data pipelines using Databricks, Spark, and related technologies; define and enforce data engineering best practices, coding standards, and architectural patterns; and provide technical guidance and mentorship to junior and mid-level data engineers, conducting code reviews and ensuring the quality, performance, and maintainability of data solutions.

Your Databricks expertise will be essential as you architect and implement solutions on the Databricks platform, including the Databricks Lakehouse, Delta Lake, and Unity Catalog. Daily tasks include optimizing Spark workloads for performance and cost efficiency, developing and managing Databricks notebooks, jobs, and workflows, and working proficiently with features such as Delta Live Tables (DLT), Photon, and SQL Analytics.

On the pipeline development and operations side, you will develop, test, and deploy robust ETL/ELT pipelines for data ingestion, transformation, and loading from sources such as relational databases, APIs, and streaming data. You will also implement monitoring, alerting, and logging for data pipelines to ensure operational excellence, and troubleshoot and resolve complex data-related issues.

Collaboration and communication are crucial: you will work closely with cross-functional teams, including product managers, data scientists, and software engineers, and communicate complex technical concepts clearly to both technical and non-technical stakeholders. Staying current with industry trends and emerging technologies in data engineering and Databricks is also expected.

Key skills for this role include extensive hands-on experience with the Databricks platform (Databricks Workspace, Spark on Databricks, Delta Lake, and Unity Catalog), strong proficiency in optimizing Spark jobs and understanding Spark architecture, experience with Delta Live Tables (DLT), Photon, and Databricks SQL Analytics, and a deep understanding of data warehousing concepts, dimensional modeling, and data lake architectures.
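Since this posting highlights Delta Live Tables, here is a minimal hedged sketch of a DLT pipeline with a data-quality expectation. The landing path and table names are invented for illustration; `spark` is the session DLT provides inside a pipeline notebook.

```python
# Minimal Delta Live Tables sketch: a bronze ingest plus a validated silver
# table. Paths and table names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed from cloud storage (illustrative path).")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/raw/orders/")  # assumed landing location
    )

@dlt.table(comment="Cleansed orders with a basic quality check.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows failing the rule
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("ingested_at", F.current_timestamp())
    )
```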

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Sr. Data Analytics Engineer at Ajmera Infotech Private Limited (AIPL) in Bengaluru, you will help build planet-scale software for NYSE-listed clients, enabling mission-critical decisions that must not fail. With 5-9 years of experience, you will join a 120-engineer team specializing in highly regulated domains such as HIPAA, FDA, and SOC 2, delivering production-grade systems that turn data into a strategic advantage.

You will make end-to-end impact by building full-stack analytics solutions, from lakehouse pipelines to real-time dashboards. Fail-safe engineering practices such as TDD, CI/CD, DAX optimization, Unity Catalog, and cluster tuning will be part of your daily routine, on a modern stack including Databricks, PySpark, Delta Lake, Power BI, and Airflow. In a mentorship culture, you will lead code reviews, share best practices, and grow as a domain expert. The context is mission-critical: helping enterprises migrate legacy analytics into cloud-native, governed platforms with a compliance-first mindset in HIPAA-aligned environments.

Key responsibilities include building scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks; orchestrating workflows with Databricks Workflows or Airflow; designing dimensional models with Unity Catalog and Great Expectations validation; delivering robust Power BI solutions; migrating legacy SSRS reports to Power BI; optimizing compute and cost; and collaborating cross-functionally to convert product analytics needs into resilient BI assets.

To excel in this role, you must have 5+ years of experience in analytics engineering, with at least 3 years in production Databricks/Spark contexts. Advanced skills in SQL, PySpark, Delta Lake, Unity Catalog, and Power BI are essential, along with SSRS-to-Power BI migration experience, Git, CI/CD, cloud platform experience (Azure/AWS), and strong communication skills. Nice-to-have skills include the Databricks Data Engineer Associate certification, experience with streaming pipelines, data quality frameworks such as dbt and Great Expectations, familiarity with BI platforms like Tableau and Looker, and cost governance knowledge.

Ajmera offers competitive compensation, flexible hybrid schedules, and a deeply technical culture where engineers lead the narrative. If you are passionate about building reliable, audit-ready data products and want to own systems from raw ingestion to KPI dashboards, apply now and engineer insights that matter.
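As a hedged sketch of the dimensional-modeling work mentioned above, the snippet below upserts into a Delta dimension table with a MERGE (SCD Type 1 style). The staging table, target table, and key column are assumptions for illustration; `spark` is the Databricks-provided session.

```python
# Hedged sketch: idempotent upsert into a Delta dimension table.
# Table and column names are hypothetical.
from delta.tables import DeltaTable

updates_df = spark.read.table("main.staging.customer_updates")  # assumed staging table

dim = DeltaTable.forName(spark, "main.gold.dim_customer")
(
    dim.alias("t")
    .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # overwrite changed attributes
    .whenNotMatchedInsertAll()   # add new customers
    .execute()
)
```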

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 18 Lacs

Navi Mumbai

Remote

Position Title: Senior Databricks Administrator
Contract Duration: 1-2 years (contract-to-hire based on performance)
Interview Process: 2 technical rounds
Location: Navi Mumbai (Finicity). Open to remote candidates; preference for candidates based in Navi Mumbai, Mumbai, or Pune who can commute to the office 2-3 times per month.
Shift Timings: 7:00 AM - 4:00 PM IST | 11:00 AM - 8:00 PM IST | 2:00 PM - 11:00 PM IST | 6:00 PM - 3:30 AM IST | 10:30 PM - 7:30 AM IST

Job Description: 6+ years of experience managing Databricks on AWS or other cloud platforms. Strong knowledge of Databricks architecture, Unity Catalog, and cluster setup. Experience with IAM policies, SCIM integration, and access workflows. Skilled in monitoring, cost control, and governance of large Databricks setups. Hands-on with Terraform and CI/CD tools. Familiar with ETL tools and workflow orchestration (e.g., Airflow, Databricks Jobs).

Responsibilities:
1. Databricks Platform Management: Set up and manage Databricks workspaces for development, testing, and production. Apply rules to control how resources are used and keep costs in check. Manage Unity Catalog for organizing data and controlling access. Connect Databricks with identity providers such as Okta or AWS SSO for user access. Set up CI/CD pipelines to automate code and workflow deployments.
2. Security & Compliance: Set up role-based access controls (RBAC) and data permissions. Ensure the platform meets data privacy laws such as GDPR and Open Banking. Manage secure access to data using tokens and secrets. Work with security teams to follow compliance rules.
3. Monitoring & Support: Monitor system performance, job runs, and user activity. Help fix issues such as job failures or system errors (L2/L3 support). Maintain documentation, automation scripts, and alerts for system health.
4. Automation & Best Practices: Use Terraform (IaC) to automate environment setup. Define naming rules and tagging standards for resources. Promote reusable templates and shared configurations. Lead training and knowledge-sharing sessions.

If interested, share your profile at nusrath.begum@priglobal.com along with the following details: Total Experience, Current CTC, Expected CTC, Notice Period.
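To illustrate the kind of programmatic administration this posting describes, here is a hedged sketch using the Databricks SDK for Python to flag clusters with no auto-termination window, a common cost-control check. It assumes the databricks-sdk package is installed and workspace credentials are configured; it is an illustrative pattern, not part of the posting.

```python
# Hedged admin sketch: list clusters and flag any without auto-termination.
# Requires DATABRICKS_HOST / DATABRICKS_TOKEN (or equivalent) in the environment.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up credentials from the environment

for cluster in w.clusters.list():
    if not cluster.autotermination_minutes:
        print(f"Cluster {cluster.cluster_name} ({cluster.cluster_id}) "
              "has no auto-termination configured")
```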

Posted 2 months ago

Apply

10.0 - 17.0 years

32 - 45 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Who we are: Tiger Analytics is a global leader in AI and analytics, helping Fortune 1000 companies solve their toughest challenges. We offer full-stack AI and analytics services and solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow. Our team of 4000+ technologists and consultants is based in the US, Canada, the UK, India, Singapore, and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. We are Great Place to Work-Certified (2022-24), recognized by analyst firms such as Forrester, Gartner, HFS, Everest, and ISG, and have been ranked among the 'Best' and 'Fastest Growing' analytics firms by Inc., Financial Times, Economic Times, and Analytics India Magazine.

Curious about the role? What would your typical day look like?

Role Overview: We are looking for experienced Data Architects / Senior Data Architects to join our teams in Chennai, Bangalore, or Hyderabad. In this role, you will lead the architecture, design, and delivery of modern data platforms, including Data Lakes, Lakehouses, and Data Mesh, using Azure and Databricks. This is a hybrid role involving hands-on development, customer engagement, and technical leadership, where you will collaborate across teams to drive scalable and innovative data solutions end-to-end.

Key Responsibilities: Architect and implement data solutions leveraging Azure and the Databricks ecosystem. Own the complete lifecycle of data platform implementations, from requirements gathering and platform selection to architecture design and deployment. Work closely with data scientists, application developers, and business stakeholders to deliver enterprise-grade data solutions. Continuously explore emerging technologies to solve business problems creatively. Mentor and guide a team of data engineers, promoting best practices and innovation. Contribute to broader organizational initiatives including capability building, solution development, talent acquisition, and industry events.

Job Requirements: 8+ years of overall technical experience, with at least 4 years working hands-on with Microsoft Azure and Databricks. Experience leading at least 2 end-to-end Data Lakehouse projects on Azure Databricks involving Medallion Architecture. Deep expertise in the Databricks ecosystem, including PySpark, Notebooks, Unity Catalog, Delta Live Tables, Workflows, SQL Warehouse, Mosaic AI, and AI/BI Genie. Hands-on experience building modern data platforms on Azure using tools and services such as Azure Databricks, Azure Data Factory, ADLS Gen2, SQL Database, Microsoft Fabric, Event Hub, Stream Analytics, Cosmos DB, Azure Purview, Log Analytics, and Azure Data Explorer. Experience designing and developing metadata-driven frameworks for data engineering processes. Strong programming, debugging, and performance tuning skills in Python and SQL. Good experience with data modeling (both dimensional and 3NF). Exposure to developing LLM/GenAI-powered applications. Sound understanding of CI/CD processes using Git and Jenkins or Azure DevOps. Familiarity with big data platforms such as Cloudera (CDH) or Hortonworks (HDP) is a plus. Exposure to technologies like Neo4j, Elasticsearch, and vector databases is desirable. Bonus: experience with Azure infrastructure provisioning, networking, security, and governance.

Educational Background: Bachelor's degree (B.E/B.Tech) in Computer Science, Information Technology, or a related field from a reputed institute (preferred).

You are important to us; let's stay connected! Every individual brings a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable or unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire, and our packages are among the best in the industry.
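Given the posting's emphasis on Medallion Architecture, here is a hedged bronze-to-silver sketch in PySpark. The storage account, layer paths, and columns are illustrative assumptions, and `spark` is the Databricks-provided session.

```python
# Hedged medallion sketch: promote raw bronze records to a cleansed,
# deduplicated silver Delta table. Paths and columns are hypothetical.
from pyspark.sql import functions as F

bronze = spark.read.format("delta").load(
    "abfss://lake@account.dfs.core.windows.net/bronze/sales"
)

silver = (
    bronze
    .dropDuplicates(["order_id"])                     # dedupe on business key
    .filter(F.col("order_ts").isNotNull())            # basic cleansing rule
    .withColumn("order_date", F.to_date("order_ts"))  # standardize schema
)

silver.write.format("delta").mode("overwrite").save(
    "abfss://lake@account.dfs.core.windows.net/silver/sales"
)
```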

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Scientist focused on Predictive Analytics with expertise in Databricks, your primary responsibilities will involve designing and implementing predictive models for applications such as forecasting, churn analysis, and fraud detection, using tools like Python, SQL, Spark MLlib, and Databricks ML to deploy these models effectively. You will build end-to-end machine learning pipelines on the Databricks Lakehouse platform, encompassing data ingestion, feature engineering, model training, and deployment, and optimize model performance through techniques like hyperparameter tuning, AutoML, and MLflow-based experiment tracking.

Collaboration with engineering teams will be key to operationalizing models, in both batch and real-time scenarios, using Databricks Jobs or REST APIs. You will implement Delta Lake to support scalable, ACID-compliant data workflows and enable CI/CD for machine learning pipelines using Databricks Repos and GitHub Actions. Troubleshooting Spark jobs and resolving issues within the Databricks environment will also be part of your routine.

To excel in this role, you should have 3 to 5 years of experience in predictive analytics, with a strong background in regression, classification, and time-series modeling. Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark is crucial. Familiarity with MLflow, Feature Store, and Unity Catalog for governance is advantageous. Industry experience in Life Insurance or Property & Casualty (P&C) is preferred, and a Databricks Certified ML Practitioner certification is a plus. Your technical skill set should include proficiency in Python, PySpark, MLflow, and Databricks AutoML; expertise in predictive modeling techniques such as classification, clustering, regression, time series analysis, and NLP; and familiarity with cloud platforms like Azure or AWS, Delta Lake, and Unity Catalog.
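As a hedged sketch of the churn-modeling and hyperparameter-tuning work this posting describes, here is a Spark MLlib pipeline with cross-validation. The feature table and column names are assumptions, and `spark` is the Databricks-provided session.

```python
# Hedged sketch: churn classification with hyperparameter tuning in MLlib.
# The input table and columns are hypothetical.
from pyspark.ml import Pipeline
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

df = spark.read.table("main.analytics.churn_features")  # assumed feature table

assembler = VectorAssembler(
    inputCols=["tenure_months", "monthly_charges", "support_tickets"],
    outputCol="features",
)
gbt = GBTClassifier(labelCol="churned", featuresCol="features")

grid = (ParamGridBuilder()
        .addGrid(gbt.maxDepth, [3, 5])
        .addGrid(gbt.maxIter, [20, 50])
        .build())

cv = CrossValidator(
    estimator=Pipeline(stages=[assembler, gbt]),
    estimatorParamMaps=grid,
    evaluator=BinaryClassificationEvaluator(labelCol="churned"),
    numFolds=3,
)
model = cv.fit(df)  # on Databricks, MLflow autologging can track each run
```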

Posted 2 months ago

Apply

12.0 - 16.0 years

0 Lacs

karnataka

On-site

As a Senior Data Modeller, you will lead the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts.

Key responsibilities include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity-relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements.

To succeed in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential, along with a background in data normalization, denormalization, dimensional modeling, and schema design, and hands-on experience with data modeling tools like ERwin. Experience in Agile or Scrum environments, proficiency in integration, databases, data warehouses, and data processing, and a track record of successfully selling data and analytics software to enterprise customers are key requirements. Your technical expertise should cover Big Data, streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. The ability to design and scale data pipelines and architectures in complex environments, plus excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets.
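As a hedged illustration of the physical dimensional modeling described above, the snippet below creates a simple star-schema pair under a Unity Catalog three-level namespace. Every catalog, schema, table, and column name here is invented.

```python
# Hedged star-schema sketch: one dimension and one fact table under a
# Unity Catalog namespace. All names are hypothetical.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.dim_product (
        product_key BIGINT,
        product_name STRING,
        category STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS main.sales.fct_orders (
        order_id BIGINT,
        product_key BIGINT,   -- foreign key to dim_product
        order_date DATE,
        amount DECIMAL(12, 2)
    ) USING DELTA
    PARTITIONED BY (order_date)
""")
```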

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Remote

We are seeking a skilled Azure Data Engineer with strong Power BI capabilities to design, build, and maintain enterprise data lakes on Azure, ingest data from diverse sources, and develop insightful reports and dashboards. This role requires hands-on experience in Azure data services, ETL processes, and BI visualization to support data-driven decision-making. Key Responsibilities Design and implement end-to-end data pipelines using Azure Data Factory (ADF) for batch ingestion from various enterprise sources. Build and maintain a multi-zone Medallion Architecture data lake in Azure Data Lake Storage Gen2 (ADLS Gen2), including raw staging with metadata tracking, silver layer transformations (cleansing, enrichment, schema standardization), and gold layer curation (joins, aggregations). Perform data processing and transformations using Azure Databricks (PySpark/SQL) and ADF, ensuring data lineage, traceability, and compliance. Integrate data governance and security using Databricks Unity Catalog, Azure Active Directory (Azure AD), Role-Based Access Control (RBAC), and Access Control Lists (ACLs) for fine-grained access. Develop and optimize analytical reports and dashboards in Power BI, including KPI identification, custom visuals, responsive designs, and export functionalities to Excel/Word. Conduct data modeling, mapping, and extraction during discovery phases, aligning with functional requirements for enterprise analytics. Collaborate with cross-functional teams to define schemas, handle API-based ingestion (REST/OData), and implement audit trails, logging, and compliance with data protection policies. Participate in testing (unit, integration, performance), UAT support, and production deployment, ensuring high availability and scalability. Create training content and provide knowledge transfer on data lake implementation and Power BI usage. Monitor and troubleshoot pipelines, optimizing for batch processing efficiency and data quality. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 5+ years of experience in data engineering, with at least 3 years focused on Azure cloud services. Proven expertise in Azure Data Factory (ADF) for ETL/orchestration, Azure Data Lake Storage Gen2 (ADLS Gen2) for data lake management, and Azure Databricks for Spark-based transformations. Strong proficiency in Power BI for report and dashboard development, including DAX, custom visuals, data modeling, and integration with Azure data sources (e.g., DirectQuery or Import modes). Hands-on experience with Medallion Architecture (raw/silver/gold layers), data wrangling, and multi-source joins. Familiarity with API ingestion (REST, OData) from enterprise systems. Solid understanding of data governance tools like Databricks Unity Catalog, Azure AD for authentication, and RBAC/ACLs for security. Proficiency in SQL, PySpark, and data modeling techniques for dimensional and analytical schemas. Experience in agile methodologies, with the ability to deliver phased outcomes. Preferred Skills Certifications such as Microsoft Certified: Azure Data Engineer Associate (DP-203) or Power BI Data Analyst Associate (PL-300). Knowledge of Azure Synapse Analytics, Azure Monitor for logging, and integration with hybrid/on-premises sources. Experience in domains like energy, mobility, or enterprise analytics, with exposure to moderate data volumes. Strong problem-solving skills, with the ability to handle rate limits, pagination, and dynamic data in APIs. 
Familiarity with tools like Azure DevOps for CI/CD and version control of pipelines/notebooks. What We Offer: Opportunity to work on cutting-edge data transformation projects. Competitive salary and benefits package. Collaborative environment with access to advanced Azure tools and training. Flexible work arrangements and professional growth opportunities. If you are a proactive engineer passionate about building scalable data solutions and delivering actionable insights, apply now.
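Since this posting calls out API ingestion with rate limits and pagination, here is a hedged sketch of a paginated REST pull landed into a raw (bronze) zone. The endpoint, auth scheme, pagination shape, and landing path are all assumptions; `dbutils` is the utility object Databricks provides.

```python
# Hedged sketch: paginated REST extraction with simple rate-limit backoff,
# landing raw JSON for the bronze layer. Endpoint and paths are hypothetical.
import json
import time

import requests

BASE_URL = "https://api.example.com/v1/records"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # assumed auth scheme

page, rows = 1, []
while True:
    resp = requests.get(BASE_URL, headers=HEADERS, params={"page": page}, timeout=30)
    if resp.status_code == 429:  # back off when rate-limited
        time.sleep(int(resp.headers.get("Retry-After", "5")))
        continue
    resp.raise_for_status()
    batch = resp.json().get("items", [])
    if not batch:
        break  # no more pages
    rows.extend(batch)
    page += 1

dbutils.fs.put("/mnt/raw/records/extract.json", json.dumps(rows), True)
```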

Posted 2 months ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Hyderabad

Remote

Databricks Administrator (Azure/AWS) | Remote | 6+ years. Job Description: We are looking for an experienced Databricks Administrator to manage and optimize our Databricks environment on AWS. You will be responsible for setting up and maintaining workspaces, clusters, access control, and integrations, while ensuring security, performance, and governance. Key Responsibilities: Databricks Administration: manage Databricks workspaces, clusters, and jobs across AWS. User & Access Management: control user roles, permissions, and workspace-level security. Unity Catalog & Data Governance: set up and manage Unity Catalog and implement data governance policies. Security & Network Configuration: configure encryption, authentication, VPCs, private links, and networking on AWS. Integration & Automation: integrate with cloud services and BI tools, and automate processes using Python, Terraform, and Git. Monitoring & CI/CD: implement monitoring (CloudWatch, Prometheus, etc.) and manage CI/CD pipelines using GitLab, Jenkins, or similar. Collaboration: work closely with data engineers, analysts, and DevOps teams to support data workflows. Must-Have Skills: Strong experience with Databricks on AWS. Unity Catalog setup and governance best practices. AWS network/security configuration (VPC, IAM, KMS). Experience with CI/CD tools (Git, Jenkins, etc.). Terraform and Infrastructure as Code (IaC). Scripting knowledge in Python or Shell. Email: Hrushikesh.akkala@numerictech.com. Phone/WhatsApp: 9700111702. For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
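As a hedged sketch of the Unity Catalog governance work this posting mentions, the snippet below expresses access grants as Unity Catalog SQL. The catalog, schema, table, and group names are hypothetical.

```python
# Hedged governance sketch: Unity Catalog grants expressed as SQL.
# All securable and principal names are invented for illustration.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE main.sales.fct_orders TO `data_analysts`")
```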

Posted 2 months ago

Apply

5.0 - 9.0 years

10 - 20 Lacs

Hyderabad, Ahmedabad, Bengaluru

Work from Office

Sr. Data Analytics Engineer: power mission-critical decisions with governed insights. Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can't afford to fail. Our 120-engineer team specializes in highly regulated domains (HIPAA, FDA, SOC 2) and delivers production-grade systems that turn data into strategic advantage. Why You'll Love It: End-to-end impact: build full-stack analytics from lakehouse pipelines to real-time dashboards. Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning. Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow. Mentorship culture: lead code reviews, share best practices, grow as a domain expert. Mission-critical context: help enterprises migrate legacy analytics into cloud-native, governed platforms. Compliance-first mindset: work in HIPAA-aligned environments where precision matters. Key Responsibilities: Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks. Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting (see the sketch after this listing). Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation. Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX. Migrate legacy SSRS reports to Power BI with zero loss of logic or governance. Optimize compute and cost through cache tuning, partitioning, and capacity monitoring. Document everything, from pipeline logic to RLS rules, in Git-controlled formats. Collaborate cross-functionally to convert product analytics needs into resilient BI assets. Champion mentorship by reviewing notebooks and dashboards and sharing platform standards. Must-Have Skills: 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts. Advanced SQL (incl. windowing), expert PySpark, Delta Lake, Unity Catalog. Power BI mastery: DAX optimization, security rules, paginated reports. SSRS-to-Power BI migration experience (RDL logic replication). Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS). Communication skills to bridge technical and business audiences. Nice-to-Have Skills: Databricks Data Engineer Associate certification. Streaming pipeline experience (Kafka, Structured Streaming). dbt, Great Expectations, or similar data quality frameworks. BI diversity: experience with Tableau, Looker, or similar platforms. Cost governance familiarity (Power BI Premium capacity, Databricks chargeback). Benefits & Call-to-Action: Ajmera offers competitive compensation, flexible schedules, and a deeply technical culture where engineers lead the narrative. If you're driven by reliable, audit-ready data products and want to own systems from raw ingestion to KPI dashboards, apply now and engineer insights that matter.
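As referenced in the responsibilities above, here is a hedged Airflow sketch showing SLA-style retries and failure alerting. The task body, schedule, and notification address are illustrative assumptions, and the `schedule` parameter name follows recent Airflow 2.x releases.

```python
# Hedged orchestration sketch: an Airflow DAG with retries and a failure
# alert hook. All names, emails, and the schedule are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

default_args = {
    "retries": 3,                          # SLA-backed retries
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,              # alerting hook
    "email": ["data-oncall@example.com"],  # hypothetical address
}

def refresh_gold_tables():
    print("Trigger the Databricks job or notebook here")  # placeholder body

with DAG(
    dag_id="gold_layer_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # nightly at 02:00
    default_args=default_args,
    catchup=False,
) as dag:
    PythonOperator(task_id="refresh_gold", python_callable=refresh_gold_tables)
```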

Posted 2 months ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant / Data Engineer. In this role, you will collaborate closely with cross-functional teams, including developers, business analysts, and stakeholders, to deliver high-quality software solutions that enhance operational efficiency and support strategic business objectives.

Responsibilities: Provide technical leadership and architectural guidance on data engineering projects. Design and implement data pipelines, data lakes, and data warehouse solutions. Optimize Spark-based data workflows for performance, scalability, and cost-efficiency. Ensure robust data governance and security, including the implementation of Unity Catalog. Collaborate with data scientists, business users, and engineering teams to align solutions with business goals. Stay updated with evolving platform features, best practices, and industry trends. Develop new reports and update existing reports as requested by customers. Automate reports through the creation of config files. Validate the premium in reports against the IMS application, via config files, to ensure there are no discrepancies. Validate all reports that run on a monthly basis and analyze them if any discrepancy is found.

Required expertise: Proven data engineering expertise, including Spark, Delta Lake, and Unity Catalog. Strong background in data engineering, with hands-on experience building production-grade data pipelines and lakes. Proficiency in Python (preferred) or Scala for data transformation and automation. Strong command of SQL and Spark SQL for data querying and processing. Experience with cloud platforms such as Azure, AWS, or GCP. Familiarity with DevOps/DataOps practices in data pipeline development. Knowledge of Profisee or other Master Data Management (MDM) tools is a plus. Certifications in Data Engineering or Spark. Experience with Delta Live Tables, Structured Streaming, or metadata-driven frameworks.

Qualifications we seek in you! Minimum Qualifications: BE/B.Tech/MCA. Preferred Qualifications/Skills: Excellent analytical, problem-solving, communication, and interpersonal skills. Able to work effectively in a fast-paced, sometimes stressful environment and deliver production-quality software within tight schedules. Results-oriented, self-motivated, and able to thrive in a fast-paced environment. Strong Specialty Insurance domain and IT knowledge.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
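Since this posting lists Structured Streaming experience, here is a hedged sketch of incremental ingestion with Databricks Auto Loader into a Delta table. The landing folder, checkpoint paths, and target table are assumptions; `spark` is the Databricks-provided session.

```python
# Hedged sketch: incremental file ingestion with Auto Loader (cloudFiles)
# written to a Delta table. Paths and table names are hypothetical.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/policies/schema")
    .load("/mnt/landing/policies/")  # assumed landing folder
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/policies/ingest")
    .trigger(availableNow=True)      # batch-style incremental run
    .toTable("main.bronze.policies") # assumed Unity Catalog target
)
```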

Posted 2 months ago

Apply

10.0 - 13.0 years

5 - 10 Lacs

Hyderabad, Telangana, India

On-site

Looking for a 10+ years, highly experienced and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required. AWS: deep hands-on expertise with key AWS data services and infrastructure. Databricks: expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog. Coding: exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL. Architectural: strong data modeling and architectural design skills with a focus on practical implementation. Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools. Design & Build: lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog). Databricks Expertise: own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks. Mandatory Skills: AWS, Databricks.
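As a hedged illustration of the schema-modeling and AWS-plus-Databricks stack named above, the snippet below enforces an explicit schema while landing S3 data into a Delta table. The bucket, paths, and fields are hypothetical.

```python
# Hedged sketch: land S3 JSON into Delta with an explicit, enforced schema.
# Bucket, paths, and fields are invented for illustration.
from pyspark.sql.types import (DecimalType, StringType, StructField,
                               StructType, TimestampType)

schema = StructType([
    StructField("event_id", StringType(), nullable=False),
    StructField("event_ts", TimestampType(), nullable=False),
    StructField("amount", DecimalType(12, 2), nullable=True),
])

df = spark.read.schema(schema).json("s3://example-lake/raw/events/")
df.write.format("delta").mode("append").saveAsTable("main.bronze.events")
```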

Posted 2 months ago

Apply

10.0 - 12.0 years

10 - 12 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities: Workspace Management: Create and manage Databricks workspaces, ensuring proper configuration and access control. User & Identity Management: Administer user roles, permissions, and authentication mechanisms. Cluster Administration: Configure, monitor, and optimize Databricks clusters for efficient resource utilization. Security & Compliance: Implement security best practices, including data encryption, access policies, and compliance adherence. Performance Optimization: Troubleshoot and resolve performance issues related to Databricks workloads. Integration & Automation: Work with cloud platforms (AWS, Azure, GCP) to integrate Databricks with other services. Monitoring & Logging: Set up monitoring tools and analyze logs to ensure system health. Data Governance: Manage Unity Catalog and other governance tools for structured data access. Collaboration: Work closely with data engineers, analysts, and scientists to support their workflows. Qualifications: Proficiency in Python or Scala for scripting and automation. Knowledge of cloud platforms (AWS). Familiarity with Databricks Delta Lake and MLflow. Understanding of ETL processes and data warehousing concepts. Strong problem-solving and analytical skills.
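To hedge an example of the cluster administration this posting describes, here is a sketch creating a small auto-terminating cluster with the Databricks SDK for Python. The runtime label and node type are illustrative; real values depend on the workspace and cloud.

```python
# Hedged cluster-administration sketch with the Databricks SDK for Python.
# Runtime version and node type are assumptions, not workspace facts.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

cluster = w.clusters.create(
    cluster_name="etl-small",
    spark_version="14.3.x-scala2.12",  # assumed LTS runtime label
    node_type_id="i3.xlarge",          # assumed AWS instance type
    num_workers=2,
    autotermination_minutes=30,        # cost-control default
).result()                             # wait until the cluster is running

print(f"Cluster ready: {cluster.cluster_id}")
```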

Posted 2 months ago

Apply

12.0 - 14.0 years

6 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Looking for a 10+ years, highly experienced and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required. AWS: deep hands-on expertise with key AWS data services and infrastructure. Databricks: expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog. Coding: exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL. Architectural: strong data modeling and architectural design skills with a focus on practical implementation. Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools. Design & Build: lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog). Databricks Expertise: own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.

Posted 2 months ago

Apply

12.0 - 14.0 years

6 - 11 Lacs

Hyderabad, Telangana, India

On-site

Looking for a 10+ years, highly experienced and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production. Strong hands-on experience in data engineering/architecture, hands-on architectural and implementation experience on AWS and Databricks, and schema modeling are required. AWS: deep hands-on expertise with key AWS data services and infrastructure. Databricks: expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog. Coding: exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL. Architectural: strong data modeling and architectural design skills with a focus on practical implementation. Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools. Design & Build: lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog). Databricks Expertise: own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Ready to build the future with AI? At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Principal Consultant - Databricks Developer (AWS)! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities: Maintain close awareness of new and emerging technologies and their potential application for service offerings and products. Work with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrate knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain.

Qualifications we seek in you! Minimum qualifications: Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have implemented at least 2 projects end-to-end in Databricks. Must have experience with the following Databricks components: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration. Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of how to create complex data pipelines. Must have good knowledge of data structures and algorithms. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit and integration tests. Must have strong communication skills and have worked in teams of 5+. Must have a great attitude toward learning new skills and upskilling existing ones.

Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL Endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of dbt. Good to have knowledge of Docker and Kubernetes.

Why join Genpact? Lead AI-first transformation: build and scale AI solutions that redefine industries. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills. Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace. Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
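Since this posting stresses unit testing of data pipelines, here is a hedged sketch of the practice: a pure PySpark transformation function plus a pytest case against a local SparkSession. The function and column names are invented for illustration.

```python
# Hedged sketch: a testable transformation and its pytest case.
# Function name and columns are hypothetical.
import pytest
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def add_net_amount(df: DataFrame) -> DataFrame:
    """Derive net_amount = amount - discount (simple, testable logic)."""
    return df.withColumn("net_amount", F.col("amount") - F.col("discount"))

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_net_amount(spark):
    df = spark.createDataFrame([(100.0, 10.0)], ["amount", "discount"])
    result = add_net_amount(df).first()
    assert result.net_amount == 90.0
```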

Posted 2 months ago

Apply


0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities: Maintain close awareness of new and emerging technologies and their potential application for service offerings and products. Work with architects and lead engineers on solutions to meet functional and non-functional requirements. Demonstrate knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain.

Qualifications we seek in you! Minimum qualifications: Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have implemented at least 4 projects end-to-end in Databricks. Must-have skills: Azure Data Factory, Azure Databricks, Python, and PySpark. Expert with database technologies and ETL tools. Hands-on experience designing and developing scripts for custom ETL processes and automation in Azure Data Factory, Azure Databricks, Delta Lake, Databricks workflows orchestration, Python, PySpark, etc. Good knowledge of the Azure, AWS, and GCP cloud platform service stacks. Good knowledge of Unity Catalog implementation, of integration with other tools such as dbt and other transformation tools, and of Unity Catalog integration with Snowflake. Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of how to create complex data pipelines. Must have good knowledge of data structures and algorithms. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit and integration tests. Must have strong communication skills and have worked in teams of 5+. Must have a great attitude toward learning new skills and upskilling existing ones.

Preferred Qualifications: Good to have Unity Catalog and basic governance knowledge. Good to have an understanding of Databricks SQL Endpoints. Good to have CI/CD experience building pipelines for Databricks jobs. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of dbt. Good to have knowledge of Docker and Kubernetes.

Why join Genpact? Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities. Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
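Since this posting references metadata-driven frameworks, here is a hedged sketch of the pattern: a small config list drives which sources are loaded and where they land. The source paths, formats, and target tables are assumptions; `spark` is the Databricks-provided session.

```python
# Hedged metadata-driven ingestion sketch: one loop, many sources.
# Paths, formats, and targets are hypothetical.
TABLE_CONFIG = [
    {"source": "/mnt/landing/customers/", "format": "json",
     "target": "main.bronze.customers"},
    {"source": "/mnt/landing/orders/", "format": "csv",
     "target": "main.bronze.orders"},
]

for cfg in TABLE_CONFIG:
    (
        spark.read.format(cfg["format"])
        .option("header", "true")    # needed for CSV, harmless for JSON
        .load(cfg["source"])
        .write.format("delta")
        .mode("append")
        .saveAsTable(cfg["target"])  # one loop handles every configured source
    )
```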

Posted 2 months ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work At Genpact, we don&rsquot just adapt to change&mdashwe drive it. AI and digital innovation are redefining industries, and we&rsquore leading the charge. Genpact&rsquos , our industry-first accelerator, is an example of how we&rsquore scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that&rsquos shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions - we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation , our teams implement data, technology, and AI to create tomorrow, today. Get to know us at and on , , , and . Inviting applications for the role of Lead Consultant - Databricks Platform Admin! Databricks Platform Admin is more to look at configuration issues, upgrade, patches, security, Databricks platform integration with AWS, IDM and other technologies, etc. Responsibilities Experience as the Databricks account owner, managing workspaces, AWS accounts, audit logs, and high-level usage monitoring Good understanding of Unity catalog. DBR Workspace and Configuration setups. Support the administrative duties such as user access provisioning, unity catalog and object creation. Compute administration. Experience optimizing usage for performance and cost Experience of building infrastructure as a code (Terraform/ Cloud formation template) Experience as Databricks workspace admin, managing workspace users and groups including single sign-on, provisioning, access control, and workspace storage. Experience managing S3 access across a large user base Experience managing cluster and jobs configuration options Experience with Databricks security and privacy setup Experience troubleshooting end user and platform-level issues Experience delivering client presentations and demos Ability to multitask and reprioritize tasking on the fly according to the needs of a growing platform and its stakeholders Qualifications Minimum qualifications Overall experience required . Experience as the Databricks account owner, managing workspaces, AWS accounts, audit logs, and high-level usage monitoring Good understanding of Unity catalog Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let&rsquos build tomorrow together. 
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 2 months ago

Apply

4.0 - 7.0 years

15 - 20 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Must have excellent coding skills in either Python or Scala, preferably Python. Must have at least 5 years of experience in the Data Engineering domain, with 7+ years in total. Must have implemented at least 2 projects end-to-end in Databricks. Must have at least 2+ years of experience on Databricks, covering components such as: Delta Lake, dbConnect, DB API 2.0, and Databricks workflows orchestration (a minimal Delta Lake sketch follows this posting).
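
A minimal sketch of the Delta Lake skills listed above: an upsert via MERGE plus a time-travel read. The table and column names (dim_customers, id, name) are hypothetical.

```python
# Hypothetical example: Delta Lake MERGE (upsert) and time travel.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Incoming change set to merge into the target dimension table.
updates = spark.createDataFrame(
    [(1, "alice", "2024-01-02"), (3, "carol", "2024-01-02")],
    ["id", "name", "updated_at"],
)

target = DeltaTable.forName(spark, "dim_customers")   # hypothetical target
(
    target.alias("t")
    .merge(updates.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()       # update rows that already exist
    .whenNotMatchedInsertAll()    # insert rows that are new
    .execute()
)

# Time travel: read the table as of an earlier version for audits/debugging.
previous = spark.read.format("delta").option("versionAsOf", 0).table("dim_customers")
```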

Posted 2 months ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Noida, Pune, Bengaluru

Hybrid

Work Mode: Hybrid (3 days WFO)
Locations: Bangalore, Noida, Pune, Mumbai, Hyderabad (candidates must be in Accion cities to collect assets and attend in-person meetings as required).
Key Requirements - Technical Skills:
Databricks Expertise: 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS/Azure cloud infrastructure. Proficiency in Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), MLflow, and Databricks SQL (a minimal DLT sketch follows this posting). Experience with Databricks CI/CD tools (e.g., Bitbucket, GitHub Actions, Databricks CLI).
Data Warehousing & Engineering: Strong understanding of data warehousing concepts (Dimensional, SCD2, Data Vault, OBT, etc.). Proven ability to implement highly performant data ingestion pipelines from multiple sources. Experience integrating end-to-end Databricks pipelines to ensure data quality and consistency.
Programming: Strong proficiency in Python and SQL. Basic working knowledge of API- or stream-based data extraction processes (e.g., Salesforce API, Bulk API).
Cloud Technologies: Preferred experience with AWS services (e.g., S3, Athena, Glue, Lambda).
Power BI: 3+ years of experience in Power BI and data warehousing for root cause analysis and business improvement opportunities.
Additional Skills: Working knowledge of Data Management principles (quality, governance, security, privacy, lifecycle management, cataloging). Nice to have: Databricks certifications and AWS Solution Architect certification. Nice to have: experience building data pipelines from business applications like Salesforce, Marketo, NetSuite, Workday, etc.
Responsibilities: Develop, implement, and maintain highly efficient ETL pipelines on Databricks. Perform root cause analysis and identify opportunities for data-driven business improvements. Ensure quality, consistency, and governance of all data pipelines and repositories. Work in an Agile/DevOps environment to deliver iterative solutions. Collaborate with cross-functional teams to meet business requirements. Stay updated on the latest Databricks and AWS features, tools, and best practices.
Work Schedule: Regular, 11:00 AM to 8:00 PM; flexibility is required for project-based overlap.
Interested candidates should share their resumes with the following details: Current CTC, Expected CTC, Preferred Location (Bangalore, Noida, Pune, Mumbai, Hyderabad), Notice Period, Contact Information.
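
A minimal Delta Live Tables sketch for the DLT proficiency named above: a streaming bronze ingest plus a silver table guarded by a data-quality expectation. The source path and table names are hypothetical; the file would be attached to a DLT pipeline, where the `dlt` module and `spark` are provided by the runtime.

```python
# Hypothetical DLT pipeline: bronze ingest via Auto Loader, silver with an expectation.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from cloud storage")
def bronze_orders():
    return (
        spark.readStream.format("cloudFiles")        # Auto Loader file source
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/orders/")                    # hypothetical landing path
    )

@dlt.table(comment="Validated orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop bad rows
def silver_orders():
    return (
        dlt.read_stream("bronze_orders")
        .withColumn("processed_at", F.current_timestamp())
    )
```

The expectation is the piece that maps to the posting's "data quality and consistency" requirement: DLT records how many rows each expectation dropped, giving pipeline-level quality metrics for free.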

Posted 2 months ago

Apply

4.0 - 6.0 years

7 - 11 Lacs

Hyderabad, Chennai

Work from Office

Job Title: Data Scientist
Location State: Tamil Nadu, Telangana
Location City: Hyderabad, Chennai
Experience Required: 4 to 6 years
CTC Range: 7 to 11 LPA
Shift: Day Shift
Work Mode: Onsite
Position Type: C2H
Openings: 2
Company Name: VARITE INDIA PRIVATE LIMITED
About The Client: The client is an Indian multinational technology company specializing in information technology services and consulting. Headquartered in Mumbai, it is part of the Tata Group and operates in 150 locations across 46 countries.
About The Job - Requirements: 5+ years in predictive analytics, with expertise in regression, classification, and time-series modeling. Hands-on experience with Databricks Runtime for ML, Spark SQL, and PySpark. Familiarity with MLflow, Feature Store, and Unity Catalog for governance. Industry experience in Life Insurance or P&C.
Skills: Python, PySpark, MLflow, Databricks AutoML. Predictive modeling (classification, clustering, regression, time series, and NLP). Cloud platform (Azure/AWS), Delta Lake, Unity Catalog. Certifications: Databricks Certified ML Practitioner (optional).
Essential Job Functions: Design and deploy predictive models (e.g., forecasting, churn analysis, fraud detection) using Python/SQL, Spark MLlib, and Databricks ML. Build end-to-end ML pipelines (data ingestion, feature engineering, model training, deployment) on Databricks Lakehouse. Optimize model performance via hyperparameter tuning, AutoML, and MLflow tracking (a minimal MLflow sketch follows this posting). Collaborate with engineering teams to operationalize models (batch/real-time) using Databricks Jobs or REST APIs. Implement Delta Lake for scalable, ACID-compliant data workflows. Enable CI/CD for ML pipelines using Databricks Repos and GitHub Actions. Troubleshoot issues in Spark jobs and the Databricks environment.
Qualifications - Skill Required: Data Science, Python for Data Science. Experience Range in Required Skills: 4-6 years.
How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post.
About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is currently a primary and direct vendor to leading corporations in the verticals of Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services.
Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status.
Unlock Rewards: Refer Candidates and Earn. If you're not available or interested in this opportunity, please pass this along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus based on the following scale if the referred candidate completes a three-month assignment with VARITE.
Exp Req - Referral Bonus
0 - 2 Yrs. - INR 5,000
2 - 6 Yrs. - INR 7,500
6+ Yrs. - INR 10,000
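
A minimal sketch of the MLflow tracking workflow the functions above describe: train a classifier, log its parameters and a metric, and log the fitted model as an artifact. The run name, parameters, and synthetic data are hypothetical stand-ins.

```python
# Hypothetical MLflow tracking example: train, evaluate, log.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table (e.g., churn features).
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="churn-baseline"):     # hypothetical run name
    params = {"n_estimators": 200, "learning_rate": 0.05}
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_params(params)                # record hyperparameters
    mlflow.log_metric("test_auc", auc)       # record evaluation metric
    mlflow.sklearn.log_model(model, "model") # store the fitted model artifact
```

On Databricks, runs logged this way appear in the workspace experiment UI, which is what makes hyperparameter sweeps and AutoML comparisons auditable.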

Posted 2 months ago

Apply

4.0 - 9.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Work from Office

Experience: 4 to 12 years
Location: Pune / Mumbai / Chennai / Bangalore
Notice Period: Immediate to 45 days
Proven experience as a Databricks Developer or in a similar role, with a strong focus on PySpark programming. Expertise in the Databricks platform, including Databricks Delta, Spark, and Unity Catalog. Proficiency in Python for data manipulation, analysis, and automation (a minimal PySpark sketch follows this posting). Strong understanding of data engineering concepts, data modeling, ETL processes, and data integration techniques. Excellent communication skills and the ability to collaborate effectively with cross-functional teams. Stay updated with the latest advancements in Databricks, Python, and SQL technologies to continuously improve data solutions. Familiarity with Azure cloud platforms and their data services. Ability to work in an agile development environment and adapt to changing requirements.
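
A minimal PySpark sketch of the data-manipulation work described above: read a Delta table, cleanse and aggregate it, and write a gold-layer result. The table and column names (bronze.orders, gold.daily_revenue, status, amount) are hypothetical.

```python
# Hypothetical PySpark ETL step: bronze -> gold aggregation on Delta tables.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.table("bronze.orders")   # hypothetical source table

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")            # keep finished orders only
    .withColumn("order_date", F.to_date("order_ts"))   # normalize timestamp to date
    .groupBy("order_date", "country")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("buyers"),
    )
)

daily_revenue.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```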

Posted 2 months ago

Apply

5.0 - 7.0 years

14 - 16 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Title: Data/ML Platform Engineer
Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (work from office)
Notice Period: Immediate
iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer, you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products, and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats.
5+ years of hands-on experience using Python, Spark, and SQL. Experienced in AWS cloud usage and management. Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow). Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch. Experience with orchestrators such as Airflow and Kubeflow (a minimal Airflow sketch follows this posting). Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes). Fundamental understanding of Parquet, Delta Lake, and other data file formats. Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation. Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders.
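
A minimal sketch of the Airflow-on-Databricks orchestration named above: a daily DAG that submits a notebook run to Databricks. It assumes the apache-airflow-providers-databricks package and Airflow 2.4+ (older versions use `schedule_interval`); the connection id, cluster spec, and notebook path are hypothetical.

```python
# Hypothetical Airflow DAG submitting a Databricks notebook run once per day.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

with DAG(
    dag_id="daily_feature_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    build_features = DatabricksSubmitRunOperator(
        task_id="build_features",
        databricks_conn_id="databricks_default",   # hypothetical connection id
        json={
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
            "notebook_task": {"notebook_path": "/Repos/ml/feature_build"},
        },
    )
```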

Posted 2 months ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Hyderabad

Work from Office

Role: We're looking for a skilled Databricks Solution Architect to lead the design and implementation of data migration strategies and cloud-based data and analytics transformation on the Databricks platform. This role involves collaborating with stakeholders, analyzing data, defining architecture, building data pipelines, ensuring security and performance, and implementing Databricks solutions for machine learning and business intelligence.
Key Responsibilities: Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks. Design, implement, and optimize scalable, high-performance data architectures using Databricks. Build and manage data pipelines and workflows within Databricks. Ensure that best practices for security, scalability, and performance are followed. Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads. Oversee the technical aspects of the migration process, from planning through to execution (a minimal migration sketch follows this posting). Create documentation of the architecture, migration processes, and solutions. Provide training and support to teams post-migration to ensure they can leverage Databricks.
Preferred candidate profile:
Experience: 7+ years of experience in data engineering, cloud architecture, or related fields. 3+ years of hands-on experience with Databricks, including the implementation of data engineering solutions, migration projects, and optimizing workloads. Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks. Experience in end-to-end data migration projects involving large-scale data infrastructure. Familiarity with ETL tools, data lakes, and data warehousing solutions.
Skills: Expertise in Databricks architecture and best practices for data processing. Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other recent Databricks components. Proficiency in Databricks Asset Bundles. Expertise in the design and development of migration frameworks using Databricks. Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks. Familiarity with data governance, security, and compliance in cloud environments. Solid understanding of cloud-native data solutions and services.
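
A minimal sketch of two common Databricks migration moves this kind of role performs: converting legacy Parquet data to Delta in place, and deep-cloning a table into a Unity Catalog location for a staged cut-over. Paths and table names are hypothetical, and `spark` is assumed to be the notebook-provided session.

```python
# Hypothetical migration steps run from a Databricks notebook.

# 1) Convert an existing Parquet dataset to Delta in place; the partition
#    spec must match how the Parquet data is laid out on disk.
spark.sql("""
    CONVERT TO DELTA parquet.`/mnt/legacy/warehouse/events`
    PARTITIONED BY (event_date DATE)
""")

# 2) DEEP CLONE copies both data and metadata into a fully materialized,
#    independent table in the target catalog, useful for staged cut-overs.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.analytics.events
    DEEP CLONE delta.`/mnt/legacy/warehouse/events`
""")
```

Deep clone over in-place conversion is the usual choice when the old and new platforms must run side by side during validation, since the clone can be re-synced incrementally while the source stays live.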

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies